OpenShift Container Platform 4.6

Installing on AWS

Installing OpenShift Container Platform AWS clusters

Last Updated: 2021-03-31



Legal Notice

Copyright © 2021 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license (CC-BY-SA). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners

Abstract

This document provides instructions for installing and uninstalling OpenShift Container Platform clusters on Amazon Web Services.

Table of Contents

CHAPTER 1. INSTALLING ON AWS
  1.1. CONFIGURING AN AWS ACCOUNT
    1.1.1. Configuring Route 53
      1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53
    1.1.2. AWS account limits
    1.1.3. Required AWS permissions
    1.1.4. Creating an IAM user
    1.1.5. Supported AWS regions
    1.1.6. Next steps
  1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
    1.2.1. Prerequisites
    1.2.2. Internet and Telemetry access for OpenShift Container Platform
    1.2.3. Generating an SSH private key and adding it to the agent
    1.2.4. Obtaining the installation program
    1.2.5. Creating the installation configuration file
      1.2.5.1. Installation configuration parameters
      1.2.5.2. Sample customized install-config.yaml file for AWS
    1.2.6. Deploying the cluster
    1.2.7. Installing the OpenShift CLI by downloading the binary
      1.2.7.1. Installing the OpenShift CLI on Linux
      1.2.7.2. Installing the OpenShift CLI on Windows
      1.2.7.3. Installing the OpenShift CLI on macOS
    1.2.8. Logging in to the cluster by using the CLI
    1.2.9. Logging in to the cluster by using the web console
    1.2.10. Next steps
  1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
    1.3.1. Prerequisites
    1.3.2. Internet and Telemetry access for OpenShift Container Platform
    1.3.3. Generating an SSH private key and adding it to the agent
    1.3.4. Obtaining the installation program
    1.3.5. Creating the installation configuration file
      1.3.5.1. Installation configuration parameters
      1.3.5.2. Network configuration parameters
      1.3.5.3. Sample customized install-config.yaml file for AWS
    1.3.6. Modifying advanced network configuration parameters
    1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
    1.3.8. Cluster Network Operator configuration
      1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider
      1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
      1.3.8.3. Cluster Network Operator example configuration
    1.3.9. Configuring hybrid networking with OVN-Kubernetes
    1.3.10. Deploying the cluster
    1.3.11. Installing the OpenShift CLI by downloading the binary
      1.3.11.1. Installing the OpenShift CLI on Linux
      1.3.11.2. Installing the OpenShift CLI on Windows
      1.3.11.3. Installing the OpenShift CLI on macOS
    1.3.12. Logging in to the cluster by using the CLI
    1.3.13. Logging in to the cluster by using the web console
    1.3.14. Next steps
  1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
    1.4.1. Prerequisites
    1.4.2. About using a custom VPC
      1.4.2.1. Requirements for using your VPC
      1.4.2.2. VPC validation
      1.4.2.3. Division of permissions
      1.4.2.4. Isolation between clusters
    1.4.3. Internet and Telemetry access for OpenShift Container Platform
    1.4.4. Generating an SSH private key and adding it to the agent
    1.4.5. Obtaining the installation program
    1.4.6. Creating the installation configuration file
      1.4.6.1. Installation configuration parameters
      1.4.6.2. Sample customized install-config.yaml file for AWS
      1.4.6.3. Configuring the cluster-wide proxy during installation
    1.4.7. Deploying the cluster
    1.4.8. Installing the OpenShift CLI by downloading the binary
      1.4.8.1. Installing the OpenShift CLI on Linux
      1.4.8.2. Installing the OpenShift CLI on Windows
      1.4.8.3. Installing the OpenShift CLI on macOS
    1.4.9. Logging in to the cluster by using the CLI
    1.4.10. Logging in to the cluster by using the web console
    1.4.11. Next steps
  1.5. INSTALLING A PRIVATE CLUSTER ON AWS
    1.5.1. Prerequisites
    1.5.2. Private clusters
      1.5.2.1. Private clusters in AWS
        1.5.2.1.1. Limitations
    1.5.3. About using a custom VPC
      1.5.3.1. Requirements for using your VPC
      1.5.3.2. VPC validation
      1.5.3.3. Division of permissions
      1.5.3.4. Isolation between clusters
    1.5.4. Internet and Telemetry access for OpenShift Container Platform
    1.5.5. Generating an SSH private key and adding it to the agent
    1.5.6. Obtaining the installation program
    1.5.7. Manually creating the installation configuration file
      1.5.7.1. Installation configuration parameters
      1.5.7.2. Sample customized install-config.yaml file for AWS
      1.5.7.3. Configuring the cluster-wide proxy during installation
    1.5.8. Deploying the cluster
    1.5.9. Installing the OpenShift CLI by downloading the binary
      1.5.9.1. Installing the OpenShift CLI on Linux
      1.5.9.2. Installing the OpenShift CLI on Windows
      1.5.9.3. Installing the OpenShift CLI on macOS
    1.5.10. Logging in to the cluster by using the CLI
    1.5.11. Logging in to the cluster by using the web console
    1.5.12. Next steps
  1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION
    1.6.1. Prerequisites
    1.6.2. AWS government regions
    1.6.3. Private clusters
      1.6.3.1. Private clusters in AWS
        1.6.3.1.1. Limitations
    1.6.4. About using a custom VPC
      1.6.4.1. Requirements for using your VPC
      1.6.4.2. VPC validation
      1.6.4.3. Division of permissions
      1.6.4.4. Isolation between clusters
    1.6.5. Internet and Telemetry access for OpenShift Container Platform
    1.6.6. Generating an SSH private key and adding it to the agent
    1.6.7. Obtaining the installation program
    1.6.8. Manually creating the installation configuration file
      1.6.8.1. Installation configuration parameters
      1.6.8.2. Sample customized install-config.yaml file for AWS
      1.6.8.3. AWS regions without a published RHCOS AMI
      1.6.8.4. Uploading a custom RHCOS AMI in AWS
      1.6.8.5. Configuring the cluster-wide proxy during installation
    1.6.9. Deploying the cluster
    1.6.10. Installing the OpenShift CLI by downloading the binary
      1.6.10.1. Installing the OpenShift CLI on Linux
      1.6.10.2. Installing the OpenShift CLI on Windows
      1.6.10.3. Installing the OpenShift CLI on macOS
    1.6.11. Logging in to the cluster by using the CLI
    1.6.12. Logging in to the cluster by using the web console
    1.6.13. Next steps
  1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES
    1.7.1. Prerequisites
    1.7.2. Internet and Telemetry access for OpenShift Container Platform
    1.7.3. Required AWS infrastructure components
      1.7.3.1. Cluster machines
      1.7.3.2. Certificate signing requests management
      1.7.3.3. Other infrastructure components
      1.7.3.4. Required AWS permissions
    1.7.4. Obtaining the installation program
    1.7.5. Generating an SSH private key and adding it to the agent
    1.7.6. Creating the installation files for AWS
      1.7.6.1. Optional: Creating a separate /var partition
      1.7.6.2. Creating the installation configuration file
      1.7.6.3. Configuring the cluster-wide proxy during installation
      1.7.6.4. Creating the Kubernetes manifest and Ignition config files
    1.7.7. Extracting the infrastructure name
    1.7.8. Creating a VPC in AWS
      1.7.8.1. CloudFormation template for the VPC
    1.7.9. Creating networking and load balancing components in AWS
      1.7.9.1. CloudFormation template for the network and load balancers
    1.7.10. Creating security group and roles in AWS
      1.7.10.1. CloudFormation template for security objects
    1.7.11. RHCOS AMIs for the AWS infrastructure
      1.7.11.1. AWS regions without a published RHCOS AMI
      1.7.11.2. Uploading a custom RHCOS AMI in AWS
    1.7.12. Creating the bootstrap node in AWS
      1.7.12.1. CloudFormation template for the bootstrap machine
    1.7.13. Creating the control plane machines in AWS
      1.7.13.1. CloudFormation template for control plane machines
    1.7.14. Creating the worker nodes in AWS
      1.7.14.1. CloudFormation template for worker machines
    1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
    1.7.16. Installing the OpenShift CLI by downloading the binary
      1.7.16.1. Installing the OpenShift CLI on Linux
      1.7.16.2. Installing the OpenShift CLI on Windows
      1.7.16.3. Installing the OpenShift CLI on macOS
    1.7.17. Logging in to the cluster by using the CLI
    1.7.18. Approving the certificate signing requests for your machines
    1.7.19. Initial Operator configuration
      1.7.19.1. Image registry storage configuration
        1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
        1.7.19.1.2. Configuring storage for the image registry in non-production clusters
    1.7.20. Deleting the bootstrap resources
    1.7.21. Creating the Ingress DNS Records
    1.7.22. Completing an AWS installation on user-provisioned infrastructure
    1.7.23. Logging in to the cluster by using the web console
    1.7.24. Additional resources
    1.7.25. Next steps
  1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT
    1.8.1. Prerequisites
    1.8.2. About installations in restricted networks
      1.8.2.1. Additional limits
    1.8.3. Internet and Telemetry access for OpenShift Container Platform
    1.8.4. Required AWS infrastructure components
      1.8.4.1. Cluster machines
      1.8.4.2. Certificate signing requests management
      1.8.4.3. Other infrastructure components
      1.8.4.4. Required AWS permissions
    1.8.5. Generating an SSH private key and adding it to the agent
    1.8.6. Creating the installation files for AWS
      1.8.6.1. Optional: Creating a separate /var partition
      1.8.6.2. Creating the installation configuration file
      1.8.6.3. Configuring the cluster-wide proxy during installation
      1.8.6.4. Creating the Kubernetes manifest and Ignition config files
    1.8.7. Extracting the infrastructure name
    1.8.8. Creating a VPC in AWS
      1.8.8.1. CloudFormation template for the VPC
    1.8.9. Creating networking and load balancing components in AWS
      1.8.9.1. CloudFormation template for the network and load balancers
    1.8.10. Creating security group and roles in AWS
      1.8.10.1. CloudFormation template for security objects
    1.8.11. RHCOS AMIs for the AWS infrastructure
    1.8.12. Creating the bootstrap node in AWS
      1.8.12.1. CloudFormation template for the bootstrap machine
    1.8.13. Creating the control plane machines in AWS
      1.8.13.1. CloudFormation template for control plane machines
    1.8.14. Creating the worker nodes in AWS
      1.8.14.1. CloudFormation template for worker machines
    1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
    1.8.16. Logging in to the cluster by using the CLI
    1.8.17. Approving the certificate signing requests for your machines
    1.8.18. Initial Operator configuration
      1.8.18.1. Image registry storage configuration
        1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
        1.8.18.1.2. Configuring storage for the image registry in non-production clusters
    1.8.19. Deleting the bootstrap resources
    1.8.20. Creating the Ingress DNS Records
    1.8.21. Completing an AWS installation on user-provisioned infrastructure
    1.8.22. Logging in to the cluster by using the web console
    1.8.23. Additional resources
    1.8.24. Next steps
  1.9. UNINSTALLING A CLUSTER ON AWS
    1.9.1. Removing a cluster that uses installer-provisioned infrastructure

CHAPTER 1. INSTALLING ON AWS

1.1. CONFIGURING AN AWS ACCOUNT

Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.

1.1.1. Configuring Route 53

To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.

NOTE

If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.

2. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.

3. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

4. Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.

5. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.

6. If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high-level procedure.
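The hosted zone you configure above must ultimately answer for the names the installer creates. As an illustration that is not part of the original document, this sketch derives the two well-known records an OpenShift cluster needs, for a hypothetical cluster name and base domain (both placeholders):

```shell
# Derive the DNS names the installer creates in the hosted zone.
# "mycluster" and "openshiftcorp.com" are placeholder values.
cluster_dns() {
  name=$1
  domain=$2
  echo "api.${name}.${domain}"      # API server record
  echo "*.apps.${name}.${domain}"   # wildcard Ingress record
}
cluster_dns mycluster openshiftcorp.com
```

If delegation from the parent domain is wrong, lookups for these names fail even though the hosted zone contains the records.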

1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53

If you install in either the Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses the us-gov-west-1 region for Route 53 and tagging API clients.

The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'.
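The substitution rule described above can be sketched as a small shell function. This is our own illustration of the stated behavior, not installer code, and the function name is invented:

```shell
# Sketch of the rule above: if the configured custom tagging endpoint
# mentions us-gov-east-1, the Ingress Operator uses the us-gov-west-1
# tagging endpoint instead; otherwise the configured endpoint is used.
tagging_endpoint() {
  case "$1" in
    *us-gov-east-1*) echo "https://tagging.us-gov-west-1.amazonaws.com" ;;
    *)               echo "$1" ;;
  esac
}
tagging_endpoint "https://tagging.us-gov-east-1.amazonaws.com"
```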


For more information on AWS GovCloud (US) endpoints, see Service Endpoints in the AWS documentation about GovCloud (US).

IMPORTANT

Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region.

Example Route 53 configuration

platform:
  aws:
    region: us-gov-west-1
    serviceEndpoints:
    - name: ec2
      url: https://ec2.us-gov-west-1.amazonaws.com
    - name: elasticloadbalancing
      url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com
    - name: route53
      url: https://route53.us-gov.amazonaws.com 1
    - name: tagging
      url: https://tagging.us-gov-west-1.amazonaws.com 2

1  Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions.

2  Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region.

1.1.2. AWS account limits

The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.

The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
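As a quick local sanity check (our own convenience, not part of the product), you can confirm that each custom service endpoint URL in a configuration like the sample above uses an https scheme, which the installer expects for serviceEndpoints entries:

```shell
# Copy the endpoint URLs from the sample into a scratch file and count
# how many use https. All four should match.
cat > /tmp/endpoints.txt <<'EOF'
https://ec2.us-gov-west-1.amazonaws.com
https://elasticloadbalancing.us-gov-west-1.amazonaws.com
https://route53.us-gov.amazonaws.com
https://tagging.us-gov-west-1.amazonaws.com
EOF
grep -c '^https://' /tmp/endpoints.txt
```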

Instance Limits
  Number of clusters available by default: Varies
  Default AWS limit: Varies
  Description: By default, each cluster creates the following instances:
    One bootstrap machine, which is removed after installation
    Three master nodes
    Three worker nodes
  These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.
  In most regions, the bootstrap and worker machines use m4.large machines and the master machines use m4.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead.

Elastic IPs (EIPs)
  Number of clusters available by default: 0 to 1
  Default AWS limit: 5 EIPs per account
  Description: To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT Gateway, and each NAT gateway requires a separate elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit.
  IMPORTANT: To use the us-east-1 region, you must increase the EIP limit for your account.

Virtual Private Clouds (VPCs)
  Number of clusters available by default: 5
  Default AWS limit: 5 VPCs per region
  Description: Each cluster creates its own VPC.

Elastic Load Balancing (ELB/NLB)
  Number of clusters available by default: 3
  Default AWS limit: 20 per region
  Description: By default, each cluster creates internal and external network load balancers for the master API server and a single classic elastic load balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers.

NAT Gateways
  Number of clusters available by default: 5
  Default AWS limit: 5 per availability zone
  Description: The cluster deploys one NAT gateway in each availability zone.

Elastic Network Interfaces (ENIs)
  Number of clusters available by default: At least 12
  Default AWS limit: 350 per region
  Description: The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that zone uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region. Additional ENIs are created for additional machines and elastic load balancers that are created by cluster usage and deployed workloads.

VPC Gateway
  Number of clusters available by default: 20
  Default AWS limit: 20 per account
  Description: Each cluster creates a single VPC Gateway for S3 access.

S3 buckets
  Number of clusters available by default: 99
  Default AWS limit: 100 buckets per account
  Description: Because the installation process creates a temporary bucket, and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account.

Security Groups
  Number of clusters available by default: 250
  Default AWS limit: 2,500 per account
  Description: Each cluster creates 10 distinct security groups.
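The ENI and EIP sizing rules above reduce to simple arithmetic. This sketch (the function names are ours, not from the document) reproduces the us-east-1 worked example:

```shell
# 21 base ENIs plus one ENI per availability zone; one NAT-gateway EIP
# is needed per availability zone.
eni_count() { echo $((21 + $1)); }
eip_count() { echo "$1"; }
eni_count 6   # us-east-1 has six availability zones: 27 ENIs, as in the table
eip_count 6   # six EIPs needed, which exceeds the default limit of 5 per account
```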

1.1.3. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.
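The permission lists in this section follow the standard action naming used in IAM policy documents. As a sketch of how such a list maps onto a policy file (the document notes later that a single custom policy is possible but not the preferred option), only a few representative actions are filled in and the file name is arbitrary:

```shell
# Minimal IAM policy skeleton; in practice the Action array would hold
# every permission from the lists in this section (sample shown only).
cat > /tmp/ocp-installer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "elasticloadbalancing:CreateLoadBalancer",
        "iam:PassRole",
        "route53:ChangeResourceRecordSets",
        "s3:CreateBucket",
        "tag:TagResources"
      ],
      "Resource": "*"
    }
  ]
}
EOF
grep -c '"Effect": "Allow"' /tmp/ocp-installer-policy.json
```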

Example 1.1. Required EC2 permissions for installation

  tag:TagResources
  tag:UntagResources
  ec2:AllocateAddress
  ec2:AssociateAddress
  ec2:AuthorizeSecurityGroupEgress
  ec2:AuthorizeSecurityGroupIngress
  ec2:CopyImage
  ec2:CreateNetworkInterface
  ec2:AttachNetworkInterface
  ec2:CreateSecurityGroup
  ec2:CreateTags
  ec2:CreateVolume
  ec2:DeleteSecurityGroup
  ec2:DeleteSnapshot
  ec2:DeregisterImage
  ec2:DescribeAccountAttributes
  ec2:DescribeAddresses
  ec2:DescribeAvailabilityZones
  ec2:DescribeDhcpOptions
  ec2:DescribeImages
  ec2:DescribeInstanceAttribute
  ec2:DescribeInstanceCreditSpecifications
  ec2:DescribeInstances
  ec2:DescribeInternetGateways
  ec2:DescribeKeyPairs
  ec2:DescribeNatGateways
  ec2:DescribeNetworkAcls
  ec2:DescribeNetworkInterfaces
  ec2:DescribePrefixLists
  ec2:DescribeRegions
  ec2:DescribeRouteTables
  ec2:DescribeSecurityGroups
  ec2:DescribeSubnets
  ec2:DescribeTags
  ec2:DescribeVolumes
  ec2:DescribeVpcAttribute
  ec2:DescribeVpcClassicLink
  ec2:DescribeVpcClassicLinkDnsSupport
  ec2:DescribeVpcEndpoints
  ec2:DescribeVpcs
  ec2:GetEbsDefaultKmsKeyId
  ec2:ModifyInstanceAttribute
  ec2:ModifyNetworkInterfaceAttribute
  ec2:ReleaseAddress
  ec2:RevokeSecurityGroupEgress
  ec2:RevokeSecurityGroupIngress
  ec2:RunInstances
  ec2:TerminateInstances

Example 1.2. Required permissions for creating network resources during installation

  ec2:AssociateDhcpOptions
  ec2:AssociateRouteTable
  ec2:AttachInternetGateway
  ec2:CreateDhcpOptions
  ec2:CreateInternetGateway
  ec2:CreateNatGateway
  ec2:CreateRoute
  ec2:CreateRouteTable
  ec2:CreateSubnet
  ec2:CreateVpc
  ec2:CreateVpcEndpoint
  ec2:ModifySubnetAttribute
  ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.3. Required Elastic Load Balancing permissions for installation

  elasticloadbalancing:AddTags
  elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
  elasticloadbalancing:AttachLoadBalancerToSubnets
  elasticloadbalancing:ConfigureHealthCheck
  elasticloadbalancing:CreateListener
  elasticloadbalancing:CreateLoadBalancer
  elasticloadbalancing:CreateLoadBalancerListeners
  elasticloadbalancing:CreateTargetGroup
  elasticloadbalancing:DeleteLoadBalancer
  elasticloadbalancing:DeregisterInstancesFromLoadBalancer
  elasticloadbalancing:DeregisterTargets
  elasticloadbalancing:DescribeInstanceHealth
  elasticloadbalancing:DescribeListeners
  elasticloadbalancing:DescribeLoadBalancerAttributes
  elasticloadbalancing:DescribeLoadBalancers
  elasticloadbalancing:DescribeTags
  elasticloadbalancing:DescribeTargetGroupAttributes
  elasticloadbalancing:DescribeTargetHealth
  elasticloadbalancing:ModifyLoadBalancerAttributes
  elasticloadbalancing:ModifyTargetGroup
  elasticloadbalancing:ModifyTargetGroupAttributes
  elasticloadbalancing:RegisterInstancesWithLoadBalancer
  elasticloadbalancing:RegisterTargets
  elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.4. Required IAM permissions for installation

  iam:AddRoleToInstanceProfile
  iam:CreateInstanceProfile
  iam:CreateRole
  iam:DeleteInstanceProfile
  iam:DeleteRole
  iam:DeleteRolePolicy
  iam:GetInstanceProfile
  iam:GetRole
  iam:GetRolePolicy
  iam:GetUser
  iam:ListInstanceProfilesForRole
  iam:ListRoles
  iam:ListUsers
  iam:PassRole
  iam:PutRolePolicy
  iam:RemoveRoleFromInstanceProfile
  iam:SimulatePrincipalPolicy
  iam:TagRole

Example 1.5. Required Route 53 permissions for installation

  route53:ChangeResourceRecordSets
  route53:ChangeTagsForResource
  route53:CreateHostedZone
  route53:DeleteHostedZone
  route53:GetChange
  route53:GetHostedZone
  route53:ListHostedZones
  route53:ListHostedZonesByName
  route53:ListResourceRecordSets
  route53:ListTagsForResource
  route53:UpdateHostedZoneComment

Example 1.6. Required S3 permissions for installation

  s3:CreateBucket
  s3:DeleteBucket
  s3:GetAccelerateConfiguration
  s3:GetBucketAcl
  s3:GetBucketCors
  s3:GetBucketLocation
  s3:GetBucketLogging
  s3:GetBucketObjectLockConfiguration
  s3:GetBucketReplication
  s3:GetBucketRequestPayment
  s3:GetBucketTagging
  s3:GetBucketVersioning
  s3:GetBucketWebsite
  s3:GetEncryptionConfiguration
  s3:GetLifecycleConfiguration
  s3:GetReplicationConfiguration
  s3:ListBucket
  s3:PutBucketAcl
  s3:PutBucketTagging
  s3:PutEncryptionConfiguration

Example 1.7. S3 permissions that cluster Operators require

  s3:DeleteObject
  s3:GetObject
  s3:GetObjectAcl
  s3:GetObjectTagging
  s3:GetObjectVersion
  s3:PutObject
  s3:PutObjectAcl
  s3:PutObjectTagging

Example 1.8. Required permissions to delete base cluster resources

  autoscaling:DescribeAutoScalingGroups
  ec2:DeleteNetworkInterface
  ec2:DeleteVolume
  elasticloadbalancing:DeleteTargetGroup
  elasticloadbalancing:DescribeTargetGroups
  iam:DeleteAccessKey
  iam:DeleteUser
  iam:ListAttachedRolePolicies
  iam:ListInstanceProfiles
  iam:ListRolePolicies
  iam:ListUserPolicies
  s3:DeleteObject
  s3:ListBucketVersions
  tag:GetResources

Example 1.9. Required permissions to delete network resources

  ec2:DeleteDhcpOptions
  ec2:DeleteInternetGateway
  ec2:DeleteNatGateway
  ec2:DeleteRoute
  ec2:DeleteRouteTable
  ec2:DeleteSubnet
  ec2:DeleteVpc
  ec2:DeleteVpcEndpoints
  ec2:DetachInternetGateway
  ec2:DisassociateRouteTable
  ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.10. Additional IAM and S3 permissions that are required to create manifests

  iam:CreateAccessKey
  iam:CreateUser
  iam:DeleteAccessKey
  iam:DeleteUser
  iam:DeleteUserPolicy
  iam:GetUserPolicy
  iam:ListAccessKeys
  iam:PutUserPolicy
  iam:TagUser
  s3:PutBucketPublicAccessBlock
  s3:GetBucketPublicAccessBlock
  s3:PutLifecycleConfiguration
  s3:HeadBucket
  s3:ListBucketMultipartUploads
  s3:AbortMultipartUpload

Example 1.11. Optional permission for quota checks for installation

  servicequotas:ListAWSDefaultServiceQuotas

1.1.4. Creating an IAM user

Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly privileged account, and it is recommended to use it for only initial account and billing configuration, creating an initial set of users, and securing the account.

Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:

Procedure

1. Specify the IAM user name and select Programmatic access.

2. Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

NOTE

While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.

3. Optional: Add metadata to the user by attaching tags.

4. Confirm that the user name that you specified is granted the AdministratorAccess policy.

5. Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.

IMPORTANT

You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
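The recorded key pair is typically supplied to tooling through the standard AWS environment variables or an AWS profile on your local machine. A minimal sketch with placeholder values (substitute the values you recorded; never commit real credentials):

```shell
# Placeholder credentials for the IAM user created above.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
echo "AWS credentials exported for access key: $AWS_ACCESS_KEY_ID"
```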

Additional resources

See Manually creating IAM for AWS for steps to set the Cloud Credential Operator (CCO) to manual mode prior to installation. Use this mode in environments where the cloud identity and access management (IAM) APIs are not reachable, or if you prefer not to store an administrator-level credential secret in the cluster kube-system project.

1.1.5. Supported AWS regions

You can deploy an OpenShift Container Platform cluster to the following public regions:


af-south-1 (Cape Town)

ap-east-1 (Hong Kong)

ap-northeast-1 (Tokyo)

ap-northeast-2 (Seoul)

ap-south-1 (Mumbai)

ap-southeast-1 (Singapore)

ap-southeast-2 (Sydney)

ca-central-1 (Central)

eu-central-1 (Frankfurt)

eu-north-1 (Stockholm)

eu-south-1 (Milan)

eu-west-1 (Ireland)

eu-west-2 (London)

eu-west-3 (Paris)

me-south-1 (Bahrain)

sa-east-1 (São Paulo)

us-east-1 (N. Virginia)

us-east-2 (Ohio)

us-west-1 (N. California)

us-west-2 (Oregon)

The following AWS GovCloud regions are supported:

us-gov-west-1

us-gov-east-1

1.1.6. Next steps

Install an OpenShift Container Platform cluster:

Quickly install a cluster with default options on installer-provisioned infrastructure

Install a cluster with cloud customizations on installer-provisioned infrastructure

Install a cluster with network customizations on installer-provisioned infrastructure

Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates


1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.2.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.2.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.2.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874


3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.2.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.


4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.2.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

$ openshift-install create install-config --dir=<installation_directory>

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.


iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.2.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
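Because the installer silently ignores unrecognized field names, a quick client-side check before running openshift-install can catch typos. The sketch below is illustrative only: the known-keys set is an assumption based on the parameters listed in the tables that follow, not an exhaustive or authoritative schema.

```python
# Sketch only: compare the top-level keys you plan to put in
# install-config.yaml against a (non-exhaustive) list of known names.
# KNOWN_TOP_LEVEL is assembled from this guide's tables, not a real schema.
KNOWN_TOP_LEVEL = {
    "apiVersion", "baseDomain", "metadata", "platform", "pullSecret",
    "compute", "controlPlane", "networking", "publish", "sshKey",
    "fips", "credentialsMode", "additionalTrustBundle", "imageContentSources",
}

def unknown_fields(provided):
    """Return any top-level field names that are not recognized."""
    return sorted(set(provided) - KNOWN_TOP_LEVEL)

# "platfrom" is a deliberate typo that openshift-install would ignore
# without reporting an error.
print(unknown_fields(["apiVersion", "baseDomain", "platfrom"]))
```

Running the check before installation costs nothing and avoids a silently mis-configured cluster.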

Table 1.1. Required parameters

Parameter Description Values

apiVersion: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name: The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
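The relationship between baseDomain and metadata.name described in the table can be sketched as follows; the values here are hypothetical examples, not taken from a real cluster.

```python
# Illustrative sketch of how the cluster's full DNS name is derived from
# metadata.name and baseDomain. The values are invented examples.
base_domain = "example.com"
cluster_name = "dev"

# The full DNS name uses the <metadata.name>.<baseDomain> format; all
# cluster DNS records are subdomains of it.
full_dns_name = f"{cluster_name}.{base_domain}"
api_record = f"api.{full_dns_name}"

print(full_dns_name)  # dev.example.com
print(api_record)     # api.dev.example.com
```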


Table 1.2. Optional parameters

Parameter Description Values

additionalTrustBundle: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute: The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

compute.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name: Required if you use compute. The name of the machine pool.

worker

compute.platform: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas: The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane: The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name: Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas: The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources: Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors: Specify one or more repositories that may also contain the same images.

Array of strings

networking: The configuration for the pod network provider in the cluster.

Object


networking.clusterNetwork: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects

networking.clusterNetwork.cidr: Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer
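The arithmetic behind clusterNetwork.cidr and hostPrefix can be sketched with the documented defaults (10.128.0.0/14 with a host prefix of 23). This is back-of-the-envelope math for capacity planning, not a validation of a live cluster.

```python
import ipaddress

# Documented defaults: pod pool 10.128.0.0/14, each node gets a /23 slice.
cluster_cidr = ipaddress.ip_network("10.128.0.0/14")
host_prefix = 23

# Addresses available to each node's /23 block.
addresses_per_node = 2 ** (32 - host_prefix)

# How many /23 node blocks fit inside the /14 pool.
max_node_blocks = 2 ** (host_prefix - cluster_cidr.prefixlen)

print(addresses_per_node)  # 512
print(max_node_blocks)     # 512
```

A smaller hostPrefix gives each node more pod addresses but reduces the number of nodes the pool can serve, so tune both values together.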

networking.machineNetwork: The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType: The type of network to install. The default is OpenShiftSDN.

String


networking.serviceNetwork: The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey: The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>


Table 1.3. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID: The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops: The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.


compute.platform.aws.rootVolume.size: The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type: The type of the root volume.

Valid AWS EBS volume type, such as io1.

compute.platform.aws.type: The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones: The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region: The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID: The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type: The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones: The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region: The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.


platform.aws.amiID: The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name: The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url: The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags: A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets: If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.

1.2.5.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA 16


1 10 11 14: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16: You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.2.6. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> --log-level=info

For <installation_directory>, specify the location of your customized install-config.yaml file.

To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
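When scripting around the installer, the kubeconfig path can be pulled out of the "Install complete" message. The sketch below assumes a shortened, hypothetical sample of that log output; the exact message format is not guaranteed to be stable across releases.

```python
import re

# Hypothetical, shortened sample of the installer's completion output.
log = (
    "INFO Install complete!\n"
    "INFO To access the cluster as the system:admin user when using 'oc', run "
    "'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\n"
)

# Grab everything after KUBECONFIG= up to the closing quote.
match = re.search(r"KUBECONFIG=(\S+?)'", log)
kubeconfig_path = match.group(1) if match else None
print(kubeconfig_path)  # /home/myuser/install_dir/auth/kubeconfig
```

In practice, reading <installation_directory>/auth/kubeconfig directly is simpler and does not depend on log formatting.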


NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.2.7. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.2.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.


3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.2.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.2.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.


4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.2.8. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.2.9. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.


Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.2.10. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.


1.3.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.13.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.13.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
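The three steps above can be sketched end to end; a throwaway temporary directory stands in for <path>/<file_name>, and the agent steps are guarded so the sketch also runs on machines without ssh-agent:

```shell
#!/bin/sh
# Sketch of steps 1-3 with an illustrative key path.
set -e
keydir=$(mktemp -d)

# Step 1: create a password-less ed25519 key (-N '' sets an empty passphrase)
ssh-keygen -t ed25519 -N '' -f "$keydir/id_ed25519" -q

# Steps 2-3: start ssh-agent and load the key, when an agent is available
if command -v ssh-agent > /dev/null; then
  eval "$(ssh-agent -s)" > /dev/null
  ssh-add "$keydir/id_ed25519" 2> /dev/null
  ssh-add -l
fi
```

The public half of the key, $keydir/id_ed25519.pub, is what you later paste at the installation program's SSH prompt.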


Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.13.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.13.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
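Step 4 can be exercised against a stand-in archive when the real download is not at hand; the file contents below are illustrative, and the real openshift-install-linux.tar.gz comes from the Cluster Manager site:

```shell
#!/bin/sh
# Demonstrates the extraction step with a stand-in archive.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Build a stand-in archive containing a fake openshift-install binary
printf 'stand-in binary\n' > openshift-install
tar czf openshift-install-linux.tar.gz openshift-install
rm openshift-install

# Step 4: extract the installation program into the current directory
tar xvf openshift-install-linux.tar.gz
```

With the genuine archive, the extracted ./openshift-install binary is then run from this same directory.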


Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

$ openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.


3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
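A minimal sketch of that backup step, with illustrative paths and file contents:

```shell
#!/bin/sh
# Back up install-config.yaml before running the installer, which
# consumes the file. Directory and contents are illustrative.
set -e
dir=$(mktemp -d)
printf 'apiVersion: v1\nbaseDomain: example.com\n' > "$dir/install-config.yaml"

# Keep a pristine copy; restoring it lets you rerun the installer or
# stamp out additional clusters from the same configuration.
cp "$dir/install-config.yaml" "$dir/install-config.yaml.bak"
cmp -s "$dir/install-config.yaml" "$dir/install-config.yaml.bak" && echo "backup ok"
```

Keeping the copy outside the installation directory avoids it being consumed or overwritten by later installer runs.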

1.13.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
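Because misspelled field names are silently ignored, one crude guard is to compare your file's top-level keys against the documented ones. This sketch is illustrative; the key list is abridged from the parameter tables in this section, and the sample file deliberately misspells pullSecret:

```shell
#!/bin/sh
# Flags top-level install-config.yaml keys that are not in a known-good list.
set -e
cfg=$(mktemp)
printf 'apiVersion: v1\nbaseDomain: example.com\npullsecret: oops\n' > "$cfg"

expected='apiVersion|baseDomain|metadata|controlPlane|compute|networking|platform|pullSecret|sshKey|fips|publish|credentialsMode|additionalTrustBundle|imageContentSources'
# Print any top-level key that does not exactly match an expected name
grep -E '^[A-Za-z]+:' "$cfg" | cut -d: -f1 | grep -Evx "($expected)"
```

Here the check prints the stray key pullsecret, which openshift-install itself would ignore without any error.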

Table 1.4. Required parameters

apiVersion
The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
String

baseDomain
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
A fully-qualified domain or subdomain name, such as example.com.
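The <metadata.name>.<baseDomain> composition can be illustrated directly; the values here match the sample install-config.yaml shown later in this section:

```shell
#!/bin/sh
# The full cluster DNS name is <metadata.name>.<baseDomain>, per Table 1.4.
base_domain="example.com"
cluster_name="test-cluster"
echo "${cluster_name}.${base_domain}"   # test-cluster.example.com
```

Cluster routes are then subdomains of this name, for example console-openshift-console.apps.test-cluster.example.com.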


metadata
Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Object

metadata.name
The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.
String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
Object

pullSecret
Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
For example:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

Table 1.5. Optional parameters

additionalTrustBundle
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
String


compute
The configuration for the machines that comprise the compute nodes.
Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
String

compute.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name
Required if you use compute. The name of the machine pool.
worker

compute.platform
Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
The number of compute machines, which are also known as worker machines, to provision.
A positive integer greater than or equal to 2. The default value is 3.

controlPlane
The configuration for the machines that comprise the control plane.
Array of MachinePool objects. For details, see the following "Machine-pool" table.


controlPlane.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
String

controlPlane.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name
Required if you use controlPlane. The name of the machine pool.
master

controlPlane.platform
Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
The number of control plane machines to provision.
The only supported value is 3, which is the default value.


credentialsMode
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("")

fips
Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
false or true

imageContentSources
Sources and repositories for the release-image content.
Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
String

imageContentSources.mirrors
Specify one or more repositories that may also contain the same images.
Array of strings

networking
The configuration for the pod network provider in the cluster.
Object

networking.clusterNetwork
The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
Array of objects


networking.clusterNetwork.cidr
Required if you use networking.clusterNetwork. The IP block address pool.
IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
Integer

networking.machineNetwork
The IP address pools for machines.
Array of objects

networking.machineNetwork.cidr
Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
The type of network to install. The default is OpenShiftSDN.
String

networking.serviceNetwork
The IP address pools for services. The default is 172.30.0.0/16.
Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.
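The hostPrefix arithmetic above can be checked directly. A prefix of p leaves 2^(32 - p) raw addresses per node; subtracting the network and broadcast addresses gives the usable pod IP count, which is where the /23 -> 510 figure used elsewhere in this chapter comes from:

```shell
#!/bin/sh
# Usable pod IPs per node for a given hostPrefix: 2^(32 - prefix) - 2.
pod_ips() {
  p=$1
  n=1
  i=32
  # Multiply by two once per bit of host space
  while [ "$i" -gt "$p" ]; do
    n=$((n * 2))
    i=$((i - 1))
  done
  echo $((n - 2))
}
pod_ips 23   # 510
pod_ips 24   # 254
```

Smaller prefixes give each node more pod addresses but consume the clusterNetwork CIDR faster.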


publish
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

Table 1.6. Optional AWS parameters

compute.platform.aws.amiID
The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.
Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops
The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.
Integer, for example 4000.

compute.platform.aws.rootVolume.size
The size in GiB of the root volume.
Integer, for example 500.

compute.platform.aws.rootVolume.type
The type of the root volume.
Valid AWS EBS volume type, such as io1.

compute.platform.aws.type
The EC2 instance type for the compute machines.
Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones
The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.
A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
The AWS region that the installation program creates compute resources in.
Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID
The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.
Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type
The EC2 instance type for the control plane machines.
Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
The availability zones where the installation program creates machines for the control plane machine pool.
A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
The AWS region that the installation program creates control plane resources in.
Valid AWS region, such as us-east-1.

platform.aws.amiID
The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.
Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name
The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.
Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url
The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.
Valid AWS service endpoint URL.

platform.aws.userTags
A map of keys and values that the installation program adds as tags to all resources that it creates.
Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets
If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.
Valid subnet IDs.

1.13.5.2. Network configuration parameters

You can modify your cluster network configuration parameters in the install-config.yaml configuration file. The following table describes the parameters.

NOTE

You cannot modify these parameters in the install-config.yaml file after installation.

Table 1.7. Required network parameters

networking.networkType
The default Container Network Interface (CNI) network provider plug-in to deploy.
Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.


networking.clusterNetwork[].cidr
A block of IP addresses from which pod IP addresses are allocated. The OpenShiftSDN network plug-in supports multiple cluster networks. The address blocks for multiple cluster networks must not overlap. Select address pools large enough to fit your anticipated workload.
An IP address allocation in CIDR format. The default value is 10.128.0.0/14.

networking.clusterNetwork[].hostPrefix
The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, allowing for 510 (2^(32 - 23) - 2) pod IP addresses.
A subnet prefix. The default value is 23.

networking.serviceNetwork[]
A block of IP addresses for services. OpenShiftSDN allows only one serviceNetwork block. The address block must not overlap with any other network block.
An IP address allocation in CIDR format. The default value is 172.30.0.0/16.

networking.machineNetwork[].cidr
A block of IP addresses assigned to nodes created by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. Multiple CIDR ranges may be specified.
An IP address allocation in CIDR format. The default value is 10.0.0.0/16.

1.13.5.3. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 12
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17

1 10 12 15 Required. The installation program prompts you for this value.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 11 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.13.6. Modifying advanced network configuration parameters

You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings, and by specifying a different mode for the openshiftSDNConfig parameter.

IMPORTANT

Modifying the OpenShift Container Platform manifest files directly is not supported.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

1. Change to the directory that contains the installation program and create the manifests:

$ openshift-install create manifests --dir=<installation_directory> 1

1 For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1 For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

Example output

$ ls <installation_directory>/manifests/cluster-network-*

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

3. Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec: 1
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789

1 The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the CR.

The CNO provides default values for the parameters in the CR, so you must specify only the parameters that you want to change.
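The touch-then-edit steps can also be collapsed into a single heredoc that writes the CR; the directory name here is an illustrative stand-in for <installation_directory>:

```shell
#!/bin/sh
# Writes a cluster-network-03-config.yml manifest in one step.
set -e
inst_dir=$(mktemp -d)
mkdir -p "$inst_dir/manifests"

cat > "$inst_dir/manifests/cluster-network-03-config.yml" <<'EOF'
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mtu: 1450
      vxlanPort: 4789
EOF

# Sanity check: the network type appears exactly once
grep -c 'OpenShiftSDN' "$inst_dir/manifests/cluster-network-03-config.yml"   # 1
```

Only the parameters you want to change need to appear under openshiftSDNConfig; the CNO supplies defaults for the rest.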


4. Save the cluster-network-03-config.yml file and quit the text editor.

5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.

1.13.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster

You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

Create an Ingress Controller backed by an AWS NLB on a new cluster.

1. Change to the directory that contains the installation program and create the manifests:

$ openshift-install create manifests --dir=<installation_directory> 1

1 For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1

1 For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

Example output

$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

cluster-ingress-default-ingresscontroller.yaml

3. Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a CR that describes the Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService

4. Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.

5. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.

1.13.8. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a CR object that is named cluster. The CR specifies the parameters for the Network API in the operator.openshift.io API group.

You can specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameter values for the defaultNetwork parameter in the CNO CR. The following CR displays the default configuration for the CNO and explains both the parameters you can configure and the valid parameter values:

Cluster Network Operator CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork: 1
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork: 2
  - 172.30.0.0/16
  defaultNetwork: 3
  kubeProxyConfig: 4
    iptablesSyncPeriod: 30s 5
    proxyArguments:
      iptables-min-sync-period: 6
      - 0s

1 2 Specified in the install-config.yaml file.

3 Configures the default Container Network Interface (CNI) network provider for the cluster network.

4 The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Cluster Network Operator applies the displayed default parameter values.

5 The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

NOTE

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

6 The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package.

1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider

The following YAML object describes the configuration parameters for the OpenShift SDN default Container Network Interface (CNI) network provider:

defaultNetwork:
  type: OpenShiftSDN 1
  openshiftSDNConfig: 2
    mode: NetworkPolicy 3
    mtu: 1450 4
    vxlanPort: 4789 5

1  Specified in the install-config.yaml file.

2  Specify only if you want to override part of the OpenShift SDN configuration.

3  Configures the network isolation mode for OpenShift SDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.

4  The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

5  The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
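The 50-byte adjustment described above is simple arithmetic; the following sketch (values taken from the mixed-MTU example in the text) shows the calculation:

```shell
# VXLAN encapsulation adds 50 bytes of overhead, so the overlay MTU must be
# set to 50 less than the lowest node MTU (1500 in the doc's mixed-MTU example).
lowest_node_mtu=1500
vxlan_overhead=50
echo $(( lowest_node_mtu - vxlan_overhead ))   # prints 1450
```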

1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider

The following YAML object describes the configuration parameters for the OVN-Kubernetes default CNI network provider:

defaultNetwork:
  type: OVNKubernetes 1
  ovnKubernetesConfig: 2
    mtu: 1400 3
    genevePort: 6081 4

1  Specified in the install-config.yaml file.

2  Specify only if you want to override part of the OVN-Kubernetes configuration.

3  The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

4  The UDP port for the Geneve overlay network.

1.3.8.3. Cluster Network Operator example configuration

A complete CR object for the CNO is displayed in the following example:

Cluster Network Operator example CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s

1.3.9. Configuring hybrid networking with OVN-Kubernetes

You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.

IMPORTANT

You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.

Procedure

1. Create the manifests from the directory that contains the installation program:

$ openshift-install create manifests --dir=<installation_directory> 1

1  For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1  For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests/ directory, as shown:

$ ls -1 <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml


3. Open the cluster-network-03-config.yml file and configure OVN-Kubernetes with hybrid networking. For example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec: 1
  clusterNetwork: 2
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes 3
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 4
        - cidr: 10.132.0.0/14
          hostPrefix: 23
status: {}

1  The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the custom resource.

2  Specify the CIDR configuration used when adding nodes.

3  Specify OVNKubernetes as the Container Network Interface (CNI) cluster network provider.

4  Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.

4. Optional: Back up the <installation_directory>/manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
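The no-overlap requirement for hybridClusterNetwork can be checked mechanically. The sketch below uses the CIDR values from the example manifest; it is an illustration, not installer code:

```shell
# Convert a dotted-quad IPv4 address to an integer, then compare address ranges.
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

c_start=$(ip2int 10.128.0.0); c_end=$(( c_start + (1 << (32 - 14)) - 1 ))   # clusterNetwork 10.128.0.0/14
h_start=$(ip2int 10.132.0.0); h_end=$(( h_start + (1 << (32 - 14)) - 1 ))   # hybridClusterNetwork 10.132.0.0/14

if [ "$c_start" -le "$h_end" ] && [ "$h_start" -le "$c_end" ]; then
  echo "CIDRs overlap"
else
  echo "no overlap"   # 10.132.0.0/14 begins right after 10.131.255.255
fi
```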

1.3.10. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.



Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1  For <installation_directory>, specify the location of your customized install-config.yaml file.

2  To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.


IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.3.11. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.11.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.3.11.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.3.11.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>


1.3.12. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1  For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.3.13. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password


NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console     console-openshift-console.apps.<cluster_name>.<base_domain>     console     https     reencrypt/Redirect     None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.3.14. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC

In OpenShift Container Platform version 4.6, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.4.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.


IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.4.2. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.4.2.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.


The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
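The three endpoint names follow the same <service>.<region>.amazonaws.com pattern, so for a given region they can be generated as follows (the region shown is only an example):

```shell
region=us-west-2   # substitute your cluster's region
for svc in ec2 elasticloadbalancing s3; do
  echo "${svc}.${region}.amazonaws.com"
done
```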

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type:
  AWS::EC2::VPC
  AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type:
  AWS::EC2::Subnet
  AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.


Component: Internet gateway
AWS type:
  AWS::EC2::InternetGateway
  AWS::EC2::VPCGatewayAttachment
  AWS::EC2::RouteTable
  AWS::EC2::Route
  AWS::EC2::SubnetRouteTableAssociation
  AWS::EC2::NatGateway
  AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type:
  AWS::EC2::NetworkAcl
  AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

  Port          Reason
  80            Inbound HTTP traffic
  443           Inbound HTTPS traffic
  22            Inbound SSH traffic
  1024 - 65535  Inbound ephemeral traffic
  0 - 65535     Outbound ephemeral traffic

Component: Private subnets
AWS type:
  AWS::EC2::Subnet
  AWS::EC2::RouteTable
  AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.4.2.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:


All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
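The subnet-CIDR check in the validation list above amounts to a range-containment test. This sketch, with assumed example values, mirrors the idea (it is not the installer's actual validation code):

```shell
# Convert a dotted-quad IPv4 address to an integer for range comparison.
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

machine_start=$(ip2int 10.0.0.0); machine_end=$(( machine_start + (1 << (32 - 16)) - 1 ))  # machine CIDR 10.0.0.0/16 (assumed)
subnet_start=$(ip2int 10.0.1.0);  subnet_end=$((  subnet_start + (1 << (32 - 24)) - 1 ))   # subnet 10.0.1.0/24 (assumed)

if [ "$subnet_start" -ge "$machine_start" ] && [ "$subnet_end" -le "$machine_end" ]; then
  echo "subnet belongs to the machine CIDR"
else
  echo "subnet is outside the machine CIDR"
fi
```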

1.4.2.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.4.2.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.4.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).



Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.4.4. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches, such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1


1  Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1  Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.4.5. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.



IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.4.6. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

$ openshift-install create install-config --dir=<installation_directory> 1

1  For <installation_directory>, specify the directory name to store the files that the installation program creates.



IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
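For orientation, the prompts above produce a file shaped roughly like the following sketch. Every value shown is a placeholder to replace with your own; the domain, cluster name, and region are illustrative, and the pull secret and SSH key are truncated:

```yaml
apiVersion: v1
baseDomain: example.com            # base domain of your Route 53 zone (placeholder)
metadata:
  name: mycluster                  # cluster name (placeholder)
platform:
  aws:
    region: us-west-2              # target AWS region (placeholder)
pullSecret: '{"auths": ...}'       # paste the pull secret you downloaded
sshKey: |
  ssh-ed25519 AAAA...              # optional: your SSH public key
```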

1.4.6.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.


NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.8. Required parameters

Parameter: apiVersion
Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
Values: String

Parameter: baseDomain
Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
Values: A fully-qualified domain or subdomain name, such as example.com.

Parameter: metadata
Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Values: Object

Parameter: metadata.name
Description: The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.


platform: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

Table 1.9. Optional parameters

Parameter Description Values

additionalTrustBundle: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute: The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects. For details, see the following Machine-pool table.

compute.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

Example pullSecret value:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name: Required if you use compute. The name of the machine pool.

worker

compute.platform: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas: The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane: The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String


controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name: Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas: The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources: Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors: Specify one or more repositories that may also contain the same images.

Array of strings

networking: The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr: Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork: The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType: The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork: The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey: The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
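The CIDR notation and hostPrefix arithmetic used throughout the networking rows above can be checked with Python's standard ipaddress module. This is an illustrative sketch, not part of the installer:

```python
import ipaddress

# 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.
machine_net = ipaddress.ip_network("10.0.0.0/16")
print(machine_net[0], machine_net[-1], machine_net.num_addresses)

def addresses_per_node(host_prefix: int) -> int:
    # A hostPrefix of 24 allocates 2^(32-24) = 2^8 = 256 addresses per node,
    # as in the table's example; the default of 23 allocates 512.
    return 2 ** (32 - host_prefix)

print(addresses_per_node(23), addresses_per_node(24))
```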


Table 1.10. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID: The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops: The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size: The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type: The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type: The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones: The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region: The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID: The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type: The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones: The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region: The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID: The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name: The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for the EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url: The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags: A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets: If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.

1.4.6.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA 17

1 10 11 15: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.


3 7: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12: If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17: You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.4.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites



You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2: A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.
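The noProxy matching rules described in the callouts above (a leading . matches subdomains only; * bypasses the proxy for all destinations) can be sketched as follows. This is an illustrative approximation, not the matcher the cluster actually uses:

```python
def bypasses_proxy(host, no_proxy):
    """Rough sketch of noProxy domain matching."""
    for entry in no_proxy:
        if entry == "*":
            return True  # bypass the proxy for all destinations
        if entry.startswith("."):
            # .y.com matches x.y.com, but not y.com itself
            if host.endswith(entry):
                return True
        elif host == entry:
            return True
    return False

print(bypasses_proxy("x.y.com", [".y.com"]))  # True
print(bypasses_proxy("y.com", [".y.com"]))    # False
```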

1.4.7. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

1: For <installation_directory>, specify the location of your customized install-config.yaml file.

2: To view different installation details, specify warn, debug, or error instead of info.

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2


NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

1.4.8. Installing the OpenShift CLI by downloading the binary

CHAPTER 1 INSTALLING ON AWS

89

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.4.8.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive.

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command.

1.4.8.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

OpenShift Container Platform 46 Installing on AWS

90

After you install the CLI, it is available using the oc command.

1.4.8.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command.

1.4.9. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

C:\> oc <command>

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

CHAPTER 1 INSTALLING ON AWS

91

Example output

1.4.10. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

$ oc whoami

system:admin

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

OpenShift Container Platform 46 Installing on AWS

92

1.4.11. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.5. INSTALLING A PRIVATE CLUSTER ON AWS

In OpenShift Container Platform version 4.6, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.5.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.5.2. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, to the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.5.2.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the Internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster, and all cluster machines are placed in the private subnets that you specify.
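Putting these pieces together, the relevant fragment of a private cluster's install-config.yaml combines publish: Internal with your existing private subnets. This is an illustrative sketch; the region and subnet IDs below are placeholders:

```yaml
publish: Internal
platform:
  aws:
    region: us-west-2
    subnets:
    - subnet-0abc1234
    - subnet-0def5678
```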

1.5.2.1.1. Limitations

The ability to add public functionality to a private cluster is limited:

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the Internet on 6443 (the Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.

1.5.3. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.5.3.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways


NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC you must correctly configure it and its subnets for the installation program andthe cluster to use The installation program cannot subdivide network ranges for the cluster to use setroute tables for the subnets or set VPC options like DHCP so you must do so before you install thecluster

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
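The endpoint names above follow a single per-region pattern, which can be sketched as follows (the region value is a hypothetical example):

```python
# Build the per-region endpoint DNS names listed above.
# The region value is a hypothetical example.
region = "us-west-2"

services = ("ec2", "elasticloadbalancing", "s3")
endpoints = [f"{service}.{region}.amazonaws.com" for service in services]

for endpoint in endpoints:
    print(endpoint)
```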

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
  Port 80: Inbound HTTP traffic
  Port 443: Inbound HTTPS traffic
  Port 22: Inbound SSH traffic
  Ports 1024 - 65535: Inbound ephemeral traffic
  Ports 0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.5.3.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
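The CIDR-containment part of these checks can be sketched with the standard library. This is an illustration only, not the installer's actual validation code, and the CIDR values are hypothetical:

```python
import ipaddress

# Hypothetical machine CIDR and subnet CIDRs for illustration.
machine_cidr = ipaddress.ip_network("10.0.0.0/16")
subnet_cidrs = ["10.0.0.0/19", "10.0.32.0/19", "10.0.64.0/19"]

for cidr in subnet_cidrs:
    subnet = ipaddress.ip_network(cidr)
    # Each subnet CIDR must belong to the machine CIDR that you specified.
    if not subnet.subnet_of(machine_cidr):
        raise ValueError(f"subnet {subnet} is outside machine CIDR {machine_cidr}")

print("all subnets belong to the machine CIDR")
```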

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.5.3.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.5.3.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.5.4. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.5.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.5.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.5.7. Manually creating the installation configuration file

For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.5.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.11. Required parameters

Parameter | Description | Values

apiVersion | The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions. | String

baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com.

metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object

metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform. | Object

pullSecret | Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | For example:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
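The <metadata.name>.<baseDomain> format described for baseDomain above can be sketched as follows (the values are hypothetical examples):

```python
# Illustration of the full cluster DNS name format; values are hypothetical.
base_domain = "example.com"   # baseDomain
cluster_name = "dev"          # metadata.name

full_dns_name = f"{cluster_name}.{base_domain}"
print(full_dns_name)  # dev.example.com
```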


Table 1.12. Optional parameters

Parameter | Description | Values

additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String

compute | The configuration for the machines that comprise the compute nodes. | Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String

compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled

compute.name | Required if you use compute. The name of the machine pool. | worker

compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3.

controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String

controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled

controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master

controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value.

credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. | Mint, Passthrough, Manual, or an empty string ("").

fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. | false or true

imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String

imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings

networking | The configuration for the pod network provider in the cluster. | Object

networking.clusterNetwork | The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23. | Array of objects

networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. The IP block address pool. | IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix | Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node. | Integer

networking.machineNetwork | The IP address pools for machines. | Array of objects

networking.machineNetwork.cidr | Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24. | IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType | The type of network to install. The default is OpenShiftSDN. | String

networking.serviceNetwork | The IP address pools for services. The default is 172.30.0.0/16. | Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey | The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3>
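The hostPrefix arithmetic described for networking.clusterNetwork.hostPrefix in the table above generalizes as follows (a sketch; the prefix values are the examples from the table):

```python
# Addresses allocated to each node for a given hostPrefix in an IPv4 (/32) space.
def addresses_per_node(host_prefix: int) -> int:
    return 2 ** (32 - host_prefix)

print(addresses_per_node(24))  # 256, matching the example in the table
print(addresses_per_node(23))  # 512, for the default host prefix of /23
```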

Table 1.13. Optional AWS parameters

Parameter | Description | Values

compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000.

compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500.

compute.platform.aws.rootVolume.type | The instance type of the root volume. | Valid AWS EBS instance type, such as io1.

compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1.

platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL.

platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. | Valid subnet IDs.

1.5.7.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
      - name: ec2
        url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
publish: Internal 18

1 10 11 15 Required. The installation program prompts you for this value.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.

1.5.7.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates.
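The subdomain rule for noProxy described above can be sketched as follows. This is only an illustration of the stated rule, not the cluster's actual matching code; entries without a leading dot are treated here as exact-host matches for simplicity:

```python
# Illustration of the noProxy rule: ".y.com" matches "x.y.com" but not "y.com";
# "*" bypasses the proxy for all destinations. Not the cluster's actual code.
def bypasses_proxy(host: str, no_proxy_entry: str) -> bool:
    if no_proxy_entry == "*":
        return True  # bypass the proxy for all destinations
    if no_proxy_entry.startswith("."):
        return host.endswith(no_proxy_entry)  # subdomains only
    return host == no_proxy_entry  # simplified exact-host match

print(bypasses_proxy("x.y.com", ".y.com"))  # True
print(bypasses_proxy("y.com", ".y.com"))    # False
```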

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported and no additional proxies can becreated
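After installation you can confirm what was created by inspecting that Proxy object. The following is a sketch only: it assumes the oc CLI is installed and a kubeconfig for the cluster is exported, and it degrades to a message otherwise.

```shell
# Sketch: show the cluster-wide proxy configuration, if a cluster is reachable.
if command -v oc >/dev/null 2>&1; then
  # "cluster" is the only supported Proxy object name.
  oc get proxy/cluster -o yaml || echo "No cluster reachable from this host"
else
  echo "oc CLI not found; install it first"
fi
```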

1.5.8. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

CHAPTER 1. INSTALLING ON AWS


When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.5.9. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.5.9.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run
    export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: kubeadmin, password: 4vYBz-Ee6gm-ymBZj-Wt5AL
INFO Time elapsed: 36m22s


2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:

1.5.9.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command:

1.5.9.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.5.10. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.5.11. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin


You completed a cluster installation, and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.5.12. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the government region, modify parameters in the install-config.yaml file before you install the cluster.

1.6.1. Prerequisites

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console     console-openshift-console.apps.<cluster_name>.<base_domain>     console     https     reencrypt/Redirect     None


Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.6.2. AWS government regions

OpenShift Container Platform supports deploying a cluster to AWS GovCloud (US) regions. AWS GovCloud is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.

These regions do not have published Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) to select, so you must upload a custom AMI that belongs to that region.

The following AWS GovCloud partitions are supported:

us-gov-west-1

us-gov-east-1

The AWS GovCloud region and custom AMI must be manually configured in the install-config.yaml file since RHCOS AMIs are not provided by Red Hat for those regions.

1.6.3. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.

NOTE

Public zones are not supported in Route 53 in AWS GovCloud. Therefore, clusters must be private if they are deployed to an AWS government region.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.


To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.6.3.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster, and all cluster machines are placed in the private subnets that you specify.

1.6.3.1.1. Limitations

The ability to add public functionality to a private cluster is limited.

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
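For example, the tag might be applied with the AWS CLI as sketched below. The infrastructure ID and subnet ID are hypothetical placeholders, and the command is guarded so that it is skipped when the AWS CLI is not available.

```shell
# Hypothetical values; substitute your cluster's infrastructure ID and the
# public subnet ID for each availability zone.
INFRA_ID="mycluster-abc12"
SUBNET_ID="subnet-0123456789abcdef0"

if command -v aws >/dev/null 2>&1; then
  # Tag the subnet so that AWS can select it when creating public load balancers.
  aws ec2 create-tags \
    --resources "$SUBNET_ID" \
    --tags "Key=kubernetes.io/cluster/${INFRA_ID},Value=shared" \
    || echo "Tagging failed; check your AWS credentials and the subnet ID"
else
  echo "AWS CLI not found; apply the tag from the AWS console instead"
fi
```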

1.6.4. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.


1.6.4.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
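In CloudFormation terms (the templates this guide provides use CloudFormation), these are the EnableDnsSupport and EnableDnsHostnames properties of the VPC resource. A minimal sketch, with an illustrative logical name and CIDR block:

```yaml
# Sketch only: a VPC resource with the DNS attributes the cluster requires.
Resources:
  ClusterVpc:                    # illustrative logical name
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16     # must contain your machine CIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
```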

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
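As an illustration, the S3 endpoint can be declared as a gateway-type VPC endpoint in CloudFormation roughly as follows. The logical names are placeholders; note that the CloudFormation ServiceName property uses the com.amazonaws.<region>.<service> form even though the endpoint DNS names follow the pattern above, and that the EC2 and ELB endpoints are interface-type endpoints that additionally require subnet IDs.

```yaml
# Sketch only: a gateway VPC endpoint for S3 attached to a private route table.
Resources:
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
      VpcId: !Ref ClusterVpc             # placeholder reference
      RouteTableIds:
        - !Ref PrivateRouteTable         # placeholder reference
```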

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.


Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
  Port 80: Inbound HTTP traffic
  Port 443: Inbound HTTPS traffic
  Port 22: Inbound SSH traffic
  Ports 1024 - 65535: Inbound ephemeral traffic
  Ports 0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.6.4.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.6.4.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.6.4.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:


You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.6.5. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.6.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.6.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.6.8. Manually creating the installation configuration file

When installing OpenShift Container Platform on Amazon Web Services (AWS) into a region requiring a custom Red Hat Enterprise Linux CoreOS (RHCOS) AMI, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

IMPORTANT

$ tar xvf openshift-install-linux.tar.gz

$ mkdir <installation_directory>


IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.
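As an illustrative starting point only (not the full template), a minimal install-config.yaml for a GovCloud region might look like the following. Every value is a placeholder; the amiID field is shown on the assumption that you uploaded a custom RHCOS AMI to the region, and publish: Internal reflects that GovCloud clusters must be private.

```yaml
# Illustrative sketch; replace every value with your own.
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  aws:
    region: us-gov-west-1
    amiID: ami-0123456789abcdef0   # placeholder for your custom RHCOS AMI
publish: Internal                  # GovCloud clusters must be private
pullSecret: '{"auths": ...}'
```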

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.6.8.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.14. Required parameters

Parameter: apiVersion
Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
Values: String

Parameter: baseDomain
Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
Values: A fully-qualified domain or subdomain name, such as example.com.

Parameter: metadata
Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Values: Object

Parameter: metadata.name
Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

Parameter: platform
Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
Values: Object

Parameter: pullSecret
Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
Values:
{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


Table 1.15. Optional parameters

Parameter: additionalTrustBundle
Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
Values: String

Parameter: compute
Description: The configuration for the machines that comprise the compute nodes.
Values: Array of machine-pool objects. For details, see the following Machine-pool table.

Parameter: compute.architecture
Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

Parameter: compute.hyperthreading
Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

Parameter: compute.name
Description: Required if you use compute. The name of the machine pool.
Values: worker

Parameter: compute.platform
Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

Parameter: compute.replicas
Description: The number of compute machines, which are also known as worker machines, to provision.
Values: A positive integer greater than or equal to 2. The default value is 3.


Parameter: controlPlane
Description: The configuration for the machines that comprise the control plane.
Values: Array of MachinePool objects. For details, see the following Machine-pool table.

Parameter: controlPlane.architecture
Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

Parameter: controlPlane.hyperthreading
Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

Parameter: controlPlane.name
Description: Required if you use controlPlane. The name of the machine pool.
Values: master

Parameter: controlPlane.platform
Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

Parameter: controlPlane.replicas
Description: The number of control plane machines to provision.
Values: The only supported value is 3, which is the default value.


credentialsMode The Cloud Credential Operator (CCO)mode If no mode is specified theCCO dynamically tries to determinethe capabilities of the providedcredentials with a preference for mintmode on the platforms where multiplemodes are supported

NOTE

Not all CCO modesare supported for allcloud providers Formore information onCCO modes see theCloud CredentialOperator entry in theRed Hat Operatorsreference content

Mint Passthrough Manual or anempty string ()

fips Enable or disable FIPS mode Thedefault is false (disabled) If FIPSmode is enabled the Red HatEnterprise Linux CoreOS (RHCOS)machines that OpenShift ContainerPlatform runs on bypass the defaultKubernetes cryptography suite and usethe cryptography modules that areprovided with RHCOS instead

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


OpenShift Container Platform 4.6 Installing on AWS


networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines. Array of objects

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

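The clusterNetwork CIDR and hostPrefix parameters above determine cluster capacity. As a quick sketch of the arithmetic with Python's standard ipaddress module, using the documented defaults (a 10.128.0.0/14 pod network with a /23 host prefix, and a 10.0.0.0/16 machine network):

```python
import ipaddress

# Documented defaults: pod network 10.128.0.0/14, hostPrefix 23.
cluster_network = ipaddress.ip_network("10.128.0.0/14")
host_prefix = 23

# Each node receives one /23 subnet out of the /14 block.
subnets_per_cluster = 2 ** (host_prefix - cluster_network.prefixlen)  # nodes the pool can address
addresses_per_node = 2 ** (32 - host_prefix)                          # pod IPs per node

print(subnets_per_cluster, addresses_per_node)  # 512 512

# The default machineNetwork, a /16, spans this address range:
machine_network = ipaddress.ip_network("10.0.0.0/16")
print(machine_network[0], machine_network[-1])  # 10.0.0.0 10.0.255.255
```

This is only an illustration of the CIDR notation described in the table; the installation program performs its own validation of these fields.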

Table 1.16. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The type of the root volume.

Valid AWS EBS volume type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.6.8.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-gov-west-1a
      - us-gov-west-1b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-gov-west-1c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-gov-west-1
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 11
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
publish: Internal 17

1 10 14 Required.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

11 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

12 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

17 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.

1.6.8.3. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions



without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then, the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

1.6.8.4. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account.

You created an Amazon S3 bucket with the required IAM service role.

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
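The version rule in the prerequisite above (the RHCOS VMDK must be the highest version that is less than or equal to the OpenShift Container Platform version) can be sketched in Python; the helper name and version list here are illustrative, not part of any official tooling:

```python
# Illustrative sketch only: pick the RHCOS VMDK version that is the highest
# version less than or equal to the OpenShift Container Platform version
# being installed. The helper name and sample versions are hypothetical.

def pick_rhcos_version(available, target):
    """Return the highest available version that is <= target, or None."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    candidates = [v for v in available if parse(v) <= parse(target)]
    return max(candidates, key=parse) if candidates else None

# Hypothetical example: installing 4.6.1 with these VMDK builds on hand.
print(pick_rhcos_version(["4.5.6", "4.6.0", "4.6.1", "4.7.0"], "4.6.1"))  # 4.6.1
```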

Procedure

1. Export your AWS profile as an environment variable:

$ export AWS_PROFILE=<aws_profile> 1

1 The AWS profile name that holds your AWS credentials, like govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

$ export AWS_DEFAULT_REGION=<aws_region> 1

1 The AWS region, like us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

$ export RHCOS_VERSION=<version> 1

1 The RHCOS VMDK version, like 4.6.0.

4. Export the Amazon S3 bucket name as an environment variable:

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

5. Create the containers.json file and define your RHCOS VMDK file:

$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF

6. Import the RHCOS disk as an Amazon EBS snapshot:

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2

1 The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.
2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

7. Check the status of the image import:

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

Example output

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 8190566400.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}



Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \ 1
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4

1 The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.
2 The Description from the imported snapshot.
3 The name of the RHCOS AMI.
4 The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.6.8.5. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.



NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
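The noProxy matching semantics described above (a leading dot matches subdomains only) can be modeled in a few lines of Python. This is an illustrative model of the documented rules, not the code the cluster itself runs:

```python
# Illustrative model of the documented noProxy semantics; not the actual
# proxy implementation. A leading "." matches subdomains only, a bare name
# matches that host exactly, and "*" bypasses the proxy for everything.

def bypasses_proxy(host, no_proxy_entries):
    for entry in no_proxy_entries:
        if entry == "*":
            return True
        if entry.startswith("."):
            # .y.com matches x.y.com, but not y.com itself.
            if host.endswith(entry):
                return True
        elif host == entry:
            return True
    return False

print(bypasses_proxy("x.y.com", [".y.com"]))  # True
print(bypasses_proxy("y.com", [".y.com"]))    # False
```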



NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.6.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.


Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.6.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.6.10.1. Installing the OpenShift CLI on Linux


You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.6.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.6.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.


Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.6.11. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.6.12. Logging in to the cluster by using the web console


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep console-openshift

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.6.13. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting


1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.7.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall, you configured it to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.7.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The


Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.7.3. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node


CHAPTER 1 INSTALLING ON AWS

147

Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.7.3.1. Cluster machines

You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormationtemplates

IMPORTANT

If m4 instance types are not available in your region such as with eu-west-3 use m5types instead

Table 117 Instance types for machines

Instance type   Bootstrap   Control plane   Compute
i3.large        x
m4.large                                    x
m4.xlarge                   x               x
m4.2xlarge                  x               x
m4.4xlarge                  x               x
m4.8xlarge                  x               x
m4.10xlarge                 x               x
m4.16xlarge                 x               x
m5.large                                    x
m5.xlarge                   x               x
m5.2xlarge                  x               x
m5.4xlarge                  x               x
m5.8xlarge                  x               x
m5.10xlarge                 x               x
m5.16xlarge                 x               x
c4.large                                    x
c4.xlarge                                   x
c4.2xlarge                  x               x
c4.4xlarge                  x               x
c4.8xlarge                  x               x
r4.large                                    x
r4.xlarge                   x               x
r4.2xlarge                  x               x
r4.4xlarge                  x               x
r4.8xlarge                  x               x
r4.16xlarge                 x               x

You might be able to use other instance types that meet the specifications of these instance types.

1.7.3.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
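Pending CSRs can be listed with oc get csr and approved with oc adm certificate approve <csr_name>. As a rough sketch of how the pending requests can be filtered before approval, the fragment below works against a saved copy of the oc get csr output; the CSR names and requestors are hypothetical:

```shell
# Illustrative sample of `oc get csr` output; names and requestors are hypothetical.
cat > csr_list.txt <<'EOF'
NAME        AGE   REQUESTOR                                                                   CONDITION
csr-abc12   10m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-def34   9m    system:node:ip-10-0-0-10.ec2.internal                                       Approved,Issued
EOF

# Print the names of CSRs still in the Pending state; on a live cluster, each of
# these would be approved with: oc adm certificate approve <csr_name>
PENDING=$(awk 'NR > 1 && $NF == "Pending" {print $1}' csr_list.txt)
echo "$PENDING"
```

Note that automatically approving every serving CSR this way would defeat the verification requirement described above; the filter only identifies candidates for review.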

1.7.3.3. Other infrastructure components

A VPC

DNS entries

Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups

IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
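For illustration only, a gateway endpoint for S3 might be declared in a CloudFormation template like the following sketch; the VpcId and PrivateRouteTable references are assumptions about your template, not values from this document:

```yaml
# Sketch: an S3 gateway endpoint so that nodes in private subnets can reach the
# registry storage without public Internet access. Logical IDs are placeholders.
ClusterS3Endpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
    VpcId: !Ref VpcId
    RouteTableIds:
      - !Ref PrivateRouteTable
```

EC2 and Elastic Load Balancing are reached through interface endpoints instead of a gateway endpoint, using the corresponding com.amazonaws.<region>.ec2 and com.amazonaws.<region>.elasticloadbalancing service names.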

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

  Port          Reason
  80            Inbound HTTP traffic
  443           Inbound HTTPS traffic
  22            Inbound SSH traffic
  1024 - 65535  Inbound ephemeral traffic
  0 - 65535     Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
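As a sketch, the external api record could be expressed as a Route 53 change batch like the following; the cluster domain, load balancer DNS name, and hosted zone ID are placeholder values, not values from this document:

```json
{
  "Comment": "Alias api.<cluster_name>.<domain> to the external load balancer (placeholder values)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "api.mycluster.example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb_hosted_zone_id>",
          "DNSName": "<external_lb_dns_name>",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

The api-int record is the same shape, created in the private hosted zone and aliased to the internal load balancer.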

Component: DNS
AWS type: AWS::Route53::HostedZone
Description: The hosted zone for your internal DNS.

Component: etcd record sets
AWS type: AWS::Route53::RecordSet
Description: The registration records for etcd for your control plane machines.

Component: Public load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your public subnets.

Component: External API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the external API server.

Component: External listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the external load balancer.

Component: External target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the external load balancer.

Component: Private load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your private subnets.

Component: Internal API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the internal API server.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 22623 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Security groups

The control plane and worker machines require access to the following ports:

Group                    Type                      IP protocol   Port range
MasterSecurityGroup      AWS::EC2::SecurityGroup   icmp          0
                                                   tcp           22
                                                   tcp           6443
                                                   tcp           22623
WorkerSecurityGroup      AWS::EC2::SecurityGroup   icmp          0
                                                   tcp           22
BootstrapSecurityGroup   AWS::EC2::SecurityGroup   tcp           22
                                                   tcp           19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                       Description                                                    IP protocol   Port range
MasterIngressEtcd                   etcd                                                           tcp           2379 - 2380
MasterIngressVxlan                  Vxlan packets                                                  udp           4789
MasterIngressWorkerVxlan            Vxlan packets                                                  udp           4789
MasterIngressInternal               Internal cluster communication and Kubernetes proxy metrics    tcp           9000 - 9999
MasterIngressWorkerInternal         Internal cluster communication                                 tcp           9000 - 9999
MasterIngressKube                   Kubernetes kubelet, scheduler, and controller manager          tcp           10250 - 10259
MasterIngressWorkerKube             Kubernetes kubelet, scheduler, and controller manager          tcp           10250 - 10259
MasterIngressIngressServices        Kubernetes Ingress services                                    tcp           30000 - 32767
MasterIngressWorkerIngressServices  Kubernetes Ingress services                                    tcp           30000 - 32767
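For example, the MasterIngressEtcd rule corresponds to a CloudFormation resource roughly like the following sketch; the MasterSecurityGroup logical ID is an assumption about your template:

```yaml
# Sketch: etcd traffic between control plane machines, expressed as an ingress
# rule on the master security group. Logical IDs are placeholders.
MasterIngressEtcd:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !GetAtt MasterSecurityGroup.GroupId
    SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
    Description: etcd
    IpProtocol: tcp
    FromPort: 2379
    ToPort: 2380
```

The remaining rows of the table follow the same pattern with the listed protocols and port ranges.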

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                       Description                                              IP protocol   Port range
WorkerIngressVxlan                  Vxlan packets                                            udp           4789
WorkerIngressWorkerVxlan            Vxlan packets                                            udp           4789
WorkerIngressInternal               Internal cluster communication                           tcp           9000 - 9999
WorkerIngressWorkerInternal         Internal cluster communication                           tcp           9000 - 9999
WorkerIngressKube                   Kubernetes kubelet, scheduler, and controller manager    tcp           10250
WorkerIngressWorkerKube             Kubernetes kubelet, scheduler, and controller manager    tcp           10250
WorkerIngressIngressServices        Kubernetes Ingress services                              tcp           30000 - 32767
WorkerIngressWorkerIngressServices  Kubernetes Ingress services                              tcp           30000 - 32767

Roles and instance profiles

You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role        Effect   Action                   Resource
Master      Allow    ec2:*                    *
            Allow    elasticloadbalancing:*   *
            Allow    iam:PassRole             *
            Allow    s3:GetObject             *
Worker      Allow    ec2:Describe*            *
Bootstrap   Allow    ec2:Describe*            *
            Allow    ec2:AttachVolume         *
            Allow    ec2:DetachVolume         *
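Expressed as an IAM policy document, the broad control plane (master) permissions in the table might look like the following sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ec2:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "iam:PassRole", "Resource": "*" },
    { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "*" }
  ]
}
```

The role is then made usable by the instances through an AWS::IAM::InstanceProfile that references it.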

1.7.3.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.

Example 1.12. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.13. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.14. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener


Example 1.15. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.16. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.17. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.18. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.19. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.20. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.21. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.22. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas


1.7.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.7.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


$ tar xvf openshift-install-linux.tar.gz


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.7.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.7.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage

$ mkdir $HOME/clusterconfig

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key ...
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml


device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory.

1 The storage device name of the disk that you want to partition.

2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign


1.7.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

$ openshift-install create install-config --dir=<installation_directory> 1


NOTE

The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.

1.7.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).


Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----


NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.7.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

Example output

For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ openshift-install create manifests --dir=<installation_directory> 1

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

OpenShift Container Platform 46 Installing on AWS

170

1 2

1

Because you create and manage the worker machines yourself you do not need to initializethese machines

4 Check that the mastersSchedulable parameter in the ltinstallation_directorygtmanifestscluster-scheduler-02-configyml Kubernetes manifestfile is set to false This setting prevents pods from being scheduled on the control planemachines

a Open the ltinstallation_directorygtmanifestscluster-scheduler-02-configyml file

b Locate the mastersSchedulable parameter and ensure that it is set to false

c Save and exit the file

5 Optional If you do not want the Ingress Operator to create DNS records on your behalf removethe privateZone and publicZone sections from the ltinstallation_directorygtmanifestscluster-dns-02-configyml DNS configuration file

Remove this section completely

If you do so you must add ingress DNS records manually in a later step

6 To create the Ignition configuration files run the following command from the directory thatcontains the installation program

For ltinstallation_directorygt specify the same installation directory

The following files are generated in the directory

auth kubeadmin-password kubeconfig bootstrapign masterign metadatajson workerign

$ rm -f ltinstallation_directorygtopenshift99_openshift-cluster-api_worker-machineset-yaml

apiVersion configopenshiftiov1kind DNSmetadata creationTimestamp null name clusterspec baseDomain exampleopenshiftcom privateZone 1 id mycluster-100419-private-zone publicZone 2 id exampleopenshiftcomstatus

$ openshift-install create ignition-configs --dir=ltinstallation_directorygt 1
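The generated .ign files are plain Ignition JSON. As a quick local sanity check, a short Python sketch (using a hypothetical, minimal stand-in for bootstrap.ign; the real files carry far more content) can confirm that a file parses and report its spec version:

```python
import json

# Minimal stand-in for a generated Ignition file; real .ign files are much larger.
sample_ign = '{"ignition": {"version": "3.1.0"}}'

config = json.loads(sample_ign)
# OpenShift Container Platform 4.6 emits Ignition spec 3.1.0 configs.
print(config["ignition"]["version"])  # 3.1.0
```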


1.7.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites

You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

You generated the Ignition config files for your cluster.

You installed the jq package.

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

The output of this command is your cluster name and a random string.

1.7.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.


NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

$ jq -r .infraID <installation_directory>/metadata.json

openshift-vw9j6
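The same lookup can be mirrored without jq; this minimal Python sketch parses a hypothetical metadata.json body and reads the infraID key:

```python
import json

# Stand-in for <installation_directory>/metadata.json (values are illustrative).
metadata_json = '{"clusterName": "openshift", "infraID": "openshift-vw9j6"}'

# Equivalent of: jq -r .infraID <installation_directory>/metadata.json
infra_id = json.loads(metadata_json)["infraID"]
print(infra_id)  # openshift-vw9j6
```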


You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

The CIDR block for the VPC. Specify a CIDR block in the format x.x.x.x/16-24.

The number of availability zones to deploy the VPC in. Specify an integer between 1 and 3.

The size of each subnet in each availability zone. Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
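The SubnetBits value counts host bits per subnet, so the resulting prefix length is 32 minus the parameter. The arithmetic behind the 5-to-13 range can be sketched as:

```python
# SubnetBits is the number of host bits per subnet; prefix length = 32 - SubnetBits.
for subnet_bits in (5, 12, 13):
    prefix_length = 32 - subnet_bits
    addresses = 2 ** subnet_bits
    print(f"SubnetBits={subnet_bits} -> /{prefix_length} ({addresses} addresses)")
# SubnetBits=5 -> /27 (32 addresses)
# SubnetBits=12 -> /20 (4096 addresses)
# SubnetBits=13 -> /19 (8192 addresses)
```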

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line.

[
  {
    "ParameterKey": "VpcCidr",
    "ParameterValue": "10.0.0.0/16"
  },
  {
    "ParameterKey": "AvailabilityZoneCount",
    "ParameterValue": "1"
  },
  {
    "ParameterKey": "SubnetBits",
    "ParameterValue": "12"
  }
]
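If you prefer generating the parameter file to editing it by hand, a small Python sketch can emit equivalent JSON (the vpc-parameters.json file name and the values here are placeholders, not names the installer requires):

```python
import json

# Placeholder values; substitute your own CIDR, AZ count, and subnet bits.
parameters = [
    {"ParameterKey": "VpcCidr", "ParameterValue": "10.0.0.0/16"},
    {"ParameterKey": "AvailabilityZoneCount", "ParameterValue": "1"},
    {"ParameterKey": "SubnetBits", "ParameterValue": "12"},
]

# Write the list in the shape that `aws cloudformation create-stack --parameters` accepts.
with open("vpc-parameters.json", "w") as f:
    json.dump(parameters, f, indent=2)
```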

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json


<name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

VpcId

The ID of your VPC.

PublicSubnetIds

The IDs of the new public subnets.

PrivateSubnetIds

The IDs of the new private subnets.
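To pull those values out of the describe-stacks response programmatically rather than reading them by eye, a sketch along these lines works (the response body below is a trimmed, hypothetical example of the CLI's JSON output):

```python
import json

# Trimmed, hypothetical `aws cloudformation describe-stacks` response.
response = json.loads("""
{"Stacks": [{"StackStatus": "CREATE_COMPLETE",
             "Outputs": [
               {"OutputKey": "VpcId", "OutputValue": "vpc-0123456789abcdef0"},
               {"OutputKey": "PublicSubnetIds", "OutputValue": "subnet-aaaa"},
               {"OutputKey": "PrivateSubnetIds", "OutputValue": "subnet-bbbb"}]}]}
""")

stack = response["Stacks"][0]
assert stack["StackStatus"] == "CREATE_COMPLETE"

# Map OutputKey -> OutputValue for reuse in later parameter files.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["VpcId"])  # vpc-0123456789abcdef0
```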

1.7.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.23. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3


    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC


      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"


  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId: !GetAtt
      - EIP
      - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2


    Properties:
      AllocationId: !GetAtt
      - EIP2
      - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId: !GetAtt
      - EIP3
      - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc


  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]]


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
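Note that Route 53 returns the Id field with a /hostedzone/ prefix, while the CloudFormation parameter expects the bare zone ID; stripping the prefix is a one-liner, shown here on the example value:

```python
# Route 53 returns IDs such as "/hostedzone/Z21IXYZABCZ2A4"; the CloudFormation
# HostedZoneId parameter expects only the trailing zone ID.
zone_id_field = "/hostedzone/Z21IXYZABCZ2A4"
hosted_zone_id = zone_id_field.rsplit("/", 1)[-1]
print(hosted_zone_id)  # Z21IXYZABCZ2A4
```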

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain>

mycluster.example.com. False 100
HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10


A short, representative cluster name to use for host names, etc. Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The Route 53 public zone ID to register the targets with. Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

The Route 53 zone to register the targets with. Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

The public subnets that you created for your VPC.

[
  {
    "ParameterKey": "ClusterName",
    "ParameterValue": "mycluster"
  },
  {
    "ParameterKey": "InfrastructureName",
    "ParameterValue": "mycluster-<random_string>"
  },
  {
    "ParameterKey": "HostedZoneId",
    "ParameterValue": "<random_string>"
  },
  {
    "ParameterKey": "HostedZoneName",
    "ParameterValue": "example.com"
  },
  {
    "ParameterKey": "PublicSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "PrivateSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "VpcId",
    "ParameterValue": "vpc-<random_string>"
  }
]


Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

The private subnets that you created for your VPC. Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster. Specify the VpcId value from the output of the CloudFormation template for the VPC.

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183


After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

PrivateHostedZoneId

Hosted zone ID for the private DNS

ExternalApiLoadBalancerName

Full name of the external API load balancer

InternalApiLoadBalancerName

Full name of the internal API load balancer

ApiServerDnsName

Full host name of the API server

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn

ARN of external API target group

InternalApiTargetGroupArn

ARN of internal API target group

InternalServiceTargetGroupArn

ARN of internal service target group

1.7.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.24. CloudFormation template for the network and load balancers

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:


    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:


      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [


            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref ExternalApiTargetGroup
      LoadBalancerArn: !Ref ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP


      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalServiceTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS


      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"


      Role: !GetAtt
      - RegisterTargetLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags",
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
              ]
            Resource: "*"


  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - RegisterSubnetTagsLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

You can view details about your hosted zones by navigating to the AWS Route 53 console.

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.7.10. Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.


Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The CIDR block for the VPC. Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.

The private subnets that you created for your VPC. Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster. Specify the VpcId value from the output of the CloudFormation template for the VPC.


[
  {
    "ParameterKey": "InfrastructureName",
    "ParameterValue": "mycluster-<random_string>"
  },
  {
    "ParameterKey": "VpcCidr",
    "ParameterValue": "10.0.0.0/16"
  },
  {
    "ParameterKey": "PrivateSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "VpcId",
    "ParameterValue": "vpc-<random_string>"
  }
]


2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>


MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile
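Because other templates consume these outputs, it can help to pull them out of the describe-stacks response programmatically. The following is a minimal sketch, assuming only the documented output shape of aws cloudformation describe-stacks; the stack name and the sample values here are illustrative, not real IDs:

```python
import json

# Sample payload in the shape returned by `aws cloudformation describe-stacks`;
# the values are placeholders for illustration only.
payload = json.loads("""
{
  "Stacks": [
    {
      "StackName": "cluster-sec",
      "StackStatus": "CREATE_COMPLETE",
      "Outputs": [
        {"OutputKey": "MasterSecurityGroupId", "OutputValue": "sg-0abc"},
        {"OutputKey": "WorkerSecurityGroupId", "OutputValue": "sg-0def"},
        {"OutputKey": "MasterInstanceProfile", "OutputValue": "cluster-sec-MasterInstanceProfile"},
        {"OutputKey": "WorkerInstanceProfile", "OutputValue": "cluster-sec-WorkerInstanceProfile"}
      ]
    }
  ]
}
""")

def stack_outputs(payload):
    """Map OutputKey -> OutputValue for the first stack in the response."""
    stack = payload["Stacks"][0]
    if stack["StackStatus"] != "CREATE_COMPLETE":
        raise RuntimeError("stack is not ready: " + stack["StackStatus"])
    return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

outputs = stack_outputs(payload)
print(outputs["MasterSecurityGroupId"])
```

You can feed the resulting values directly into the parameters JSON files for the later bootstrap and control plane stacks.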

1.7.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.25. CloudFormation template for security objects

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.18. RHCOS AMIs

AWS zone AWS AMI

af-south-1 ami-09921c9c1c36e695c

ap-east-1 ami-01ee8446e9af6b197

ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa


eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1


1.7.11.1. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.
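As a sketch of what that configuration looks like, the custom AMI is set under the installer's platform.aws section; the region and AMI ID below are placeholder values, not real identifiers:

```yaml
# Fragment of install-config.yaml (illustrative values only)
platform:
  aws:
    region: us-gov-east-1          # a region without a published RHCOS AMI
    amiID: ami-0123456789abcdef0   # the custom RHCOS AMI that you uploaded
```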

1.7.11.2. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account

You created an Amazon S3 bucket with the required IAM service role

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.


You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

Procedure

1. Export your AWS profile as an environment variable:

The AWS profile name that holds your AWS credentials, such as govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

The AWS region, such as us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

The RHCOS VMDK version, such as 4.6.0.

4. Export the Amazon S3 bucket name as an environment variable:

5. Create the containers.json file and define your RHCOS VMDK file:

6. Import the RHCOS disk as an Amazon EBS snapshot:

The description of your RHCOS disk being imported, such as rhcos-$RHCOS_VERSION-x86_64-aws.x86_64.

The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

$ export AWS_PROFILE=<aws_profile> 1

$ export AWS_DEFAULT_REGION=<aws_region> 1

$ export RHCOS_VERSION=<version> 1

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

$ cat <<EOF > containers.json
{
   "Description": "rhcos-$RHCOS_VERSION-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "$VMIMPORT_BUCKET_NAME",
      "S3Key": "rhcos-$RHCOS_VERSION-x86_64-aws.x86_64.vmdk"
   }
}
EOF

$ aws ec2 import-snapshot --region $AWS_DEFAULT_REGION
     --description <description> 1
     --disk-container file://<file_path>/containers.json 2


7. Check the status of the image import:

Example output

Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

The RHCOS VMDK architecture type, such as x86_64, s390x, or ppc64le.

The Description from the imported snapshot.

The name of the RHCOS AMI.

The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.7.12. Creating the bootstrap node in AWS

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region $AWS_DEFAULT_REGION

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 819056640.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
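Because the import task runs asynchronously, the SnapshotId is only usable once the task reports completed. A minimal sketch of pulling it out of the describe-import-snapshot-tasks response, using the same illustrative values as the example output above:

```python
import json

# Shape of `aws ec2 describe-import-snapshot-tasks` output; the IDs here
# mirror the example output above and are illustrative.
response = json.loads("""
{
  "ImportSnapshotTasks": [
    {
      "ImportTaskId": "import-snap-fh6i8uil",
      "SnapshotTaskDetail": {
        "SnapshotId": "snap-06331325870076318",
        "Status": "completed"
      }
    }
  ]
}
""")

def completed_snapshot_id(response):
    """Return the SnapshotId of the first completed import task, or None."""
    for task in response["ImportSnapshotTasks"]:
        detail = task["SnapshotTaskDetail"]
        if detail.get("Status") == "completed":
            return detail["SnapshotId"]
    return None

print(completed_snapshot_id(response))
```

The returned snapshot ID is what you pass to the register-image step.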

$ aws ec2 register-image
   --region $AWS_DEFAULT_REGION
   --architecture x86_64 1
   --description "rhcos-$RHCOS_VERSION-x86_64-aws.x86_64" 2
   --ena-support
   --name "rhcos-$RHCOS_VERSION-x86_64-aws.x86_64" 3
   --virtualization-type hvm
   --root-device-name '/dev/xvda'
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4


You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
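As a hypothetical sketch of such a bucket policy, the following grants s3:GetObject on the Ignition file to a single IAM user; the account ID, user name, and bucket name are placeholders, not values from your installation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInstallerIgnitionRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/openshift-installer" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mycluster-infra/bootstrap.ign"
    }
  ]
}
```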


a. Create the bucket:

<cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

b. Upload the bootstrap.ign Ignition config file to the bucket:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

Example output

2. Create a JSON file that contains the parameter values that the template requires:

$ aws s3 mb s3://<cluster-name>-infra 1

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

$ aws s3 ls s3://<cluster-name>-infra/

2019-04-03 16:15:16 314878 bootstrap.ign

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4 Specify a valid AWS::EC2::Image::Id value.

5 CIDR block to allow SSH access to the bootstrap node.

6 Specify a CIDR block in the format x.x.x.x/16-24.

7 The public subnet that is associated with your VPC to launch the bootstrap node into.

8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9 The master security group ID (for registering temporary rules).

10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11 The VPC that the created resources will belong to.

  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]


12 Specify the VpcId value from the output of the CloudFormation template for the VPC.

13 Location to fetch bootstrap Ignition config file from.

14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15 Whether or not to register a network load balancer (NLB).

16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17 The ARN for NLB IP target registration lambda group.

18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19 The ARN for external API load balancer target group.

20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21 The ARN for internal API load balancer target group.

22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23 The ARN for internal service load balancer target group.

24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
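The partition rule in the callouts above (arn:aws-us-gov for GovCloud) can be sketched as a tiny helper. This is a simplified illustration, not installer code; the helper names and sample values are hypothetical, and real ARN partition logic also covers other partitions such as China regions:

```python
# Simplified partition rule: GovCloud regions use the "aws-us-gov" ARN
# partition; everything else here falls back to the standard "aws" partition.
def arn_partition(region):
    return "aws-us-gov" if region.startswith("us-gov-") else "aws"

def target_group_arn(region, account, name):
    """Build an Elastic Load Balancing target group ARN for a region."""
    return "arn:%s:elasticloadbalancing:%s:%s:targetgroup/%s" % (
        arn_partition(region), region, account, name)

print(target_group_arn("us-gov-east-1", "123456789012", "mydns-Exter-abc123"))
```

Applying the same substitution to the Lambda ARN gives the arn:aws-us-gov:lambda:... form the callouts describe.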

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line

1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4


2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

5 Confirm that the template components exist

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

BootstrapInstanceId

The bootstrap Instance ID

BootstrapPublicIp

The bootstrap node public IP address

BootstrapPrivateIp

The bootstrap node private IP address

1.7.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.26. CloudFormation template for the bootstrap machine

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

$ aws cloudformation describe-stacks --stack-name ltnamegt

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.7.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

CHAPTER 1 INSTALLING ON AWS

215

NOTE

If you do not use the provided CloudFormation template to create your control planenodes you must review the provided information and manually create the infrastructureIf your cluster does not initialize correctly you might have to contact Red Hat supportwith your installation logs

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]
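Because the parameter file is plain JSON, it can also be generated from a script instead of edited by hand. A minimal sketch (illustrative only: the ParameterKey/ParameterValue shape matches the file above, but the build_parameters helper and the sample values are hypothetical):

```python
import json

def build_parameters(values):
    # Convert a {key: value} mapping into the CloudFormation
    # parameters-file format shown above.
    return [
        {"ParameterKey": k, "ParameterValue": v}
        for k, v in values.items()
    ]

params = build_parameters({
    "InfrastructureName": "mycluster-a1b2c",      # example value
    "RhcosAmi": "ami-0123456789abcdef0",          # example value
    "MasterInstanceType": "m4.xlarge",
})

print(json.dumps(params, indent=2))
```

The resulting list can be written to a file and passed to aws cloudformation create-stack with --parameters file://<parameters>.json.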

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.

4 Specify an AWS::EC2::Image::Id value.

5 Whether or not to perform DNS etcd registration.

6 Specify yes or no. If you specify yes, you must provide hosted zone information.

7 The Route 53 private zone ID to register the etcd targets with.

8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.

9 The Route 53 zone to register the targets with.

10 Specify <cluster_name>.<domain_name>, where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

11 13 15 A subnet, preferably private, to launch the control plane machines on.

12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

17 The master security group ID to associate with master nodes.

18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

19 The location to fetch the control plane Ignition config file from.

20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.

21 The base64 encoded certificate authority string to use.

22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

23 The IAM profile to associate with master nodes.

24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

25 The type of AWS instance to use for the control plane machines.

26 Allowed values:

m4.xlarge, m4.2xlarge, m4.4xlarge, m4.8xlarge, m4.10xlarge, m4.16xlarge, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m5.10xlarge, m5.16xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge, r4.xlarge, r4.2xlarge, r4.4xlarge, r4.8xlarge, r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

27 Whether or not to register a network load balancer (NLB).

28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

29 The ARN for the NLB IP target registration lambda group.

30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

31 The ARN for the external API load balancer target group.

32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

33 The ARN for the internal API load balancer target group.

34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

35 The ARN for the internal service load balancer target group.

36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>
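To script this confirmation step, you can check the StackStatus field in the describe-stacks output rather than reading it by eye. A sketch of that check (the abbreviated response below is a hypothetical example of the JSON that aws cloudformation describe-stacks returns):

```python
import json

# Abbreviated, hypothetical response from:
#   aws cloudformation describe-stacks --stack-name <name>
response = json.loads("""
{
  "Stacks": [
    {
      "StackName": "cluster-control-plane",
      "StackStatus": "CREATE_COMPLETE"
    }
  ]
}
""")

def stack_ready(resp):
    # CREATE_COMPLETE means every resource in the stack was created.
    return all(s["StackStatus"] == "CREATE_COMPLETE" for s in resp["Stacks"])

print(stack_ready(response))
```

Any other terminal status, such as ROLLBACK_COMPLETE, indicates that stack creation failed and the events for the stack should be inspected in the CloudFormation console.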


1.7.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.27. CloudFormation template for control plane machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [
        " ",
        ["0", "10", "2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0", "10", "2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0", "10", "2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
      ]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:
      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.


You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.


3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.

4 Specify an AWS::EC2::Image::Id value.

5 A subnet, preferably private, to launch the worker nodes on.

6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

7 The worker security group ID to associate with worker nodes.

8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

9 The location to fetch the worker Ignition config file from.

10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.

11 Base64 encoded certificate authority string to use.

12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

13 The IAM profile to associate with worker nodes.

14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

15 The type of AWS instance to use for the worker nodes.

16 Allowed values:

m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.8xlarge, m4.10xlarge, m4.16xlarge, m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m5.10xlarge, m5.16xlarge, c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge, r4.large, r4.xlarge, r4.2xlarge, r4.4xlarge, r4.8xlarge, r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

NOTE

The CloudFormation template creates a stack that represents one worker node.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.

1.7.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.28. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp
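The UserData property in the template above wraps a small Ignition stub: a version 3.1.0 config that merges the real worker config from IgnitionLocation and trusts the cluster CA. A sketch of how that string is assembled (this mirrors the shape of the template's !Sub expression; the URL and CA values below are placeholders):

```python
import base64
import json

def ignition_user_data(source, ca_bundle):
    # Same shape as the template's !Sub string: a version 3.1.0 Ignition
    # config that merges the real worker config and trusts the cluster CA.
    stub = {
        "ignition": {
            "config": {"merge": [{"source": source}]},
            "security": {
                "tls": {"certificateAuthorities": [{"source": ca_bundle}]}
            },
            "version": "3.1.0",
        }
    }
    # EC2 user data is delivered base64 encoded.
    return base64.b64encode(json.dumps(stub).encode()).decode()

encoded = ignition_user_data(
    "https://api-int.mycluster.example.com:22623/config/worker",  # placeholder
    "data:text/plain;charset=utf-8;base64,ABC...xYz==",           # placeholder
)

# Round-trip to show the node sees the original stub after decoding.
decoded = json.loads(base64.b64decode(encoded))
print(decoded["ignition"]["version"])
```

At boot, RHCOS decodes this stub and follows the merge source to fetch the full worker Ignition config from the machine config server on port 22623.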


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

You created the worker nodes.

Procedure

1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

$ openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2 To view different installation details, specify warn, debug, or error instead of info.

Example output

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s
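If you capture the command's log output, the line about removing the bootstrap resources is the signal that bootstrapping finished. A small illustrative scan of the example output above (the bootstrap_complete helper is hypothetical, not part of openshift-install):

```python
# Log lines taken from the example output above.
LOG_LINES = [
    "INFO Waiting up to 30m0s for bootstrapping to complete...",
    "INFO It is now safe to remove the bootstrap resources",
    "INFO Time elapsed: 1s",
]

def bootstrap_complete(lines):
    # The installer prints this marker once the control plane no longer
    # needs the bootstrap machine.
    return any("safe to remove the bootstrap resources" in line for line in lines)

print(bootstrap_complete(LOG_LINES))
```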


If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

You can view details about the running instances that are created by using the AWS EC2 console.

1716 Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from acommand-line interface You can install oc on Linux Windows or macOS

IMPORTANT

If you installed an earlier version of oc you cannot use it to complete all of the commandsin OpenShift Container Platform 46 Download and install the new version of oc

17161 Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type

3 In the Command-line interface section select Linux from the drop-down menu and clickDownload command-line tools

4 Unpack the archive

5 Place the oc binary in a directory that is on your PATHTo check your PATH execute the following command

After you install the CLI it is available using the oc command

$ tar xvzf ltfilegt

$ echo $PATH

OpenShift Container Platform 46 Installing on AWS

236

17162 Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type

3 In the Command-line interface section select Windows from the drop-down menu and clickDownload command-line tools

4 Unzip the archive with a ZIP program

5 Move the oc binary to a directory that is on your PATHTo check your PATH open the command prompt and execute the following command

After you install the CLI it is available using the oc command

17163 Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2 Select your infrastructure provider, and, if applicable, your installation type.

3 In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4 Unpack and unzip the archive.

5 Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.7.17. Logging in to the cluster by using the CLI

$ oc <command>

C:\> path

C:\> oc <command>

$ echo $PATH

$ oc <command>

CHAPTER 1 INSTALLING ON AWS

237


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1 Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2 Verify you can run oc commands successfully using the exported configuration:

Example output

1.7.18. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1 Confirm that the cluster recognizes the machines:

Example output

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

$ oc whoami

system:admin
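The export step can be guarded so that a wrong path fails loudly instead of silently leaving KUBECONFIG unset; a sketch with a hypothetical installation directory:

```shell
# Hypothetical installation directory; substitute your own path.
installation_directory=./mycluster

# Only export KUBECONFIG when the file actually exists.
if [ -f "${installation_directory}/auth/kubeconfig" ]; then
  export KUBECONFIG="${installation_directory}/auth/kubeconfig"
  echo "using ${KUBECONFIG}"
else
  echo "no kubeconfig under ${installation_directory}/auth" >&2
fi
```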

$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2 Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3 If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc get csr

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

$ oc adm certificate approve <csr_name>



<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

NOTE

Some Operators might not become available until some CSRs are approved.

4 Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

Example output

5 If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

6 After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

Example output

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

$ oc get csr

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

$ oc adm certificate approve <csr_name>

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

$ oc get nodes

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0


NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.7.19. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1 Watch the cluster components come online:

Example output

master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

$ watch -n5 oc get clusteroperators

NAME                            VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                  4.6.0     True        False         False      3h56m
cloud-credential                4.6.0     True        False         False      29h
cluster-autoscaler              4.6.0     True        False         False      29h
config-operator                 4.6.0     True        False         False      6h39m
console                         4.6.0     True        False         False      3h59m
csi-snapshot-controller         4.6.0     True        False         False      4h12m
dns                             4.6.0     True        False         False      4h15m
etcd                            4.6.0     True        False         False      29h
image-registry                  4.6.0     True        False         False      3h59m
ingress                         4.6.0     True        False         False      4h30m
insights                        4.6.0     True        False         False      29h
kube-apiserver                  4.6.0     True        False         False      29h
kube-controller-manager         4.6.0     True        False         False      29h
kube-scheduler                  4.6.0     True        False         False      29h
kube-storage-version-migrator   4.6.0     True        False         False      4h2m
machine-api                     4.6.0     True        False         False      29h
machine-approver                4.6.0     True        False         False      6h34m
machine-config                  4.6.0     True        False         False      3h56m
marketplace                     4.6.0     True        False         False      4h2m
monitoring                      4.6.0     True        False         False      6h31m
network                         4.6.0     True        False         False      29h
node-tuning                     4.6.0     True        False         False      4h30m


2 Configure the Operators that are not available

1.7.19.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.

1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure.

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1 Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.

2 Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m

$ oc edit configs.imageregistry.operator.openshift.io/cluster


Example configuration

WARNING

To secure your registry images in AWS, block public access to the S3 bucket.

1.7.19.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again

1.7.20. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster.

storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found



Procedure

1 Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

Delete the stack by using the AWS CLI:

<name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console

1.7.21. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc)

You installed the jq package

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1 Determine the routes to create.

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output


$ aws cloudformation delete-stack --stack-name <name>

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
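The record names in this output are derived mechanically from the cluster name and base domain; a sketch with hypothetical values:

```shell
# Hypothetical cluster name and Route 53 base domain.
cluster_name=mycluster
domain_name=example.com

# Wildcard record that covers every *.apps route at once.
echo "*.apps.${cluster_name}.${domain_name}"

# Specific records, one per route.
for route in oauth-openshift console-openshift-console downloads-openshift-console; do
  echo "${route}.apps.${cluster_name}.${domain_name}"
done
```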



2 Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

Example output

3 Locate the hosted zone ID for the load balancer:

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID

4 Obtain the public hosted zone ID for your cluster's domain:

For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

5 Add the alias records to your private zone:

$ oc -n openshift-ingress get service router-default

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                         PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab328.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' --output text

/hostedzone/Z3URY6TWQ91KVV
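The command returns the ID with a /hostedzone/ prefix; when a later command needs the bare zone ID, it can be stripped with shell parameter expansion (hypothetical value shown):

```shell
# Output of list-hosted-zones-by-name, including its /hostedzone/ prefix.
zone_ref="/hostedzone/Z3URY6TWQ91KVV"

# Keep only the text after the final slash.
zone_id="${zone_ref##*/}"
echo "$zone_id"
```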

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6 Add the records to your public zone:

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

For <public_hosted_zone_id>, specify the public hosted zone for your domain.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
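Because the change batch is plain JSON, it can be assembled and syntax-checked locally before the aws route53 call; a sketch with hypothetical values (note the \\052 octal escape that Route 53 uses for the * wildcard):

```shell
# Hypothetical change batch; every value here is a placeholder stand-in.
change_batch=$(cat <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "\\052.apps.mycluster.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3AADJGX6KTTL2",
        "DNSName": "ab328.us-east-2.elb.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF
)

# Validate the JSON before handing it to the AWS CLI.
echo "$change_batch" | python3 -m json.tool >/dev/null && echo "change batch is valid JSON"
```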

1.7.22. Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI.

Procedure

From the directory that contains the installation program, complete the cluster installation:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

1.7.23. Logging in to the cluster by using the web console


$ openshift-install --dir=<installation_directory> wait-for install-complete

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1 Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2 List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3 Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.7.24. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.7.25. Next steps

Validating an installation

Customize your cluster

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None
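The console route in this output follows a fixed naming pattern, so the web console URL can be predicted from the cluster name and base domain; a sketch with hypothetical values:

```shell
# Hypothetical cluster name and base domain.
cluster_name=mycluster
base_domain=example.com

echo "https://console-openshift-console.apps.${cluster_name}.${base_domain}"
```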


If necessary, you can opt out of remote health reporting.

1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content.

IMPORTANT

While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires Internet access to use the AWS APIs.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.8.1. Prerequisites

You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform.

IMPORTANT

Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.


You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.8.2. About installations in restricted networks

In OpenShift Container Platform 4.6, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' IAM service, require Internet access, so you might still require Internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions.

IMPORTANT

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

1.8.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

The ClusterVersion status includes an Unable to retrieve available updates error

By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

1.8.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.8.4. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node

Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.8.4.1. Cluster machines


You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.
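The m4-to-m5 fallback can be captured when choosing template parameters; a sketch, with eu-west-3 standing in for any region that lacks m4 types (the mapping here is a hypothetical illustration, not an exhaustive region list):

```shell
# Hypothetical region; substitute the region you are installing into.
region=eu-west-3

# Prefer the m4 family, switching to m5 where m4 is unavailable.
family=m4
case $region in
  eu-west-3) family=m5 ;;
esac
echo "${family}.xlarge"
```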

Table 1.19. Instance types for machines

Instance type   Bootstrap   Control plane   Compute
i3.large        x
m4.large                                    x
m4.xlarge                   x               x
m4.2xlarge                  x               x
m4.4xlarge                  x               x
m4.8xlarge                  x               x
m4.10xlarge                 x               x
m4.16xlarge                 x               x
m5.large                                    x
m5.xlarge                   x               x
m5.2xlarge                  x               x
m5.4xlarge                  x               x
m5.8xlarge                  x               x
m5.10xlarge                 x               x
m5.16xlarge                 x               x
c4.large                                    x
c4.xlarge                                   x
c4.2xlarge                  x               x
c4.4xlarge                  x               x
c4.8xlarge                  x               x
r4.large                                    x
r4.xlarge                   x               x
r4.2xlarge                  x               x
r4.4xlarge                  x               x
r4.8xlarge                  x               x
r4.16xlarge                 x               x

You might be able to use other instance types that meet the specifications of these instance types

1.8.4.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.

1.8.4.3. Other infrastructure components

A VPC

DNS entries

Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups


IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
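The three endpoint names follow one pattern per service; a sketch that expands them for a hypothetical region:

```shell
# Hypothetical region; substitute the region your subnets live in.
region=us-east-2

# One VPC endpoint name per service the disconnected cluster must reach.
for service in ec2 elasticloadbalancing s3; do
  echo "${service}.${region}.amazonaws.com"
done
```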

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS types: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS types: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS types: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS types: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

Port           Reason
80             Inbound HTTP traffic
443            Inbound HTTPS traffic
22             Inbound SSH traffic
1024 - 65535   Inbound ephemeral traffic
0 - 65535      Outbound ephemeral traffic

Component: Private subnets
AWS types: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443 which are required for theKubernetes API and its extensions and port 22623 which are required for the Ignition config files fornew machines The targets will be the master nodes Port 6443 must be accessible to both clientsexternal to the cluster and nodes within the cluster Port 22623 must be accessible to nodes within thecluster
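As a hedged sketch only (resource names are hypothetical, not the exact provided template), an internal API listener and target group pair for port 6443 might be declared as:

```yaml
  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref InternalApiLoadBalancer
      Port: 6443
      Protocol: TCP
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup
```

A second listener and target group on port 22623 on the internal load balancer follows the same shape.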

Component AWS type Description

DNS

AWS::Route53::HostedZone

The hosted zone for your internal DNS.

etcd record sets

AWS::Route53::RecordSet

The registration records for etcd for your control plane machines.


Public load balancer

AWS::ElasticLoadBalancingV2::LoadBalancer

The load balancer for your public subnets.

External API server record

AWS::Route53::RecordSetGroup

Alias records for the external API server.

External listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 6443 for the external load balancer.

External target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the external load balancer.

Private load balancer

AWS::ElasticLoadBalancingV2::LoadBalancer

The load balancer for your private subnets.

Internal API server record

AWS::Route53::RecordSetGroup

Alias records for the internal API server.

Internal listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 22623 for the internal load balancer.

Internal target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the internal load balancer.

Internal listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 6443 for the internal load balancer.

Internal target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the internal load balancer.


Security groups

The control plane and worker machines require access to the following ports

Group Type IP Protocol Port range

MasterSecurityGroup

AWS::EC2::SecurityGroup

icmp 0

tcp 22

tcp 6443

tcp 22623

WorkerSecurityGroup

AWS::EC2::SecurityGroup

icmp 0

tcp 22

BootstrapSecurityGroup

AWS::EC2::SecurityGroup

tcp 22

tcp 19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group Description IP protocol Port range

MasterIngressEtcd

etcd  tcp  2379 - 2380

MasterIngressVxlan

Vxlan packets udp 4789

MasterIngressWorkerVxlan

Vxlan packets udp 4789

MasterIngressInternal

Internal cluster communication and Kubernetes proxy metrics

tcp 9000 - 9999

MasterIngressWorkerInternal

Internal cluster communication tcp 9000 - 9999

MasterIngressKube

Kubernetes kubelet, scheduler and controller manager

tcp 10250 - 10259


MasterIngressWorkerKube

Kubernetes kubelet, scheduler and controller manager

tcp 10250 - 10259

MasterIngressIngressServices

Kubernetes Ingress services tcp 30000 - 32767

MasterIngressWorkerIngressServices

Kubernetes Ingress services tcp 30000 - 32767
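For illustration (a sketch that assumes a MasterSecurityGroup resource is defined elsewhere in the template), the MasterIngressEtcd row above corresponds to a CloudFormation resource like:

```yaml
  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      IpProtocol: tcp
      FromPort: 2379
      ToPort: 2380
```

Each row in these tables maps to one such resource, varying only the source group, protocol, and port range.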


Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group Description IP protocol Port range

WorkerIngressVxlan

Vxlan packets udp 4789

WorkerIngressWorkerVxlan

Vxlan packets udp 4789

WorkerIngressInternal

Internal cluster communication tcp 9000 - 9999

WorkerIngressWorkerInternal

Internal cluster communication tcp 9000 - 9999

WorkerIngressKube

Kubernetes kubelet, scheduler and controller manager

tcp 10250

WorkerIngressWorkerKube

Kubernetes kubelet, scheduler and controller manager

tcp 10250

WorkerIngressIngressServices

Kubernetes Ingress services tcp 30000 - 32767

WorkerIngressWorkerIngressServices

Kubernetes Ingress services tcp 30000 - 32767

Roles and instance profiles


You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role Effect Action Resource

Master  Allow  ec2:*

Allow  elasticloadbalancing:*

Allow  iam:PassRole

Allow  s3:GetObject

Worker  Allow  ec2:Describe*

Bootstrap  Allow  ec2:Describe*

Allow  ec2:AttachVolume

Allow  ec2:DetachVolume
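As a sketch of the narrowest of these grants, the worker role's inline policy in IAM JSON form would look like the following (the role and instance profile wiring is omitted):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}
```

The master and bootstrap policies follow the same shape with their respective action lists.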

1.8.4.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:

Example 1.29. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.30. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.31. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.32. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.33. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.34. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.35. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.36. Required permissions to delete base cluster resources


autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.37. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.


Example 1.38. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
iam:GetUserPolicy
iam:ListAccessKeys
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.39. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas

1.8.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.8.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, the Kubernetes manifests, and the Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.8.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and to reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

$ mkdir $HOME/clusterconfig

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key ...
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

1 The storage device name of the disk that you want to partition.

2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.


1.8.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

$ openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.


NOTE

The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.

a. Update the pullSecret value to contain the authentication information for your registry:

For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
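The <credentials> value is simply the base64 encoding of "<user>:<password>". As a quick sketch with made-up placeholder credentials (not values from this document):

```shell
# Encode "<user>:<password>" for the mirror registry pull secret.
# "myuser:mypass" is a made-up placeholder, not a real credential.
printf 'myuser:mypass' | base64
```

Use printf rather than echo so that no trailing newline is included in the encoded value.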

b. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.

c Add the image content resources

Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network.

pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

imageContentSources:
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: registry.svc.ci.openshift.org/ocp/release


d. Optional: Set the publishing strategy to Internal:

publish: Internal

By setting this option, you create an internal Ingress Controller and a private load balancer.

3. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.

1.8.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3


1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
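As a sketch of the result (field values are the placeholders from the example above, and your generated object may contain additional fields), the cluster Proxy object looks roughly like:

```yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: http://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
  trustedCA:
    name: user-ca-bundle
status: {}
```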

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.8.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----


IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

$ openshift-install create manifests --dir=<installation_directory> 1

1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

Example output

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

2. Remove the Kubernetes manifest files that define the control plane machines:

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.


b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

1 For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

auth/
  kubeadmin-password
  kubeconfig
bootstrap.ign
master.ign
metadata.json
worker.ign

1.8.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

$ openshift-install create ignition-configs --dir=<installation_directory> 1

You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

You generated the Ignition config files for your cluster

You installed the jq package

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

1 The output of this command is your cluster name and a random string.
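To see the shape of the extraction without a live cluster, you can run it against a sample file. The documented command uses jq; the sed line below is a jq-free sketch that assumes a single-line JSON file and a hypothetical infraID value:

```shell
# A sample metadata.json shaped like the installer's output (values hypothetical).
cat > /tmp/metadata.json <<'EOF'
{"clusterName":"openshift","infraID":"openshift-vw9j6"}
EOF

# Documented form: jq -r .infraID <installation_directory>/metadata.json
# jq-free sketch for a single-line file:
sed -n 's/.*"infraID":"\([^"]*\)".*/\1/p' /tmp/metadata.json
```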

1.8.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

$ jq -r .infraID <installation_directory>/metadata.json 1

openshift-vw9j6 1

[


1 The CIDR block for the VPC.

2 Specify a CIDR block in the format x.x.x.x/16-24.

3 The number of availability zones to deploy the VPC in.

4 Specify an integer between 1 and 3.

5 The size of each subnet in each availability zone.

6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
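The SubnetBits parameter counts the host bits left in each subnet, so the resulting prefix length is 32 minus SubnetBits; a quick sketch:

```shell
# prefix length = 32 - SubnetBits: 5 -> /27, 12 -> /20 (the default), 13 -> /19
for bits in 5 12 13; do
  echo "SubnetBits=$bits -> /$((32 - bits))"
done
```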

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line

ltnamegt is the name for the CloudFormation stack such as cluster-vpc You need thename of this stack if you remove the cluster

lttemplategt is the relative path to and name of the CloudFormation template YAML filethat you saved

ltparametersgt is the relative path to and name of the CloudFormation parameters JSONfile

Example output

[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": "1" 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": "12" 6
  }
]

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

CHAPTER 1. INSTALLING ON AWS

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

VpcId

The ID of your VPC.

PublicSubnetIds

The IDs of the new public subnets.

PrivateSubnetIds

The IDs of the new private subnets.
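A quick way to pull these values out of the describe-stacks JSON is a small dictionary comprehension; the response below is a trimmed, hypothetical example of the CLI's output shape:

```python
import json

# A trimmed, hypothetical describe-stacks response; real output comes from
# `aws cloudformation describe-stacks --stack-name <name>`.
response = json.loads("""
{"Stacks": [{"StackStatus": "CREATE_COMPLETE",
             "Outputs": [
               {"OutputKey": "VpcId", "OutputValue": "vpc-0a1b2c3d"},
               {"OutputKey": "PublicSubnetIds", "OutputValue": "subnet-aaa"},
               {"OutputKey": "PrivateSubnetIds", "OutputValue": "subnet-bbb"}]}]}
""")

stack = response["Stacks"][0]
assert stack["StackStatus"] == "CREATE_COMPLETE"

# Turn the Outputs list into a plain key/value mapping.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["VpcId"])  # vpc-0a1b2c3d
```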

1.8.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.40. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number


Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId:
        Fn::GetAtt:
        - EIP
        - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId:
        Fn::GetAtt:
        - EIP2
        - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc

CHAPTER 1 INSTALLING ON AWS

281

  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId:
        Fn::GetAtt:
        - EIP3
        - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: "AWS::EC2::VPCEndpoint"


1.8.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).

    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref "AWS::Region"
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]]
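The template rejects out-of-range CIDR values through the VpcCidr AllowedPattern; the same regular expression can be used locally to validate a value before launching the stack. A sketch, with the pattern copied from the template's Parameters section:

```python
import re

# The VpcCidr AllowedPattern from the template: a dotted quad followed by
# a /16 to /24 prefix length.
CIDR_PATTERN = re.compile(
    r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}"
    r"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
    r"(\/(1[6-9]|2[0-4]))$"
)

print(bool(CIDR_PATTERN.match("10.0.0.0/16")))   # True
print(bool(CIDR_PATTERN.match("10.0.0.0/8")))    # False: prefix outside /16-/24
print(bool(CIDR_PATTERN.match("300.0.0.0/16")))  # False: invalid octet
```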



NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
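The CLI reports the zone as /hostedzone/<id>; if you script this step, the bare ID is simply the final path segment. A sketch with a hypothetical value:

```python
# A hypothetical value copied from the example output; the real string comes
# from `aws route53 list-hosted-zones-by-name`.
hosted_zone = "/hostedzone/Z21IXYZABCZ2A4"

# The bare zone ID is the part after the last slash.
zone_id = hosted_zone.rsplit("/", 1)[-1]
print(zone_id)  # Z21IXYZABCZ2A4
```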

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1

mycluster.example.com.  False   100
HOSTEDZONES     65F8F38E-2268-B835-E15C-AB55336FCBFA    /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10

[
  {
    "ParameterKey": "ClusterName", 1
    "ParameterValue": "mycluster" 2
  },
  {
    "ParameterKey": "InfrastructureName", 3
    "ParameterValue": "mycluster-<random_string>" 4
  },
  {
    "ParameterKey": "HostedZoneId", 5
    "ParameterValue": "<random_string>" 6
  },
  {
    "ParameterKey": "HostedZoneName", 7


1 A short, representative cluster name to use for host names, etc.

2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

5 The Route 53 public zone ID to register the targets with.

6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

7 The Route 53 zone to register the targets with.

8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

9 The public subnets that you created for your VPC.

10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

11 The private subnets that you created for your VPC.

12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

13 The VPC that you created for the cluster.

14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
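The templates constrain ClusterName and InfrastructureName with the AllowedPattern ^([a-zA-Z][a-zA-Z0-9-]{0,26})$: alphanumeric plus hyphens, starting with a letter, at most 27 characters. A short sketch for pre-checking a value before writing the parameter file:

```python
import re

# AllowedPattern for ClusterName/InfrastructureName in the templates:
# starts with a letter, then letters, digits, or hyphens, 27 characters max.
NAME_PATTERN = re.compile(r"^[a-zA-Z][a-zA-Z0-9-]{0,26}$")

print(bool(NAME_PATTERN.match("mycluster-vw9j6")))  # True
print(bool(NAME_PATTERN.match("9cluster")))         # False: starts with a digit
print(bool(NAME_PATTERN.match("a" * 28)))           # False: longer than 27 characters
```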

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

    "ParameterValue": "example.com" 8
  },
  {
    "ParameterKey": "PublicSubnets", 9
    "ParameterValue": "subnet-<random_string>" 10
  },
  {
    "ParameterKey": "PrivateSubnets", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "VpcId", 13
    "ParameterValue": "vpc-<random_string>" 14
  }
]



IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

PrivateHostedZoneId

Hosted zone ID for the private DNS.

ExternalApiLoadBalancerName

Full name of the external API load balancer.

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183

$ aws cloudformation describe-stacks --stack-name <name>


InternalApiLoadBalancerName

Full name of the internal API load balancer.

ApiServerDnsName

Full host name of the API server.

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn

ARN of the external API target group.

InternalApiTargetGroupArn

ARN of the internal API target group.

InternalServiceTargetGroupArn

ARN of the internal service target group.

1.8.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.41. CloudFormation template for the network and load balancers

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata AWSCloudFormationInterface ParameterGroups - Label default Cluster Information Parameters - ClusterName - InfrastructureName - Label default Network Configuration Parameters - VpcId - PublicSubnets - PrivateSubnets - Label default DNS Parameters - HostedZoneName - HostedZoneId ParameterLabels ClusterName default Cluster Name InfrastructureName default Infrastructure Name VpcId default VPC ID PublicSubnets default Public Subnets PrivateSubnets default Private Subnets HostedZoneName default Public Hosted Zone Name HostedZoneId default Public Hosted Zone ID

Resources


ExtApiElb Type AWSElasticLoadBalancingV2LoadBalancer Properties Name Join [- [Ref InfrastructureName ext]] IpAddressType ipv4 Subnets Ref PublicSubnets Type network

IntApiElb Type AWSElasticLoadBalancingV2LoadBalancer Properties Name Join [- [Ref InfrastructureName int]] Scheme internal IpAddressType ipv4 Subnets Ref PrivateSubnets Type network

IntDns Type AWSRoute53HostedZone Properties HostedZoneConfig Comment Managed by CloudFormation Name Join [ [Ref ClusterName Ref HostedZoneName]] HostedZoneTags - Key Name Value Join [- [Ref InfrastructureName int]] - Key Join [ [kubernetesiocluster Ref InfrastructureName]] Value owned VPCs - VPCId Ref VpcId VPCRegion Ref AWSRegion

ExternalApiServerRecord Type AWSRoute53RecordSetGroup Properties Comment Alias record for the API server HostedZoneId Ref HostedZoneId RecordSets - Name Join [ [api Ref ClusterName Join [ [Ref HostedZoneName ]]] ] Type A AliasTarget HostedZoneId GetAtt ExtApiElbCanonicalHostedZoneID DNSName GetAtt ExtApiElbDNSName

InternalApiServerRecord Type AWSRoute53RecordSetGroup Properties Comment Alias record for the API server HostedZoneId Ref IntDns RecordSets - Name Join [


[api Ref ClusterName Join [ [Ref HostedZoneName ]]] ] Type A AliasTarget HostedZoneId GetAtt IntApiElbCanonicalHostedZoneID DNSName GetAtt IntApiElbDNSName - Name Join [ [api-int Ref ClusterName Join [ [Ref HostedZoneName ]]] ] Type A AliasTarget HostedZoneId GetAtt IntApiElbCanonicalHostedZoneID DNSName GetAtt IntApiElbDNSName

ExternalApiListener Type AWSElasticLoadBalancingV2Listener Properties DefaultActions - Type forward TargetGroupArn Ref ExternalApiTargetGroup LoadBalancerArn Ref ExtApiElb Port 6443 Protocol TCP

ExternalApiTargetGroup Type AWSElasticLoadBalancingV2TargetGroup Properties HealthCheckIntervalSeconds 10 HealthCheckPath readyz HealthCheckPort 6443 HealthCheckProtocol HTTPS HealthyThresholdCount 2 UnhealthyThresholdCount 2 Port 6443 Protocol TCP TargetType ip VpcId Ref VpcId TargetGroupAttributes - Key deregistration_delaytimeout_seconds Value 60

InternalApiListener Type AWSElasticLoadBalancingV2Listener Properties DefaultActions - Type forward TargetGroupArn Ref InternalApiTargetGroup LoadBalancerArn Ref IntApiElb


Port 6443 Protocol TCP

InternalApiTargetGroup Type AWSElasticLoadBalancingV2TargetGroup Properties HealthCheckIntervalSeconds 10 HealthCheckPath readyz HealthCheckPort 6443 HealthCheckProtocol HTTPS HealthyThresholdCount 2 UnhealthyThresholdCount 2 Port 6443 Protocol TCP TargetType ip VpcId Ref VpcId TargetGroupAttributes - Key deregistration_delaytimeout_seconds Value 60

InternalServiceInternalListener Type AWSElasticLoadBalancingV2Listener Properties DefaultActions - Type forward TargetGroupArn Ref InternalServiceTargetGroup LoadBalancerArn Ref IntApiElb Port 22623 Protocol TCP

InternalServiceTargetGroup Type AWSElasticLoadBalancingV2TargetGroup Properties HealthCheckIntervalSeconds 10 HealthCheckPath healthz HealthCheckPort 22623 HealthCheckProtocol HTTPS HealthyThresholdCount 2 UnhealthyThresholdCount 2 Port 22623 Protocol TCP TargetType ip VpcId Ref VpcId TargetGroupAttributes - Key deregistration_delaytimeout_seconds Value 60

RegisterTargetLambdaIamRole Type AWSIAMRole Properties RoleName Join [- [Ref InfrastructureName nlb lambda role]] AssumeRolePolicyDocument


Version 2012-10-17 Statement - Effect Allow Principal Service - lambdaamazonawscom Action - stsAssumeRole Path Policies - PolicyName Join [- [Ref InfrastructureName master policy]] PolicyDocument Version 2012-10-17 Statement - Effect Allow Action [ elasticloadbalancingRegisterTargets elasticloadbalancingDeregisterTargets ] Resource Ref InternalApiTargetGroup - Effect Allow Action [ elasticloadbalancingRegisterTargets elasticloadbalancingDeregisterTargets ] Resource Ref InternalServiceTargetGroup - Effect Allow Action [ elasticloadbalancingRegisterTargets elasticloadbalancingDeregisterTargets ] Resource Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterTargetLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

RegisterSubnetTagsLambdaIamRole Type AWSIAMRole Properties RoleName Join [- [Ref InfrastructureName subnet-tags-lambda-role]] AssumeRolePolicyDocument Version 2012-10-17 Statement - Effect Allow Principal Service - lambdaamazonawscom Action - stsAssumeRole Path Policies - PolicyName Join [- [Ref InfrastructureName subnet-tagging-policy]] PolicyDocument Version 2012-10-17 Statement - Effect Allow Action [ ec2DeleteTags ec2CreateTags ] Resource arnawsec2subnet - Effect Allow Action [ ec2DescribeSubnets ec2DescribeTags ] Resource

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterSubnetTagsLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup
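The RegisterNlbIpTargets handler's create/delete branching can be exercised locally with a stub standing in for boto3's elbv2 client. This is a simplified sketch of the Lambda logic, not the function itself: the cfnresponse reporting is omitted, and the ARN below is a made-up placeholder:

```python
# A stub that records calls instead of talking to AWS, standing in for
# boto3.client('elbv2').
class StubElb:
    def __init__(self):
        self.calls = []

    def register_targets(self, TargetGroupArn, Targets):
        self.calls.append(("register", TargetGroupArn, Targets))

    def deregister_targets(self, TargetGroupArn, Targets):
        self.calls.append(("deregister", TargetGroupArn, Targets))

def handle(event, elb):
    # Mirrors the custom-resource handler: register on Create, deregister on Delete.
    props = event["ResourceProperties"]
    targets = [{"Id": props["TargetIp"]}]
    if event["RequestType"] == "Delete":
        elb.deregister_targets(TargetGroupArn=props["TargetArn"], Targets=targets)
    elif event["RequestType"] == "Create":
        elb.register_targets(TargetGroupArn=props["TargetArn"], Targets=targets)

elb = StubElb()
handle({"RequestType": "Create",
        "ResourceProperties": {
            # Hypothetical target group ARN and IP, for illustration only.
            "TargetArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/demo/abc123",
            "TargetIp": "10.0.0.10"}}, elb)
print(elb.calls[0][0])  # register
```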


IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Additional resources

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.8.10. Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 The CIDR block for the VPC.

4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.

5 The private subnets that you created for your VPC.

6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

7 The VPC that you created for the cluster.

8 Specify the VpcId value from the output of the CloudFormation template for the VPC.

2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM 4



<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile

1.8.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.42. CloudFormation template for security objects

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

1.8.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.20. RHCOS AMIs

AWS zone          AWS AMI

af-south-1        ami-09921c9c1c36e695c

ap-east-1         ami-01ee8446e9af6b197

ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa

eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1

1.8.12. Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS, load balancers, and listeners in AWS

You created the security groups and roles required for your cluster in AWS

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.

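Before uploading, you can sanity-check that the bootstrap.ign file is well-formed JSON and confirm its Ignition spec version. This is an optional sketch, not part of the documented procedure; the path argument is a placeholder for your own installation directory:

```python
import json

def check_ignition(path):
    """Parse an Ignition config and return its spec version.

    Raises ValueError if the file has no ignition.version key,
    and json.JSONDecodeError if it is not valid JSON.
    """
    with open(path) as f:
        config = json.load(f)
    version = config.get("ignition", {}).get("version")
    if version is None:
        raise ValueError("not an Ignition config: missing ignition.version")
    return version

# Hypothetical usage; substitute your installation directory:
# print(check_ignition("<installation_directory>/bootstrap.ign"))
```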
a. Create the bucket:

$ aws s3 mb s3://<cluster-name>-infra 1

1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

b. Upload the bootstrap.ign Ignition config file to the bucket:

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

$ aws s3 ls s3://<cluster-name>-infra/

Example output

2019-04-03 16:15:16     314878 bootstrap.ign

2. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },
  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4 Specify a valid AWS::EC2::Image::Id value.

5 CIDR block to allow SSH access to the bootstrap node.

6 Specify a CIDR block in the format x.x.x.x/16-24.

7 The public subnet that is associated with your VPC to launch the bootstrap node into.

8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9 The master security group ID (for registering temporary rules).

10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11 The VPC created resources will belong to.

12 Specify the VpcId value from the output of the CloudFormation template for the VPC.

13 Location to fetch bootstrap Ignition config file from.

14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15 Whether or not to register a network load balancer (NLB).

  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]


16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17 The ARN for NLB IP target registration lambda group.

18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19 The ARN for external API load balancer target group.

20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21 The ARN for internal API load balancer target group.

22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23 The ARN for internal service load balancer target group.

24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

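The numbered callouts above are annotations only and must not appear in the file you pass to CloudFormation. As a hedged illustration, the same parameter file can also be generated programmatically; every value below is a placeholder that you must replace with the outputs of your own stacks:

```python
import json

# Placeholder values -- substitute the outputs of your own
# CloudFormation stacks before using this file.
parameters = [
    {"ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-abc12"},
    {"ParameterKey": "RhcosAmi", "ParameterValue": "ami-0123456789abcdef0"},
    {"ParameterKey": "AllowedBootstrapSshCidr", "ParameterValue": "0.0.0.0/0"},
    {"ParameterKey": "PublicSubnet", "ParameterValue": "subnet-0123456789abcdef0"},
    {"ParameterKey": "MasterSecurityGroupId", "ParameterValue": "sg-0123456789abcdef0"},
    {"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"},
    {"ParameterKey": "BootstrapIgnitionLocation", "ParameterValue": "s3://mycluster-infra/bootstrap.ign"},
    {"ParameterKey": "AutoRegisterELB", "ParameterValue": "yes"},
]

# Write the ParameterKey/ParameterValue list in the shape that
# `aws cloudformation create-stack --parameters file://...` expects.
with open("bootstrap-params.json", "w") as f:
    json.dump(parameters, f, indent=2)
```

The file name bootstrap-params.json is arbitrary; pass whatever name you choose to the create-stack command later in this procedure.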
3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line

1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4


Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

BootstrapInstanceId

The bootstrap Instance ID.

BootstrapPublicIp

The bootstrap node public IP address.

BootstrapPrivateIp

The bootstrap node private IP address.

1.8.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.43. CloudFormation template for the bootstrap machine

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.


    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:


      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"


            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: "0.0.0.0/0"
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.8.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.

4 Specify an AWS::EC2::Image::Id value.

5 Whether or not to perform DNS etcd registration.

    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]


6 Specify yes or no. If you specify yes, you must provide hosted zone information.

7 The Route 53 private zone ID to register the etcd targets with.

8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.

9 The Route 53 zone to register the targets with.

10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

11 13 15 A subnet, preferably private, to launch the control plane machines on.

12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

17 The master security group ID to associate with master nodes.

18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

19 The location to fetch control plane Ignition config file from.

20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.

21 The base64 encoded certificate authority string to use.

22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

23 The IAM profile to associate with master nodes.

24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

25 The type of AWS instance to use for the control plane machines.

26 Allowed values:

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.xlarge

m5.2xlarge


m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

27 Whether or not to register a network load balancer (NLB).

28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

29 The ARN for NLB IP target registration lambda group.

30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

31 The ARN for external API load balancer target group.

32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

33 The ARN for internal API load balancer target group.

34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

35 The ARN for internal service load balancer target group.

36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

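Callout 22 refers to the long data:text/plain;charset=utf-8;base64,... string embedded in the master.ign file. Rather than copying it by hand, you can extract it with a short script. This is a sketch under the assumption that the value is stored at ignition.security.tls.certificateAuthorities in the Ignition config, which is the layout used by Ignition spec 3.x configs; verify the structure against your own master.ign:

```python
import json

def certificate_authority(ign_path):
    """Return the base64 CA data URL embedded in an Ignition config.

    Assumes the Ignition 3.x layout:
    ignition.security.tls.certificateAuthorities[0].source
    """
    with open(ign_path) as f:
        config = json.load(f)
    cas = config["ignition"]["security"]["tls"]["certificateAuthorities"]
    return cas[0]["source"]

# Hypothetical usage; substitute your installation directory:
# print(certificate_authority("<installation_directory>/master.ign"))
```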

2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

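Step 3 above amounts to appending one line to the AllowedValues list of the MasterInstanceType parameter in the saved template file. As a hedged sketch of that edit (the file name and instance type are placeholders; the string matching assumes the YAML layout shown later in this topic, with AllowedValues entries indented by four spaces):

```python
def add_allowed_value(template_text, instance_type):
    """Insert an instance type into the MasterInstanceType AllowedValues
    list of a CloudFormation template, immediately after the
    'AllowedValues:' line that follows 'MasterInstanceType:'.
    """
    lines = template_text.splitlines()
    out, in_master = [], False
    for line in lines:
        out.append(line)
        if line.strip() == "MasterInstanceType:":
            in_master = True
        elif in_master and line.strip() == "AllowedValues:":
            out.append(f'    - "{instance_type}"')
            in_master = False
    return "\n".join(out)

# Hypothetical usage:
# text = open("control-plane-template.yaml").read()
# open("control-plane-template.yaml", "w").write(add_allowed_value(text, "m5.4xlarge"))
```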
4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line

1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

1.8.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.44. CloudFormation template for control plane machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)


Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"

OpenShift Container Platform 4.6 Installing on AWS

    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet

    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["_etcd-server-ssl._tcp.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [
        " ", ["0 10 2380", !Join ["", ["etcd-0.", !Ref PrivateHostedZoneName]]]
      ]
      - !Join [
        " ", ["0 10 2380", !Join ["", ["etcd-1.", !Ref PrivateHostedZoneName]]]
      ]
      - !Join [
        " ", ["0 10 2380", !Join ["", ["etcd-2.", !Ref PrivateHostedZoneName]]]
      ]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["etcd-0.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["etcd-1.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["etcd-2.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:


1.8.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

Procedure

1 Create a JSON file that contains the parameter values that the CloudFormation template requires:

      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.

4 Specify an AWS::EC2::Image::Id value.

5 A subnet, preferably private, to launch the worker nodes on.

6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

7 The worker security group ID to associate with worker nodes.

8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

9 The location to fetch the bootstrap Ignition config file from.

10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
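The infrastructure name described in callouts 1 and 2 can also be read directly from the metadata.json file that the installation program writes to the installation directory. A minimal sketch, assuming jq is installed and using an inline sample file in place of the real <installation_directory>/metadata.json:

```shell
# Stand-in for the real <installation_directory>/metadata.json written by
# the installation program; the infraID field holds the
# <cluster-name>-<random-string> value.
printf '{"infraID":"mycluster-vw9j6"}\n' > metadata.json

# Extract the infrastructure name with jq.
INFRA_NAME=$(jq -r .infraID metadata.json)
echo "$INFRA_NAME"
```

The extracted value is what you paste into the InfrastructureName ParameterValue field.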

  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]


11 Base64 encoded certificate authority string to use.

12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.

13 The IAM profile to associate with worker nodes.

14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

15 The type of AWS instance to use for the compute machines.

16 Allowed values:

m4.large

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.large

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.large

c4.xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.large

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.
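The CertificateAuthorities value (callout 12) does not need to be assembled by hand; it is present in the worker.ign file. A sketch of reading it out, using an inline sample in place of the real <installation_directory>/worker.ign and assuming jq is available:

```shell
# Stand-in for the real <installation_directory>/worker.ign; the actual
# base64 payload is much longer than shown here.
cat > worker.ign <<'EOF'
{"ignition":{"security":{"tls":{"certificateAuthorities":[{"source":"data:text/plain;charset=utf-8;base64,ABCxYz=="}]}},"version":"3.1.0"}}
EOF

# Pull out the data URL to paste into the CertificateAuthorities
# ParameterValue field of the parameters JSON file.
CA_VALUE=$(jq -r '.ignition.security.tls.certificateAuthorities[0].source' worker.ign)
echo "$CA_VALUE"
```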

2 Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

3 If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4 Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

IMPORTANT

You must enter the command on a single line.

1 <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

NOTE

The CloudFormation template creates a stack that represents one worker node.

5 Confirm that the template components exist:

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

$ aws cloudformation describe-stacks --stack-name <name>


6 Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
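Because each stack represents exactly one worker node, repeated launches differ only in the stack name. A sketch of that loop; the stack names, file names, and worker count are illustrative, and the aws invocation is only echoed here rather than run:

```shell
# Build one create-stack command per worker node; only --stack-name varies.
# Replace echo with the real aws invocation when running against AWS.
for i in 0 1; do
  STACK_NAME="cluster-worker-$i"
  echo aws cloudformation create-stack --stack-name "$STACK_NAME" \
    --template-body file://worker.yaml \
    --parameters file://worker-params.json
done
```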

1.8.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.45. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the worker nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The worker security group ID to associate with worker nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with worker nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"

    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"

      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp

1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

You created the worker nodes.

Procedure

1 Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2 To view different installation details, specify warn, debug, or error instead of info.

$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
    --log-level=info 2

Example output

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s

If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

1.8.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The


kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1 Export the kubeadmin credentials:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2 Verify you can run oc commands successfully using the exported configuration:

Example output

1.8.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1 Confirm that the cluster recognizes the machines:

Example output

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin

$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2 Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3 If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc get csr

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

$ oc adm certificate approve <csr_name> 1


<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

NOTE

Some Operators might not become available until some CSRs are approved.

4 Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

Example output

5 If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

6 After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

Example output

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

$ oc get csr

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

$ oc adm certificate approve <csr_name> 1

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

$ oc get nodes

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0


NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.8.18. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1 Watch the cluster components come online:

Example output

master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

$ watch -n5 oc get clusteroperators

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0     True        False         False      3h56m
cloud-credential                           4.6.0     True        False         False      29h
cluster-autoscaler                         4.6.0     True        False         False      29h
config-operator                            4.6.0     True        False         False      6h39m
console                                    4.6.0     True        False         False      3h59m
csi-snapshot-controller                    4.6.0     True        False         False      4h12m
dns                                        4.6.0     True        False         False      4h15m
etcd                                       4.6.0     True        False         False      29h
image-registry                             4.6.0     True        False         False      3h59m
ingress                                    4.6.0     True        False         False      4h30m
insights                                   4.6.0     True        False         False      29h
kube-apiserver                             4.6.0     True        False         False      29h
kube-controller-manager                    4.6.0     True        False         False      29h
kube-scheduler                             4.6.0     True        False         False      29h
kube-storage-version-migrator              4.6.0     True        False         False      4h2m
machine-api                                4.6.0     True        False         False      29h
machine-approver                           4.6.0     True        False         False      6h34m
machine-config                             4.6.0     True        False         False      3h56m
marketplace                                4.6.0     True        False         False      4h2m
monitoring                                 4.6.0     True        False         False      6h31m
network                                    4.6.0     True        False         False      29h
node-tuning                                4.6.0     True        False         False      4h30m


2 Configure the Operators that are not available.

1.8.18.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure.

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1 Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
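A lifecycle configuration of that shape might look like the following sketch; the rule ID is illustrative, and you would apply the file with aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration file://lifecycle.json:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}
```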

2 Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

Example configuration

openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m

$ oc edit configs.imageregistry.operator.openshift.io/cluster

storage:


WARNING

To secure your registry images in AWS, block public access to the S3 bucket.

1.8.18.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters.

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again.

1.8.19. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster.

Procedure

1 Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

  s3:
    bucket: <bucket-name>
    region: <region-name>

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found


Delete the stack by using the AWS CLI:

1 <name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console.

1.8.20. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc).

You installed the jq package.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1 Determine the routes to create:

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output

2 Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

$ aws cloudformation delete-stack --stack-name <name> 1

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>

$ oc -n openshift-ingress get service router-default


Example output

3 Locate the hosted zone ID for the load balancer:

1 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID.

4 Obtain the public hosted zone ID for your cluster's domain:

1 For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.
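With --output text, the value comes back in the form /hostedzone/<id>; a small sketch of trimming it down to the bare zone ID that the later change-resource-record-sets commands expect (the sample value is taken from the example output above):

```shell
# Stand-in for the list-hosted-zones-by-name --output text value.
ZONE_PATH="/hostedzone/Z3URY6TWQ91KVV"

# Strip everything up to the last slash to keep only the zone ID.
PUBLIC_ZONE_ID="${ZONE_PATH##*/}"
echo "$PUBLIC_ZONE_ID"
```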

5 Add the alias records to your private zone

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGErouter-default LoadBalancer 1723062215 ab328us-east-2elbamazonawscom 8031499TCP44330693TCP 5m

$ aws elb describe-load-balancers | jq -r LoadBalancerDescriptions[] | select(DNSName == ltexternal_ipgt)CanonicalHostedZoneNameID 1

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name --dns-name ltdomain_namegt 1 --query HostedZones[ ConfigPrivateZone = `true` ampamp Name == `ltdomain_namegt`]Id 2 --output text

/hostedzone/Z3URY6TWQ91KVV
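The text output is the full hosted zone path; the zone ID itself is the final path segment. A minimal sketch of extracting it:

```python
# Output of `aws route53 list-hosted-zones-by-name ... --output text` (example value).
raw_output = "/hostedzone/Z3URY6TWQ91KVV"

# The hosted zone ID is everything after the last "/".
public_zone_id = raw_output.rsplit("/", 1)[-1]
print(public_zone_id)  # Z3URY6TWQ91KVV
```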

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>", 3

OpenShift Container Platform 4.6 Installing on AWS


1 For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

2 For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

4 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

1 For <public_hosted_zone_id>, specify the public hosted zone for your domain.

2 For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

3 For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

4 For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
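The change batch passed to both commands has the same shape. The sketch below builds it programmatically with hypothetical values, which also shows why the record name starts with \052 (the octal escape that Route 53 uses for the wildcard label *) and why the DNS name carries a trailing period:

```python
import json

# Hypothetical values; substitute the IDs and domain from the previous steps.
cluster_domain = "mycluster.example.com"
lb_hosted_zone_id = "Z3AADJGX6KTTL2"             # load balancer hosted zone ID
external_ip = "ab3.us-east-2.elb.amazonaws.com"  # Ingress Operator load balancer address

# Route 53 stores the wildcard label "*" as its octal escape, \052.
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "\\052.apps." + cluster_domain,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": lb_hosted_zone_id,
                # The trailing period marks the name as fully qualified.
                "DNSName": external_ip + ".",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
print(json.dumps(change_batch, indent=2))
```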

>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>", 3
>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

1.8.21. Completing an AWS installation on user-provisioned infrastructure


After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure

You installed the oc CLI

Procedure

1. From the directory that contains the installation program, complete the cluster installation:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

2. Register your cluster on the Cluster registration page.

1.8.22. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


You have access to the installation host

You completed a cluster installation and all cluster Operators are available

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.8.23. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.8.24. Next steps

Validate an installation

Customize your cluster

If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.

If necessary you can opt out of remote health reporting

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None


1.9. UNINSTALLING A CLUSTER ON AWS

You can remove a cluster that you deployed to Amazon Web Services (AWS)

1.9.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud

Prerequisites

Have a copy of the installation program that you used to deploy the cluster

Have the files that the installation program generated when you created your cluster

Procedure

1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2 To view different details, specify warn, debug, or error instead of info.

NOTE

You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
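As an illustration of why metadata.json matters, the installer records cluster identifiers there that the destroy command reads back. The fragment below is hypothetical; the real file is written by the installer and contains additional fields:

```python
import json

# Hypothetical metadata.json content; field names are modeled on what the
# installer writes, and the values here are made up for illustration.
metadata_text = '{"clusterName": "mycluster", "infraID": "mycluster-vw9p4"}'
metadata = json.loads(metadata_text)

# The destroy command uses identifiers like these to locate the cluster's
# AWS resources before deleting them.
print(metadata["clusterName"], metadata["infraID"])
```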

2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.

$ ./openshift-install destroy cluster --dir=<installation_directory> --log-level=info 1 2


  • Table of Contents
• CHAPTER 1. INSTALLING ON AWS
  • 1.1. CONFIGURING AN AWS ACCOUNT
    • 1.1.1. Configuring Route 53
      • 1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53
    • 1.1.2. AWS account limits
    • 1.1.3. Required AWS permissions
    • 1.1.4. Creating an IAM user
    • 1.1.5. Supported AWS regions
    • 1.1.6. Next steps
  • 1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
    • 1.2.1. Prerequisites
    • 1.2.2. Internet and Telemetry access for OpenShift Container Platform
    • 1.2.3. Generating an SSH private key and adding it to the agent
    • 1.2.4. Obtaining the installation program
    • 1.2.5. Creating the installation configuration file
      • 1.2.5.1. Installation configuration parameters
      • 1.2.5.2. Sample customized install-config.yaml file for AWS
    • 1.2.6. Deploying the cluster
    • 1.2.7. Installing the OpenShift CLI by downloading the binary
      • 1.2.7.1. Installing the OpenShift CLI on Linux
      • 1.2.7.2. Installing the OpenShift CLI on Windows
      • 1.2.7.3. Installing the OpenShift CLI on macOS
    • 1.2.8. Logging in to the cluster by using the CLI
    • 1.2.9. Logging in to the cluster by using the web console
    • 1.2.10. Next steps
  • 1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
    • 1.3.1. Prerequisites
    • 1.3.2. Internet and Telemetry access for OpenShift Container Platform
    • 1.3.3. Generating an SSH private key and adding it to the agent
    • 1.3.4. Obtaining the installation program
    • 1.3.5. Creating the installation configuration file
      • 1.3.5.1. Installation configuration parameters
      • 1.3.5.2. Network configuration parameters
      • 1.3.5.3. Sample customized install-config.yaml file for AWS
    • 1.3.6. Modifying advanced network configuration parameters
    • 1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
    • 1.3.8. Cluster Network Operator configuration
      • 1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider
      • 1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
      • 1.3.8.3. Cluster Network Operator example configuration
    • 1.3.9. Configuring hybrid networking with OVN-Kubernetes
    • 1.3.10. Deploying the cluster
    • 1.3.11. Installing the OpenShift CLI by downloading the binary
      • 1.3.11.1. Installing the OpenShift CLI on Linux
      • 1.3.11.2. Installing the OpenShift CLI on Windows
      • 1.3.11.3. Installing the OpenShift CLI on macOS
    • 1.3.12. Logging in to the cluster by using the CLI
    • 1.3.13. Logging in to the cluster by using the web console
    • 1.3.14. Next steps
  • 1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
    • 1.4.1. Prerequisites
    • 1.4.2. About using a custom VPC
      • 1.4.2.1. Requirements for using your VPC
      • 1.4.2.2. VPC validation
      • 1.4.2.3. Division of permissions
      • 1.4.2.4. Isolation between clusters
    • 1.4.3. Internet and Telemetry access for OpenShift Container Platform
    • 1.4.4. Generating an SSH private key and adding it to the agent
    • 1.4.5. Obtaining the installation program
    • 1.4.6. Creating the installation configuration file
      • 1.4.6.1. Installation configuration parameters
      • 1.4.6.2. Sample customized install-config.yaml file for AWS
      • 1.4.6.3. Configuring the cluster-wide proxy during installation
    • 1.4.7. Deploying the cluster
    • 1.4.8. Installing the OpenShift CLI by downloading the binary
      • 1.4.8.1. Installing the OpenShift CLI on Linux
      • 1.4.8.2. Installing the OpenShift CLI on Windows
      • 1.4.8.3. Installing the OpenShift CLI on macOS
    • 1.4.9. Logging in to the cluster by using the CLI
    • 1.4.10. Logging in to the cluster by using the web console
    • 1.4.11. Next steps
  • 1.5. INSTALLING A PRIVATE CLUSTER ON AWS
    • 1.5.1. Prerequisites
    • 1.5.2. Private clusters
      • 1.5.2.1. Private clusters in AWS
    • 1.5.3. About using a custom VPC
      • 1.5.3.1. Requirements for using your VPC
      • 1.5.3.2. VPC validation
      • 1.5.3.3. Division of permissions
      • 1.5.3.4. Isolation between clusters
    • 1.5.4. Internet and Telemetry access for OpenShift Container Platform
    • 1.5.5. Generating an SSH private key and adding it to the agent
    • 1.5.6. Obtaining the installation program
    • 1.5.7. Manually creating the installation configuration file
      • 1.5.7.1. Installation configuration parameters
      • 1.5.7.2. Sample customized install-config.yaml file for AWS
      • 1.5.7.3. Configuring the cluster-wide proxy during installation
    • 1.5.8. Deploying the cluster
    • 1.5.9. Installing the OpenShift CLI by downloading the binary
      • 1.5.9.1. Installing the OpenShift CLI on Linux
      • 1.5.9.2. Installing the OpenShift CLI on Windows
      • 1.5.9.3. Installing the OpenShift CLI on macOS
    • 1.5.10. Logging in to the cluster by using the CLI
    • 1.5.11. Logging in to the cluster by using the web console
    • 1.5.12. Next steps
  • 1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION
    • 1.6.1. Prerequisites
    • 1.6.2. AWS government regions
    • 1.6.3. Private clusters
      • 1.6.3.1. Private clusters in AWS
    • 1.6.4. About using a custom VPC
      • 1.6.4.1. Requirements for using your VPC
      • 1.6.4.2. VPC validation
      • 1.6.4.3. Division of permissions
      • 1.6.4.4. Isolation between clusters
    • 1.6.5. Internet and Telemetry access for OpenShift Container Platform
    • 1.6.6. Generating an SSH private key and adding it to the agent
    • 1.6.7. Obtaining the installation program
    • 1.6.8. Manually creating the installation configuration file
      • 1.6.8.1. Installation configuration parameters
      • 1.6.8.2. Sample customized install-config.yaml file for AWS
      • 1.6.8.3. AWS regions without a published RHCOS AMI
      • 1.6.8.4. Uploading a custom RHCOS AMI in AWS
      • 1.6.8.5. Configuring the cluster-wide proxy during installation
    • 1.6.9. Deploying the cluster
    • 1.6.10. Installing the OpenShift CLI by downloading the binary
      • 1.6.10.1. Installing the OpenShift CLI on Linux
      • 1.6.10.2. Installing the OpenShift CLI on Windows
      • 1.6.10.3. Installing the OpenShift CLI on macOS
    • 1.6.11. Logging in to the cluster by using the CLI
    • 1.6.12. Logging in to the cluster by using the web console
    • 1.6.13. Next steps
  • 1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES
    • 1.7.1. Prerequisites
    • 1.7.2. Internet and Telemetry access for OpenShift Container Platform
    • 1.7.3. Required AWS infrastructure components
      • 1.7.3.1. Cluster machines
      • 1.7.3.2. Certificate signing requests management
      • 1.7.3.3. Other infrastructure components
      • 1.7.3.4. Required AWS permissions
    • 1.7.4. Obtaining the installation program
    • 1.7.5. Generating an SSH private key and adding it to the agent
    • 1.7.6. Creating the installation files for AWS
      • 1.7.6.1. Optional: Creating a separate /var partition
      • 1.7.6.2. Creating the installation configuration file
      • 1.7.6.3. Configuring the cluster-wide proxy during installation
      • 1.7.6.4. Creating the Kubernetes manifest and Ignition config files
    • 1.7.7. Extracting the infrastructure name
    • 1.7.8. Creating a VPC in AWS
      • 1.7.8.1. CloudFormation template for the VPC
    • 1.7.9. Creating networking and load balancing components in AWS
      • 1.7.9.1. CloudFormation template for the network and load balancers
    • 1.7.10. Creating security group and roles in AWS
      • 1.7.10.1. CloudFormation template for security objects
    • 1.7.11. RHCOS AMIs for the AWS infrastructure
      • 1.7.11.1. AWS regions without a published RHCOS AMI
      • 1.7.11.2. Uploading a custom RHCOS AMI in AWS
    • 1.7.12. Creating the bootstrap node in AWS
      • 1.7.12.1. CloudFormation template for the bootstrap machine
    • 1.7.13. Creating the control plane machines in AWS
      • 1.7.13.1. CloudFormation template for control plane machines
    • 1.7.14. Creating the worker nodes in AWS
      • 1.7.14.1. CloudFormation template for worker machines
    • 1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
    • 1.7.16. Installing the OpenShift CLI by downloading the binary
      • 1.7.16.1. Installing the OpenShift CLI on Linux
      • 1.7.16.2. Installing the OpenShift CLI on Windows
      • 1.7.16.3. Installing the OpenShift CLI on macOS
    • 1.7.17. Logging in to the cluster by using the CLI
    • 1.7.18. Approving the certificate signing requests for your machines
    • 1.7.19. Initial Operator configuration
      • 1.7.19.1. Image registry storage configuration
    • 1.7.20. Deleting the bootstrap resources
    • 1.7.21. Creating the Ingress DNS Records
    • 1.7.22. Completing an AWS installation on user-provisioned infrastructure
    • 1.7.23. Logging in to the cluster by using the web console
    • 1.7.24. Additional resources
    • 1.7.25. Next steps
  • 1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT
    • 1.8.1. Prerequisites
    • 1.8.2. About installations in restricted networks
      • 1.8.2.1. Additional limits
    • 1.8.3. Internet and Telemetry access for OpenShift Container Platform
    • 1.8.4. Required AWS infrastructure components
      • 1.8.4.1. Cluster machines
      • 1.8.4.2. Certificate signing requests management
      • 1.8.4.3. Other infrastructure components
      • 1.8.4.4. Required AWS permissions
    • 1.8.5. Generating an SSH private key and adding it to the agent
    • 1.8.6. Creating the installation files for AWS
      • 1.8.6.1. Optional: Creating a separate /var partition
      • 1.8.6.2. Creating the installation configuration file
      • 1.8.6.3. Configuring the cluster-wide proxy during installation
      • 1.8.6.4. Creating the Kubernetes manifest and Ignition config files
    • 1.8.7. Extracting the infrastructure name
    • 1.8.8. Creating a VPC in AWS
      • 1.8.8.1. CloudFormation template for the VPC
    • 1.8.9. Creating networking and load balancing components in AWS
      • 1.8.9.1. CloudFormation template for the network and load balancers
    • 1.8.10. Creating security group and roles in AWS
      • 1.8.10.1. CloudFormation template for security objects
    • 1.8.11. RHCOS AMIs for the AWS infrastructure
    • 1.8.12. Creating the bootstrap node in AWS
      • 1.8.12.1. CloudFormation template for the bootstrap machine
    • 1.8.13. Creating the control plane machines in AWS
      • 1.8.13.1. CloudFormation template for control plane machines
    • 1.8.14. Creating the worker nodes in AWS
      • 1.8.14.1. CloudFormation template for worker machines
    • 1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
    • 1.8.16. Logging in to the cluster by using the CLI
    • 1.8.17. Approving the certificate signing requests for your machines
    • 1.8.18. Initial Operator configuration
      • 1.8.18.1. Image registry storage configuration
    • 1.8.19. Deleting the bootstrap resources
    • 1.8.20. Creating the Ingress DNS Records
    • 1.8.21. Completing an AWS installation on user-provisioned infrastructure
                                                                                                                                                                                          • 1822 Logging in to the cluster by using the web console
                                                                                                                                                                                          • 1823 Additional resources
                                                                                                                                                                                          • 1824 Next steps
                                                                                                                                                                                            • 19 UNINSTALLING A CLUSTER ON AWS
                                                                                                                                                                                              • 191 Removing a cluster that uses installer-provisioned infrastructure
Page 2: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6

OpenShift Container Platform 4.6

Installing on AWS

Installing OpenShift Container Platform AWS clusters

Legal Notice

Copyright © 2021 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This document provides instructions for installing and uninstalling OpenShift Container Platform clusters on Amazon Web Services.

Table of Contents

CHAPTER 1. INSTALLING ON AWS
  1.1. CONFIGURING AN AWS ACCOUNT
    1.1.1. Configuring Route 53
      1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53
    1.1.2. AWS account limits
    1.1.3. Required AWS permissions
    1.1.4. Creating an IAM user
    1.1.5. Supported AWS regions
    1.1.6. Next steps
  1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
    1.2.1. Prerequisites
    1.2.2. Internet and Telemetry access for OpenShift Container Platform
    1.2.3. Generating an SSH private key and adding it to the agent
    1.2.4. Obtaining the installation program
    1.2.5. Creating the installation configuration file
      1.2.5.1. Installation configuration parameters
      1.2.5.2. Sample customized install-config.yaml file for AWS
    1.2.6. Deploying the cluster
    1.2.7. Installing the OpenShift CLI by downloading the binary
      1.2.7.1. Installing the OpenShift CLI on Linux
      1.2.7.2. Installing the OpenShift CLI on Windows
      1.2.7.3. Installing the OpenShift CLI on macOS
    1.2.8. Logging in to the cluster by using the CLI
    1.2.9. Logging in to the cluster by using the web console
    1.2.10. Next steps
  1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
    1.3.1. Prerequisites
    1.3.2. Internet and Telemetry access for OpenShift Container Platform
    1.3.3. Generating an SSH private key and adding it to the agent
    1.3.4. Obtaining the installation program
    1.3.5. Creating the installation configuration file
      1.3.5.1. Installation configuration parameters
      1.3.5.2. Network configuration parameters
      1.3.5.3. Sample customized install-config.yaml file for AWS
    1.3.6. Modifying advanced network configuration parameters
    1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
    1.3.8. Cluster Network Operator configuration
      1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider
      1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
      1.3.8.3. Cluster Network Operator example configuration
    1.3.9. Configuring hybrid networking with OVN-Kubernetes
    1.3.10. Deploying the cluster
    1.3.11. Installing the OpenShift CLI by downloading the binary
      1.3.11.1. Installing the OpenShift CLI on Linux
      1.3.11.2. Installing the OpenShift CLI on Windows
      1.3.11.3. Installing the OpenShift CLI on macOS
    1.3.12. Logging in to the cluster by using the CLI
    1.3.13. Logging in to the cluster by using the web console
    1.3.14. Next steps
  1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
    1.4.1. Prerequisites


    1.4.2. About using a custom VPC
      1.4.2.1. Requirements for using your VPC
      1.4.2.2. VPC validation
      1.4.2.3. Division of permissions
      1.4.2.4. Isolation between clusters
    1.4.3. Internet and Telemetry access for OpenShift Container Platform
    1.4.4. Generating an SSH private key and adding it to the agent
    1.4.5. Obtaining the installation program
    1.4.6. Creating the installation configuration file
      1.4.6.1. Installation configuration parameters
      1.4.6.2. Sample customized install-config.yaml file for AWS
      1.4.6.3. Configuring the cluster-wide proxy during installation
    1.4.7. Deploying the cluster
    1.4.8. Installing the OpenShift CLI by downloading the binary
      1.4.8.1. Installing the OpenShift CLI on Linux
      1.4.8.2. Installing the OpenShift CLI on Windows
      1.4.8.3. Installing the OpenShift CLI on macOS
    1.4.9. Logging in to the cluster by using the CLI
    1.4.10. Logging in to the cluster by using the web console
    1.4.11. Next steps
  1.5. INSTALLING A PRIVATE CLUSTER ON AWS
    1.5.1. Prerequisites
    1.5.2. Private clusters
      1.5.2.1. Private clusters in AWS
        1.5.2.1.1. Limitations
    1.5.3. About using a custom VPC
      1.5.3.1. Requirements for using your VPC
      1.5.3.2. VPC validation
      1.5.3.3. Division of permissions
      1.5.3.4. Isolation between clusters
    1.5.4. Internet and Telemetry access for OpenShift Container Platform
    1.5.5. Generating an SSH private key and adding it to the agent
    1.5.6. Obtaining the installation program
    1.5.7. Manually creating the installation configuration file
      1.5.7.1. Installation configuration parameters
      1.5.7.2. Sample customized install-config.yaml file for AWS
      1.5.7.3. Configuring the cluster-wide proxy during installation
    1.5.8. Deploying the cluster
    1.5.9. Installing the OpenShift CLI by downloading the binary
      1.5.9.1. Installing the OpenShift CLI on Linux
      1.5.9.2. Installing the OpenShift CLI on Windows
      1.5.9.3. Installing the OpenShift CLI on macOS
    1.5.10. Logging in to the cluster by using the CLI
    1.5.11. Logging in to the cluster by using the web console
    1.5.12. Next steps
  1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION
    1.6.1. Prerequisites
    1.6.2. AWS government regions
    1.6.3. Private clusters
      1.6.3.1. Private clusters in AWS
        1.6.3.1.1. Limitations
    1.6.4. About using a custom VPC
      1.6.4.1. Requirements for using your VPC


      1.6.4.2. VPC validation
      1.6.4.3. Division of permissions
      1.6.4.4. Isolation between clusters
    1.6.5. Internet and Telemetry access for OpenShift Container Platform
    1.6.6. Generating an SSH private key and adding it to the agent
    1.6.7. Obtaining the installation program
    1.6.8. Manually creating the installation configuration file
      1.6.8.1. Installation configuration parameters
      1.6.8.2. Sample customized install-config.yaml file for AWS
      1.6.8.3. AWS regions without a published RHCOS AMI
      1.6.8.4. Uploading a custom RHCOS AMI in AWS
      1.6.8.5. Configuring the cluster-wide proxy during installation
    1.6.9. Deploying the cluster
    1.6.10. Installing the OpenShift CLI by downloading the binary
      1.6.10.1. Installing the OpenShift CLI on Linux
      1.6.10.2. Installing the OpenShift CLI on Windows
      1.6.10.3. Installing the OpenShift CLI on macOS
    1.6.11. Logging in to the cluster by using the CLI
    1.6.12. Logging in to the cluster by using the web console
    1.6.13. Next steps
  1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES
    1.7.1. Prerequisites
    1.7.2. Internet and Telemetry access for OpenShift Container Platform
    1.7.3. Required AWS infrastructure components
      1.7.3.1. Cluster machines
      1.7.3.2. Certificate signing requests management
      1.7.3.3. Other infrastructure components
      1.7.3.4. Required AWS permissions
    1.7.4. Obtaining the installation program
    1.7.5. Generating an SSH private key and adding it to the agent
    1.7.6. Creating the installation files for AWS
      1.7.6.1. Optional: Creating a separate /var partition
      1.7.6.2. Creating the installation configuration file
      1.7.6.3. Configuring the cluster-wide proxy during installation
      1.7.6.4. Creating the Kubernetes manifest and Ignition config files
    1.7.7. Extracting the infrastructure name
    1.7.8. Creating a VPC in AWS
      1.7.8.1. CloudFormation template for the VPC
    1.7.9. Creating networking and load balancing components in AWS
      1.7.9.1. CloudFormation template for the network and load balancers
    1.7.10. Creating security group and roles in AWS
      1.7.10.1. CloudFormation template for security objects
    1.7.11. RHCOS AMIs for the AWS infrastructure
      1.7.11.1. AWS regions without a published RHCOS AMI
      1.7.11.2. Uploading a custom RHCOS AMI in AWS
    1.7.12. Creating the bootstrap node in AWS
      1.7.12.1. CloudFormation template for the bootstrap machine
    1.7.13. Creating the control plane machines in AWS
      1.7.13.1. CloudFormation template for control plane machines
    1.7.14. Creating the worker nodes in AWS
      1.7.14.1. CloudFormation template for worker machines
    1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure


    1.7.16. Installing the OpenShift CLI by downloading the binary
      1.7.16.1. Installing the OpenShift CLI on Linux
      1.7.16.2. Installing the OpenShift CLI on Windows
      1.7.16.3. Installing the OpenShift CLI on macOS
    1.7.17. Logging in to the cluster by using the CLI
    1.7.18. Approving the certificate signing requests for your machines
    1.7.19. Initial Operator configuration
      1.7.19.1. Image registry storage configuration
        1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
        1.7.19.1.2. Configuring storage for the image registry in non-production clusters
    1.7.20. Deleting the bootstrap resources
    1.7.21. Creating the Ingress DNS Records
    1.7.22. Completing an AWS installation on user-provisioned infrastructure
    1.7.23. Logging in to the cluster by using the web console
    1.7.24. Additional resources
    1.7.25. Next steps
  1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT
    1.8.1. Prerequisites
    1.8.2. About installations in restricted networks
      1.8.2.1. Additional limits
    1.8.3. Internet and Telemetry access for OpenShift Container Platform
    1.8.4. Required AWS infrastructure components
      1.8.4.1. Cluster machines
      1.8.4.2. Certificate signing requests management
      1.8.4.3. Other infrastructure components
      1.8.4.4. Required AWS permissions
    1.8.5. Generating an SSH private key and adding it to the agent
    1.8.6. Creating the installation files for AWS
      1.8.6.1. Optional: Creating a separate /var partition
      1.8.6.2. Creating the installation configuration file
      1.8.6.3. Configuring the cluster-wide proxy during installation
      1.8.6.4. Creating the Kubernetes manifest and Ignition config files
    1.8.7. Extracting the infrastructure name
    1.8.8. Creating a VPC in AWS
      1.8.8.1. CloudFormation template for the VPC
    1.8.9. Creating networking and load balancing components in AWS
      1.8.9.1. CloudFormation template for the network and load balancers
    1.8.10. Creating security group and roles in AWS
      1.8.10.1. CloudFormation template for security objects
    1.8.11. RHCOS AMIs for the AWS infrastructure
    1.8.12. Creating the bootstrap node in AWS
      1.8.12.1. CloudFormation template for the bootstrap machine
    1.8.13. Creating the control plane machines in AWS
      1.8.13.1. CloudFormation template for control plane machines
    1.8.14. Creating the worker nodes in AWS
      1.8.14.1. CloudFormation template for worker machines
    1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
    1.8.16. Logging in to the cluster by using the CLI
    1.8.17. Approving the certificate signing requests for your machines
    1.8.18. Initial Operator configuration
      1.8.18.1. Image registry storage configuration
        1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
        1.8.18.1.2. Configuring storage for the image registry in non-production clusters


    1.8.19. Deleting the bootstrap resources
    1.8.20. Creating the Ingress DNS Records
    1.8.21. Completing an AWS installation on user-provisioned infrastructure
    1.8.22. Logging in to the cluster by using the web console
    1.8.23. Additional resources
    1.8.24. Next steps
  1.9. UNINSTALLING A CLUSTER ON AWS
    1.9.1. Removing a cluster that uses installer-provisioned infrastructure


CHAPTER 1. INSTALLING ON AWS

1.1. CONFIGURING AN AWS ACCOUNT

Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.

1.1.1. Configuring Route 53

To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.

NOTE

If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.

2. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.

3. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

4. Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.

5. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.

6. If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high-level procedure.

1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53

If you install in either the Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses the us-gov-west-1 region for Route 53 and tagging API clients.

The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string us-gov-east-1.



For more information on AWS GovCloud (US) endpoints, see Service Endpoints in the AWS documentation about GovCloud (US).

IMPORTANT

Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region.

Example Route 53 configuration

1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions.

2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region.
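The tagging-endpoint fallback rule described above can be sketched in a few lines. This is an illustration of the documented behavior, not the Ingress Operator's actual source code; the function name and structure are assumptions.

```python
from typing import Optional

# Tagging has no GovCloud (US) East endpoint, so the Operator falls back
# to the us-gov-west-1 tagging endpoint.
DEFAULT_TAGGING_ENDPOINT = "https://tagging.us-gov-west-1.amazonaws.com"


def tagging_endpoint(custom_endpoint: Optional[str]) -> str:
    """Return the tagging API endpoint to use for a GovCloud (US) cluster."""
    if custom_endpoint and "us-gov-east-1" in custom_endpoint:
        # A custom endpoint that targets us-gov-east-1 is replaced by the
        # us-gov-west-1 endpoint, per the rule above.
        return DEFAULT_TAGGING_ENDPOINT
    return custom_endpoint or DEFAULT_TAGGING_ENDPOINT
```

For example, a configured endpoint of https://tagging.us-gov-east-1.amazonaws.com resolves to the us-gov-west-1 endpoint.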

1.1.2. AWS account limits

The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.

The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.

Each entry lists the component, the number of clusters available by default, the default AWS limit, and a description.

platform:
  aws:
    region: us-gov-west-1
    serviceEndpoints:
      - name: ec2
        url: https://ec2.us-gov-west-1.amazonaws.com
      - name: elasticloadbalancing
        url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com
      - name: route53
        url: https://route53.us-gov.amazonaws.com 1
      - name: tagging
        url: https://tagging.us-gov-west-1.amazonaws.com 2


Instance Limits
Clusters available by default: Varies
Default AWS limit: Varies
Description: By default, each cluster creates the following instances:

One bootstrap machine, which is removed after installation

Three master nodes

Three worker nodes

These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.

In most regions, the bootstrap and worker machines use m4.large instances and the master machines use m4.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead.

Elastic IPs (EIPs)
Clusters available by default: 0 to 1
Default AWS limit: 5 EIPs per account
Description: To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT gateway, and each NAT gateway requires a separate elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit.

IMPORTANT

To use the us-east-1 region, you must increase the EIP limit for your account.

Virtual Private Clouds (VPCs)
Clusters available by default: 5
Default AWS limit: 5 VPCs per region
Description: Each cluster creates its own VPC.

Elastic Load Balancing (ELB/NLB)
Clusters available by default: 3
Default AWS limit: 20 per region
Description: By default, each cluster creates internal and external network load balancers for the master API server and a single classic elastic load balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers.

NAT Gateways
Clusters available by default: 5
Default AWS limit: 5 per availability zone
Description: The cluster deploys one NAT gateway in each availability zone.

Elastic Network Interfaces (ENIs)
Clusters available by default: At least 12
Default AWS limit: 350 per region
Description: The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that zone uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region.

Additional ENIs are created for additional machines and elastic load balancers that are created by cluster usage and deployed workloads.

VPC Gateway
Clusters available by default: 20
Default AWS limit: 20 per account
Description: Each cluster creates a single VPC Gateway for S3 access.

S3 buckets
Clusters available by default: 99
Default AWS limit: 100 buckets per account
Description: Because the installation process creates a temporary bucket, and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account.

Security Groups
Clusters available by default: 250
Default AWS limit: 2,500 per account
Description: Each cluster creates 10 distinct security groups.
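The ENI and EIP figures above follow simple arithmetic. The sketch below captures it as assumed formulas derived from the documented examples; it is a back-of-the-envelope aid, not an official sizing tool.

```python
def default_eni_count(availability_zones: int) -> int:
    """Estimate ENIs for a default installation: 21 base ENIs plus one per
    availability zone. The documented us-east-1 example (6 zones) yields 27."""
    return 21 + availability_zones


def required_eips(availability_zones: int) -> int:
    """One NAT gateway, and therefore one Elastic IP, per availability zone."""
    return availability_zones


print(default_eni_count(6))  # the us-east-1 example from the table: 27
print(required_eips(3))      # a three-zone highly available cluster: 3
```

With a default limit of 5 EIPs per account, the second function shows why a region with more than five availability zones requires a limit increase.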

1.1.3. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:

Example 1.1. Required EC2 permissions for installation

tag:TagResources

tag:UntagResources

ec2:AllocateAddress

ec2:AssociateAddress

ec2:AuthorizeSecurityGroupEgress

ec2:AuthorizeSecurityGroupIngress

ec2:CopyImage

ec2:CreateNetworkInterface

ec2:AttachNetworkInterface

ec2:CreateSecurityGroup

ec2:CreateTags

ec2:CreateVolume

ec2:DeleteSecurityGroup

ec2:DeleteSnapshot

ec2:DeregisterImage

ec2:DescribeAccountAttributes

ec2:DescribeAddresses

ec2:DescribeAvailabilityZones

ec2:DescribeDhcpOptions

ec2:DescribeImages

ec2:DescribeInstanceAttribute

ec2:DescribeInstanceCreditSpecifications

ec2:DescribeInstances

ec2:DescribeInternetGateways

ec2:DescribeKeyPairs

ec2:DescribeNatGateways

ec2:DescribeNetworkAcls

ec2:DescribeNetworkInterfaces

ec2:DescribePrefixLists

ec2:DescribeRegions

ec2:DescribeRouteTables

ec2:DescribeSecurityGroups

ec2:DescribeSubnets

ec2:DescribeTags

ec2:DescribeVolumes

ec2:DescribeVpcAttribute

ec2:DescribeVpcClassicLink

ec2:DescribeVpcClassicLinkDnsSupport

ec2:DescribeVpcEndpoints

ec2:DescribeVpcs

ec2:GetEbsDefaultKmsKeyId

ec2:ModifyInstanceAttribute

ec2:ModifyNetworkInterfaceAttribute

ec2:ReleaseAddress

ec2:RevokeSecurityGroupEgress

ec2:RevokeSecurityGroupIngress

ec2:RunInstances

ec2:TerminateInstances
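If you scope a policy to these actions instead of attaching AdministratorAccess, the list can be assembled into an IAM policy document. The helper below is illustrative, not a Red Hat-provided tool; only the action names come from this document.

```python
import json


def build_policy(actions):
    """Assemble an IAM policy document that allows the given actions.

    A minimal sketch: one Allow statement over all resources, using the
    standard IAM policy document structure.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": sorted(actions), "Resource": "*"}
        ],
    }


# A small subset of the EC2 permissions listed above, for illustration.
ec2_read = ["ec2:DescribeVpcs", "ec2:DescribeInstances", "ec2:DescribeSubnets"]
print(json.dumps(build_policy(ec2_read), indent=2))
```

The resulting JSON can be pasted into the IAM console or passed to the AWS CLI when creating a customer managed policy.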

Example 1.2. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions

ec2:AssociateRouteTable

ec2:AttachInternetGateway

ec2:CreateDhcpOptions

ec2:CreateInternetGateway

ec2:CreateNatGateway

ec2:CreateRoute

ec2:CreateRouteTable

ec2:CreateSubnet

ec2:CreateVpc

ec2:CreateVpcEndpoint

ec2:ModifySubnetAttribute

ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.3. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags

elasticloadbalancing:ApplySecurityGroupsToLoadBalancer

elasticloadbalancing:AttachLoadBalancerToSubnets

elasticloadbalancing:ConfigureHealthCheck

elasticloadbalancing:CreateListener

elasticloadbalancing:CreateLoadBalancer

elasticloadbalancing:CreateLoadBalancerListeners

elasticloadbalancing:CreateTargetGroup

elasticloadbalancing:DeleteLoadBalancer

elasticloadbalancing:DeregisterInstancesFromLoadBalancer

elasticloadbalancing:DeregisterTargets

elasticloadbalancing:DescribeInstanceHealth

elasticloadbalancing:DescribeListeners

elasticloadbalancing:DescribeLoadBalancerAttributes

elasticloadbalancing:DescribeLoadBalancers

elasticloadbalancing:DescribeTags

elasticloadbalancing:DescribeTargetGroupAttributes

elasticloadbalancing:DescribeTargetHealth

elasticloadbalancing:ModifyLoadBalancerAttributes

elasticloadbalancing:ModifyTargetGroup

elasticloadbalancing:ModifyTargetGroupAttributes

elasticloadbalancing:RegisterInstancesWithLoadBalancer

elasticloadbalancing:RegisterTargets

elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.4. Required IAM permissions for installation

iam:AddRoleToInstanceProfile

iam:CreateInstanceProfile

iam:CreateRole

iam:DeleteInstanceProfile

iam:DeleteRole

iam:DeleteRolePolicy

iam:GetInstanceProfile

iam:GetRole

iam:GetRolePolicy

iam:GetUser

iam:ListInstanceProfilesForRole

iam:ListRoles

iam:ListUsers

iam:PassRole

iam:PutRolePolicy

iam:RemoveRoleFromInstanceProfile

iam:SimulatePrincipalPolicy

iam:TagRole

Example 1.5. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets

route53:ChangeTagsForResource

route53:CreateHostedZone

route53:DeleteHostedZone

route53:GetChange

route53:GetHostedZone

route53:ListHostedZones

route53:ListHostedZonesByName

route53:ListResourceRecordSets

route53:ListTagsForResource

route53:UpdateHostedZoneComment

Example 1.6. Required S3 permissions for installation

s3:CreateBucket

s3:DeleteBucket

s3:GetAccelerateConfiguration

s3:GetBucketAcl

s3:GetBucketCors

s3:GetBucketLocation

s3:GetBucketLogging

s3:GetBucketObjectLockConfiguration

s3:GetBucketReplication

s3:GetBucketRequestPayment

s3:GetBucketTagging

s3:GetBucketVersioning

s3:GetBucketWebsite

s3:GetEncryptionConfiguration

s3:GetLifecycleConfiguration

s3:GetReplicationConfiguration

s3:ListBucket

s3:PutBucketAcl

s3:PutBucketTagging

s3:PutEncryptionConfiguration

Example 1.7. S3 permissions that cluster Operators require

s3:DeleteObject

s3:GetObject

s3:GetObjectAcl

s3:GetObjectTagging

s3:GetObjectVersion

s3:PutObject

s3:PutObjectAcl

s3:PutObjectTagging

Example 1.8. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups

ec2:DeleteNetworkInterface

ec2:DeleteVolume

elasticloadbalancing:DeleteTargetGroup

elasticloadbalancing:DescribeTargetGroups

iam:DeleteAccessKey

iam:DeleteUser

iam:ListAttachedRolePolicies

iam:ListInstanceProfiles

iam:ListRolePolicies

iam:ListUserPolicies

s3:DeleteObject

s3:ListBucketVersions

tag:GetResources

Example 1.9. Required permissions to delete network resources

ec2:DeleteDhcpOptions

ec2:DeleteInternetGateway

ec2:DeleteNatGateway

ec2:DeleteRoute

ec2:DeleteRouteTable

ec2:DeleteSubnet

ec2:DeleteVpc

ec2:DeleteVpcEndpoints

ec2:DetachInternetGateway

ec2:DisassociateRouteTable

ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.10. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey

iam:CreateUser

iam:DeleteAccessKey

iam:DeleteUser

iam:DeleteUserPolicy

iam:GetUserPolicy

iam:ListAccessKeys

iam:PutUserPolicy

iam:TagUser

s3:PutBucketPublicAccessBlock

s3:GetBucketPublicAccessBlock

s3:PutLifecycleConfiguration

s3:HeadBucket

s3:ListBucketMultipartUploads

s3:AbortMultipartUpload

Example 1.11. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas


1.1.4. Creating an IAM user

Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly privileged account, and it is recommended to use it only for initial account and billing configuration, creating an initial set of users, and securing the account.

Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:

Procedure

1. Specify the IAM user name and select Programmatic access.

2. Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

NOTE

While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.

3. Optional: Add metadata to the user by attaching tags.

4. Confirm that the user name that you specified is granted the AdministratorAccess policy.

5. Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.

IMPORTANT

You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
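One common way to make the recorded key-based, long-lived credentials available to the installation program is the AWS shared credentials file. The sketch below writes such a file; the helper itself is illustrative, and the key values shown in any usage are placeholders, not real credentials.

```python
import configparser
from pathlib import Path


def write_aws_credentials(path: Path, access_key_id: str,
                          secret_access_key: str,
                          profile: str = "default") -> None:
    """Store long-lived credentials in an AWS shared credentials file.

    Produces the standard INI layout, for example a [default] section with
    aws_access_key_id and aws_secret_access_key keys.
    """
    config = configparser.ConfigParser()
    if path.exists():
        # Preserve any other profiles already present in the file.
        config.read(path)
    if not config.has_section(profile):
        config.add_section(profile)
    config.set(profile, "aws_access_key_id", access_key_id)
    config.set(profile, "aws_secret_access_key", secret_access_key)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w") as handle:
        config.write(handle)
```

In practice the target path is ~/.aws/credentials; restrict its permissions, because it holds the long-lived secret.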

Additional resources

See Manually creating IAM for AWS for steps to set the Cloud Credential Operator (CCO) to manual mode prior to installation. Use this mode in environments where the cloud identity and access management (IAM) APIs are not reachable, or if you prefer not to store an administrator-level credential secret in the cluster kube-system project.

1.1.5. Supported AWS regions

You can deploy an OpenShift Container Platform cluster to the following public regions:


af-south-1 (Cape Town)

ap-east-1 (Hong Kong)

ap-northeast-1 (Tokyo)

ap-northeast-2 (Seoul)

ap-south-1 (Mumbai)

ap-southeast-1 (Singapore)

ap-southeast-2 (Sydney)

ca-central-1 (Central)

eu-central-1 (Frankfurt)

eu-north-1 (Stockholm)

eu-south-1 (Milan)

eu-west-1 (Ireland)

eu-west-2 (London)

eu-west-3 (Paris)

me-south-1 (Bahrain)

sa-east-1 (São Paulo)

us-east-1 (N. Virginia)

us-east-2 (Ohio)

us-west-1 (N. California)

us-west-2 (Oregon)

The following AWS GovCloud regions are supported:

us-gov-west-1

us-gov-east-1

1.1.6. Next steps

Install an OpenShift Container Platform cluster

Quickly install a cluster with default options on installer-provisioned infrastructure

Install a cluster with cloud customizations on installer-provisioned infrastructure

Install a cluster with network customizations on installer-provisioned infrastructure

Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates

CHAPTER 1 INSTALLING ON AWS


1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.2.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.2.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.


IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.2.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)
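As a self-contained sketch of step 1, the key generation can be exercised with a throwaway file name (a placeholder; use a path such as ~/.ssh/id_rsa for real use):

```shell
# Sketch: generate a password-less ed25519 key pair in an illustrative location.
keyfile=./id_ed25519_demo
ssh-keygen -t ed25519 -N '' -f "$keyfile" -q
# Both the private key and the public key should now exist.
ls "$keyfile" "$keyfile.pub"
```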


3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.2.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.2.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

$ tar xvf openshift-install-linux.tar.gz

$ openshift-install create install-config --dir=<installation_directory> 1


iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
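A minimal sketch of the backup step; the config content below is a stand-in so the commands can run anywhere, since the installer consumes the real file:

```shell
# Create a stand-in install-config.yaml so the copy can be demonstrated.
echo "apiVersion: v1" > install-config.yaml
# Back up the file before running the installer, which consumes it.
cp install-config.yaml install-config.yaml.bak
ls install-config.yaml.bak
```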

1.2.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.1. Required parameters

Parameter Description Values

apiVersion: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String


baseDomain: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name: The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.


{
  "auths": {
    "cloud.openshift.com": {
      "auth": "b3Blb=",
      "email": "you@example.com"
    },
    "quay.io": {
      "auth": "b3Blb=",
      "email": "you@example.com"
    }
  }
}


Table 1.2. Optional parameters

Parameter Description Values

additionalTrustBundle: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute: The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects. For details, see the following Machine-pool table.

compute.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name: Required if you use compute. The name of the machine pool.

worker

compute.platform: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas: The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane: The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name: Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas: The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("")

fips: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources: Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors: Specify one or more repositories that may also contain the same images.

Array of strings

networking: The configuration for the pod network provider in the cluster.

Object

Object


networking.clusterNetwork: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects

networking.clusterNetwork.cidr: Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork: The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType: The type of network to install. The default is OpenShiftSDN.

String


networking.serviceNetwork: The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey: The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

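The hostPrefix sizing rule described in the networking table above can be checked with shell arithmetic: a prefix of p leaves 2^(32-p) pod addresses per node.

```shell
# Addresses allocated to each node for a given hostPrefix: 2^(32 - prefix).
host_prefix=23                          # the default host prefix
echo $((1 << (32 - host_prefix)))       # prints 512
host_prefix=24
echo $((1 << (32 - host_prefix)))       # prints 256, matching the example above
```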

Table 1.3. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID: The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops: The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.


compute.platform.aws.rootVolume.size: The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type: The type of the root volume.

Valid AWS EBS volume type, such as io1.

compute.platform.aws.type: The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones: The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region: The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID: The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type: The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones: The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region: The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.


platform.aws.amiID: The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name: The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url: The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags: A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets: If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.

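Two of the optional settings above often travel together; a hedged excerpt for a private cluster that reuses an existing VPC (the subnet IDs are placeholders):

```yaml
# Illustrative excerpt: a private cluster on pre-existing subnets.
platform:
  aws:
    region: us-west-2
    subnets:                       # placeholder subnet IDs from your VPC
    - subnet-0aaaaaaaaaaaaaaaa
    - subnet-0bbbbbbbbbbbbbbbb
publish: Internal                  # endpoints are not reachable from the internet
```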

1.2.5.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.


IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13


1 10 11 14: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16: You can optionally provide the sshKey value that you use to access the machines in your cluster.


    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16


NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.2.6. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

For <installation_directory>, specify the location of your customized install-config.yaml file.

To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-


NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.2.7. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.2.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s


3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:

1.2.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command:

1.2.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.2.8. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

129 Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation You can loginto your cluster as the kubeadmin user by using the OpenShift Container Platform web console

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin

OpenShift Container Platform 4.6 Installing on AWS

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host.

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.2.10. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console    console-openshift-console.apps.<cluster_name>.<base_domain>    console    https    reencrypt/Redirect    None


1.3.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.3.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources


See About remote health monitoring for more information about the Telemetry service.

1.3.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.3.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.3.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

$ tar xvf openshift-install-linux.tar.gz


Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

$ openshift-install create install-config --dir=<installation_directory> 1


3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
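As a sketch of step 3 (the paths here are illustrative stand-ins, not the installer's real layout: a temp directory substitutes for <installation_directory>), the backup is a plain file copy made before the installer consumes the file:

```shell
# Illustrative only: create a stand-in installation directory with a
# minimal install-config.yaml, then back the file up before it is consumed.
install_dir=$(mktemp -d)
printf 'apiVersion: v1\n' > "$install_dir/install-config.yaml"

cp "$install_dir/install-config.yaml" "$install_dir/install-config.yaml.bak"
ls "$install_dir"
```

After `openshift-install create cluster` runs, the original install-config.yaml is gone from the installation directory, so the copy is the only way to reuse it for another cluster.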

1.3.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.4. Required parameters

Parameter Description Values

apiVersion: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully qualified domain or subdomain name, such as example.com.

OpenShift Container Platform 46 Installing on AWS

44

metadata: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

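The <metadata.name>.<baseDomain> composition described in the table above can be sketched in shell; test-cluster and example.com are example values, not defaults:

```shell
# Compose the cluster's full DNS name from the two required parameters.
CLUSTER_NAME=test-cluster      # metadata.name (example value)
BASE_DOMAIN=example.com        # baseDomain (example value)

echo "${CLUSTER_NAME}.${BASE_DOMAIN}"       # test-cluster.example.com
echo "api.${CLUSTER_NAME}.${BASE_DOMAIN}"   # api.test-cluster.example.com
```

DNS records for the cluster, such as the API endpoint and application routes, are all subdomains of this composed name, which is why the base domain must match the Route 53 zone you configured.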

Table 1.5. Optional parameters

Parameter Description Values

additionalTrustBundle: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


compute: The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name: Required if you use compute. The name of the machine pool.

worker

compute.platform: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas: The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane: The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.


controlPlane.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name: Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas: The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources: Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors: Specify one or more repositories that may also contain the same images.

Array of strings

networking: The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr: Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork: The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType: The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork: The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey: The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

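To make the compute and controlPlane rows in the table above concrete, here is a hedged sketch of how the two machine-pool entries are shaped in install-config.yaml (the values are illustrative, not defaults): controlPlane is a single mapping, while compute is a YAML sequence whose entry starts with a hyphen.

```yaml
controlPlane:            # single mapping: no leading hyphen
  hyperthreading: Enabled
  name: master
  replicas: 3
compute:                 # sequence of machine pools: entry starts with "-"
- hyperthreading: Enabled
  name: worker
  replicas: 3
```

Only one control plane pool is used, and its replicas value must be 3, while the compute pool accepts any integer of 2 or greater.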

Table 1.6. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID: The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops: The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size: The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type: The type of the root volume.

Valid AWS EBS volume type, such as io1.

compute.platform.aws.type: The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones: The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region: The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID: The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type: The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones: The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region: The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID: The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name: The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url: The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags: A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets: If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.3.5.2. Network configuration parameters

You can modify your cluster network configuration parameters in the install-config.yaml configuration file. The following table describes the parameters.

NOTE

You cannot modify these parameters in the install-config.yaml file after installation.

Table 1.7. Required network parameters

Parameter Description Value

networking.networkType: The default Container Network Interface (CNI) network provider plug-in to deploy.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.


networking.clusterNetwork[].cidr: A block of IP addresses from which pod IP addresses are allocated. The OpenShiftSDN network plug-in supports multiple cluster networks. The address blocks for multiple cluster networks must not overlap. Select address pools large enough to fit your anticipated workload.

An IP address allocation in CIDR format. The default value is 10.128.0.0/14.

networking.clusterNetwork[].hostPrefix: The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, allowing for 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix. The default value is 23.

networking.serviceNetwork[]: A block of IP addresses for services. OpenShiftSDN allows only one serviceNetwork block. The address block must not overlap with any other network block.

An IP address allocation in CIDR format. The default value is 172.30.0.0/16.

networking.machineNetwork[].cidr: A block of IP addresses assigned to nodes created by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. Multiple CIDR ranges may be specified.

An IP address allocation in CIDR format. The default value is 10.0.0.0/16.

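The address arithmetic in the table above can be checked directly in shell; the numbers follow from the default clusterNetwork cidr of 10.128.0.0/14 and hostPrefix of 23:

```shell
# Pod IP addresses available per node with hostPrefix=23: 2^(32-23) - 2
echo $(( (1 << (32 - 23)) - 2 ))   # 510

# Number of /23 node subnets that fit in the /14 cluster network: 2^(23-14)
echo $(( 1 << (23 - 14) ))         # 512
```

In other words, the defaults leave room for 512 node subnets of 510 pod addresses each; widening the cidr or shrinking the hostPrefix trades node count against per-node pod capacity.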

1.3.5.3. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 12
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17

1 10 12 15: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 11: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17: You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.3.6. Modifying advanced network configuration parameters

You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings, and by specifying a different mode for the openshiftSDNConfig parameter.

IMPORTANT

Modifying the OpenShift Container Platform manifest files directly is not supported.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure


1. Change to the directory that contains the installation program and create the manifests:

For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests/ directory, as shown:

Example output

3. Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the Operator configuration you want:

The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the CR.

The CNO provides default values for the parameters in the CR, so you must specify only the parameters that you want to change.

$ openshift-install create manifests --dir=<installation_directory> 1

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

$ ls <installation_directory>/manifests/cluster-network-*

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec: 1
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789


4. Save the cluster-network-03-config.yml file and quit the text editor.

5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.

1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster

You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

Create an Ingress Controller backed by an AWS NLB on a new cluster

1. Change to the directory that contains the installation program and create the manifests:

   $ openshift-install create manifests --dir=<installation_directory>

   For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:

   $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

   For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

   After creating the file, the cluster-ingress-default-ingresscontroller.yaml file is in the manifests directory, as shown:

   $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

   Example output

   cluster-ingress-default-ingresscontroller.yaml

3. Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a CR that describes the Operator configuration you want:

   apiVersion: operator.openshift.io/v1
   kind: IngressController
   metadata:
     creationTimestamp: null
     name: default
     namespace: openshift-ingress-operator
   spec:
     endpointPublishingStrategy:
       loadBalancer:
         scope: External
         providerParameters:
           type: AWS
           aws:
             type: NLB
       type: LoadBalancerService

CHAPTER 1. INSTALLING ON AWS

4. Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.

5. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.

1.3.8. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a CR object that is named cluster. The CR specifies the parameters for the Network API in the operator.openshift.io API group.

You can specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameter values for the defaultNetwork parameter in the CNO CR. The following CR displays the default configuration for the CNO and explains both the parameters you can configure and the valid parameter values:

Cluster Network Operator CR

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:
     clusterNetwork: 1
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     serviceNetwork: 2
     - 172.30.0.0/16
     defaultNetwork: 3
     kubeProxyConfig: 4
       iptablesSyncPeriod: 30s 5
       proxyArguments:
         iptables-min-sync-period: 6
         - 0s

1 2 Specified in the install-config.yaml file.

3 Configures the default Container Network Interface (CNI) network provider for the cluster network.

4 The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Cluster Network Operator applies the displayed default parameter values.

5 The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

NOTE

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

6 The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package.
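The single-unit s/m/h duration strings mentioned above can be illustrated with a small sketch. This is not the parser OpenShift uses (that is Go's time.ParseDuration); it handles only simple values such as the 30s default, for illustration:

```python
# Seconds per unit for the suffixes named in the text: s, m, and h.
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600}

def to_seconds(duration: str) -> int:
    """Convert a single-unit duration string like '30s' to seconds."""
    value, unit = duration[:-1], duration[-1]
    return int(value) * UNIT_SECONDS[unit]

print(to_seconds("30s"))  # 30
print(to_seconds("2m"))   # 120
```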

1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider

The following YAML object describes the configuration parameters for the OpenShift SDN default Container Network Interface (CNI) network provider:

   defaultNetwork:
     type: OpenShiftSDN 1
     openshiftSDNConfig: 2
       mode: NetworkPolicy 3
       mtu: 1450 4
       vxlanPort: 4789 5

1 Specified in the install-config.yaml file.

2 Specify only if you want to override part of the OpenShift SDN configuration.

3 Configures the network isolation mode for OpenShift SDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.

4 The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

5 The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
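The 50-byte sizing rule in callout 4 can be expressed as a one-line helper. This is an illustrative sketch, not a supported tool:

```python
# VXLAN encapsulation overhead per the rule above (100 bytes for Geneve
# with OVN-Kubernetes, described in the next section).
VXLAN_OVERHEAD = 50

def overlay_mtu(node_mtus):
    """Overlay MTU must be 50 less than the lowest node MTU in the cluster."""
    return min(node_mtus) - VXLAN_OVERHEAD

print(overlay_mtu([9001, 1500]))  # 1450, matching the example in the text
```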

1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider

The following YAML object describes the configuration parameters for the OVN-Kubernetes default CNI network provider:

   defaultNetwork:
     type: OVNKubernetes 1
     ovnKubernetesConfig: 2
       mtu: 1400 3
       genevePort: 6081 4

1 Specified in the install-config.yaml file.

2 Specify only if you want to override part of the OVN-Kubernetes configuration.

3 The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

4 The UDP port for the Geneve overlay network.

1.3.8.3. Cluster Network Operator example configuration

A complete CR object for the CNO is displayed in the following example:

Cluster Network Operator example CR

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:
     clusterNetwork:
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     serviceNetwork:
     - 172.30.0.0/16
     defaultNetwork:
       type: OpenShiftSDN
       openshiftSDNConfig:
         mode: NetworkPolicy
         mtu: 1450
         vxlanPort: 4789
     kubeProxyConfig:
       iptablesSyncPeriod: 30s
       proxyArguments:
         iptables-min-sync-period:
         - 0s

1.3.9. Configuring hybrid networking with OVN-Kubernetes

You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.

IMPORTANT

You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.

Procedure

1. Create the manifests from the directory that contains the installation program:

   $ openshift-install create manifests --dir=<installation_directory>

   For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

   $ touch <installation_directory>/manifests/cluster-network-03-config.yml

   For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

   After creating the file, several network configuration files are in the manifests directory, as shown:

   $ ls -1 <installation_directory>/manifests/cluster-network-*

   Example output

   cluster-network-01-crd.yml
   cluster-network-02-config.yml
   cluster-network-03-config.yml


3. Open the cluster-network-03-config.yml file and configure OVN-Kubernetes with hybrid networking. For example:

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     creationTimestamp: null
     name: cluster
   spec: 1
     clusterNetwork: 2
     - cidr: 10.128.0.0/14
       hostPrefix: 23
     externalIP:
       policy: {}
     serviceNetwork:
     - 172.30.0.0/16
     defaultNetwork:
       type: OVNKubernetes 3
       ovnKubernetesConfig:
         hybridOverlayConfig:
           hybridClusterNetwork: 4
           - cidr: 10.132.0.0/14
             hostPrefix: 23
   status: {}

   1 The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the custom resource.

   2 Specify the CIDR configuration used when adding nodes.

   3 Specify OVNKubernetes as the Container Network Interface (CNI) cluster network provider.

   4 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.

4. Optional: Back up the <installation_directory>/manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.

1.3.10. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.
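The non-overlap constraint described for the hybridClusterNetwork CIDR can be verified locally before you commit the manifest. The following sketch uses Python's ipaddress module with the CIDRs from the example above:

```python
import ipaddress

# CIDRs from the hybrid networking example above.
cluster_net = ipaddress.ip_network("10.128.0.0/14")
hybrid_net = ipaddress.ip_network("10.132.0.0/14")

# The hybridClusterNetwork CIDR must not overlap the clusterNetwork CIDR.
print(cluster_net.overlaps(hybrid_net))  # False, so this pair is valid
```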


Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

   $ openshift-install create cluster --dir=<installation_directory> \ 1
       --log-level=info 2

   1 For <installation_directory>, specify the location of your customized install-config.yaml file.

   2 To view different installation details, specify warn, debug, or error instead of info.

   NOTE

   If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

   When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

   Example output

   INFO Install complete!
   INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
   INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
   INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
   INFO Time elapsed: 36m22s

   NOTE

   The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.


IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.3.11. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.11.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

   $ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.
   To check your PATH, execute the following command:

   $ echo $PATH

After you install the CLI, it is available using the oc command:

   $ oc <command>

1.3.11.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
   To check your PATH, open the command prompt and execute the following command:

   C:\> path

After you install the CLI, it is available using the oc command:

   C:\> oc <command>

1.3.11.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
   To check your PATH, open a terminal and execute the following command:

   $ echo $PATH

After you install the CLI, it is available using the oc command:

   $ oc <command>
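The PATH check in step 5 can also be sketched programmatically; the ~/bin install location below is an assumption, substitute the directory you actually chose:

```python
import os

# Hypothetical directory where the oc binary was placed; adjust to taste.
install_dir = os.path.expanduser("~/bin")

# The same question `echo $PATH` answers by eye: is the directory one of
# the colon-separated PATH entries?
on_path = install_dir in os.environ.get("PATH", "").split(os.pathsep)
print(on_path)
```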


1.3.12. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

   $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

   For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

   $ oc whoami

   Example output

   system:admin

1.3.13. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

   $ cat <installation_directory>/auth/kubeadmin-password


NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
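As an illustration of the note above, the password can be pulled from a log line with a regular expression. The line below is the example output shown earlier in this chapter, not a real credential:

```python
import re

# Example log line as printed by the installer earlier in this chapter.
log_line = 'INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"'

# Capture whatever appears between the quotes after 'password: '.
match = re.search(r'password: "([^"]+)"', log_line)
print(match.group(1))  # 4vYBz-Ee6gm-ymBZj-Wt5AL
```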

2. List the OpenShift Container Platform web console route:

   $ oc get routes -n openshift-console | grep 'console-openshift'

   NOTE

   Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

   Example output

   console     console-openshift-console.apps.<cluster_name>.<base_domain>            console     https   reencrypt/Redirect   None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.3.14. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC

In OpenShift Container Platform version 4.6, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.4.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.



IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.4.2. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.4.2.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.


The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
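A small sketch that expands the endpoint naming pattern above for one region; us-east-1 is a placeholder value, substitute your cluster's region:

```python
# Placeholder region; substitute the region your cluster runs in.
region = "us-east-1"

# Expand the <service>.<region>.amazonaws.com pattern from the list above.
endpoints = [f"{svc}.{region}.amazonaws.com"
             for svc in ("ec2", "elasticloadbalancing", "s3")]

print(endpoints[0])  # ec2.us-east-1.amazonaws.com
```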

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.


Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

Port 80: Inbound HTTP traffic
Port 443: Inbound HTTPS traffic
Port 22: Inbound SSH traffic
Ports 1024 - 65535: Inbound ephemeral traffic
Ports 0 - 65535: Outbound ephemeral traffic
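The inbound rules in the table above can be expressed as data for a quick sanity check. This is an illustrative sketch, not an AWS API call:

```python
# Inbound port ranges from the network access control table above.
INBOUND_ALLOWED = [(80, 80), (443, 443), (22, 22), (1024, 65535)]

def inbound_allowed(port: int) -> bool:
    """Return True if the port falls inside any allowed inbound range."""
    return any(lo <= port <= hi for lo, hi in INBOUND_ALLOWED)

print(inbound_allowed(443))   # True
print(inbound_allowed(8080))  # True, inside the ephemeral range
print(inbound_allowed(25))    # False
```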

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.4.2.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:


All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
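The "subnet CIDRs belong to the machine CIDR" check above can be reproduced locally before you run the installer. The machine CIDR and subnets below are made-up example values:

```python
import ipaddress

# Hypothetical machine CIDR and subnet CIDRs; substitute your own values.
machine_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = ["10.0.0.0/20", "10.0.16.0/20"]

# Every subnet must be contained in the machine CIDR.
all_contained = all(ipaddress.ip_network(s).subnet_of(machine_cidr)
                    for s in subnets)
print(all_contained)  # True
```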

1.4.2.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.4.2.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.4.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.4.4. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches, such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

   $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1


1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

   $ eval "$(ssh-agent -s)"

   Example output

   Agent pid 31874

3. Add your SSH private key to the ssh-agent:

   $ ssh-add <path>/<file_name> 1

   Example output

   Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

   1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.4.5. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.



IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

   $ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.4.6. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

   a. Change to the directory that contains the installation program and run the following command:

      $ openshift-install create install-config --dir=<installation_directory> 1

      1 For <installation_directory>, specify the directory name to store the files that the installation program creates.


IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.4.6.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.


NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
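Because unrecognized fields are silently ignored, a small pre-flight check can surface typos before you run the installer. The following is an illustrative sketch, not part of openshift-install: the key list is abbreviated from the parameter tables in this section, and the parsed dict stands in for a loaded install-config.yaml.

```python
# Sketch: catch misspelled top-level install-config.yaml keys before running
# openshift-install, which ignores unknown fields without reporting an error.
# The key list below is abbreviated from the parameter tables in this section.
KNOWN_KEYS = {
    "apiVersion", "baseDomain", "metadata", "platform", "pullSecret",
    "additionalTrustBundle", "compute", "controlPlane", "credentialsMode",
    "fips", "imageContentSources", "networking", "publish", "sshKey",
}

parsed = {"apiVersion": "v1", "baseDomian": "example.com"}  # note the typo

unknown = set(parsed) - KNOWN_KEYS
if unknown:
    print(f"possible typos in install-config.yaml: {sorted(unknown)}")
```

Running this against the deliberately misspelled dict above flags `baseDomian`, which the installer itself would have ignored without comment.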

Table 1.8. Required parameters

Parameter Description Values

apiVersion
The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
String

baseDomain
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
A fully-qualified domain or subdomain name, such as example.com.

metadata
Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Object

metadata.name
The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.
String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
Object

pullSecret
Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

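As a quick illustration of the <metadata.name>.<baseDomain> format and the metadata.name character rule above, a hypothetical helper might look like this. The name pattern is an assumption derived from the stated rule (lowercase letters, hyphens, and periods), not the installer's exact validation:

```python
import re

def cluster_fqdn(name: str, base_domain: str) -> str:
    """Illustrative helper: compose the full cluster DNS name in the
    <metadata.name>.<baseDomain> format. The pattern below only models the
    stated rule (lowercase letters, hyphens, periods); it is not the
    installer's own check."""
    if not re.fullmatch(r"[a-z][a-z.-]*", name):
        raise ValueError(f"invalid cluster name: {name!r}")
    return f"{name}.{base_domain}"

print(cluster_fqdn("dev", "example.com"))  # dev.example.com
```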

Table 1.9. Optional parameters

Parameter Description Values

additionalTrustBundle
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
String

compute
The configuration for the machines that comprise the compute nodes.
Array of machine-pool objects. For details, see the following Machine-pool table.

compute.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
String

Example pull secret value for the pullSecret parameter in the preceding table:

{
  "auths": {
    "cloud.openshift.com": {
      "auth": "b3Blb=",
      "email": "you@example.com"
    },
    "quay.io": {
      "auth": "b3Blb=",
      "email": "you@example.com"
    }
  }
}


compute.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name
Required if you use compute. The name of the machine pool.
worker

compute.platform
Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
The number of compute machines, which are also known as worker machines, to provision.
A positive integer greater than or equal to 2. The default value is 3.

controlPlane
The configuration for the machines that comprise the control plane.
Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
String



controlPlane.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name
Required if you use controlPlane. The name of the machine pool.
master

controlPlane.platform
Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
The number of control plane machines to provision.
The only supported value is 3, which is the default value.



credentialsMode
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips
Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
false or true

imageContentSources
Sources and repositories for the release-image content.
Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
String

imageContentSources.mirrors
Specify one or more repositories that may also contain the same images.
Array of strings

networking
The configuration for the pod network provider in the cluster.
Object

networking.clusterNetwork
The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
Array of objects



networking.clusterNetwork.cidr
Required if you use networking.clusterNetwork. The IP block address pool.
IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
Integer

networking.machineNetwork
The IP address pools for machines.
Array of objects

networking.machineNetwork.cidr
Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
The type of network to install. The default is OpenShiftSDN.
String

networking.serviceNetwork
The IP address pools for services. The default is 172.30.0.0/16.
Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.



publish
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey: |
  <key1>
  <key2>
  <key3>

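The networking defaults quoted above (clusterNetwork 10.128.0.0/14 with hostPrefix 23) can be sanity-checked with the standard library's ipaddress module; a rough sketch:

```python
import ipaddress

cluster_net = ipaddress.ip_network("10.128.0.0/14")  # default clusterNetwork.cidr
host_prefix = 23                                     # default hostPrefix

# Each node is carved one /23 subnet out of the /14 pod network:
node_subnets = 2 ** (host_prefix - cluster_net.prefixlen)  # 2^(23-14) node subnets
addrs_per_node = 2 ** (32 - host_prefix)                   # 2^9 addresses per node

print(node_subnets, addrs_per_node)
```

With the defaults, the pod network supports 512 node-sized /23 subnets of 512 addresses each, which is the kind of arithmetic to redo if you shrink either value.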

Table 1.10. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID
The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.
Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops
The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.
Integer, for example 4000.

compute.platform.aws.rootVolume.size
The size in GiB of the root volume.
Integer, for example 500.

compute.platform.aws.rootVolume.type
The type of the root volume.
Valid AWS EBS volume type, such as io1.

compute.platform.aws.type
The EC2 instance type for the compute machines.
Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones
The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.
A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
The AWS region that the installation program creates compute resources in.
Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID
The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.
Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type
The EC2 instance type for the control plane machines.
Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
The availability zones where the installation program creates machines for the control plane machine pool.
A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
The AWS region that the installation program creates control plane resources in.
Valid AWS region, such as us-east-1.

platform.aws.amiID
The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.
Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name
The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.
Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url
The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.
Valid AWS service endpoint URL.

platform.aws.userTags
A map of keys and values that the installation program adds as tags to all resources that it creates.
Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets
If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.
Valid subnet IDs.


1.4.6.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17

1 10 11 15 Required. The installation program prompts you for this value.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

4 If you do not provide these parameters and values, the installation program provides the default value.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
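The machine-pool constraints that apply to this sample, a single controlPlane mapping with exactly 3 replicas and a compute sequence whose pools have at least 2 replicas, can be expressed as a few assertions. The dict below is a hand-abbreviated stand-in for the parsed YAML, not installer output:

```python
# Abbreviated stand-in for the parsed sample install-config.yaml above.
config = {
    "controlPlane": {"name": "master", "replicas": 3},
    "compute": [{"name": "worker", "replicas": 3}],
}

# The only supported controlPlane.replicas value is 3.
assert config["controlPlane"]["replicas"] == 3
# controlPlane is a single mapping; compute is a sequence of machine pools.
assert isinstance(config["controlPlane"], dict)
assert isinstance(config["compute"], list)
for pool in config["compute"]:
    assert pool["replicas"] >= 2, "compute.replicas must be at least 2"
print("sample machine-pool constraints satisfied")
```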

1.4.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites



You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
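The noProxy matching rules in callout 3 can be modeled in a few lines. This is an illustrative matcher only; the cluster's real implementation lives in its network components and may differ in detail:

```python
def bypasses_proxy(host: str, no_proxy: str) -> bool:
    """Illustrative model of the noProxy rules described above: '*' bypasses
    everything, a leading dot matches subdomains only, and a bare entry
    matches the host exactly."""
    for entry in (e.strip() for e in no_proxy.split(",")):
        if entry == "*":           # wildcard: bypass the proxy for all destinations
            return True
        if entry.startswith("."):  # leading dot: subdomains only
            if host.endswith(entry):
                return True
        elif host == entry:        # exact host match
            return True
    return False

print(bypasses_proxy("x.y.com", ".y.com"))  # True: subdomain match
print(bypasses_proxy("y.com", ".y.com"))    # False: .y.com does not match y.com
```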

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.4.7. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.


NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.4.8. Installing the OpenShift CLI by downloading the binary


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.4.8.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.4.8.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

C:\> path



After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.4.8.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.4.9. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:


$ oc whoami

Example output

system:admin

1.4.10. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.


1.4.11. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.5. INSTALLING A PRIVATE CLUSTER ON AWS

In OpenShift Container Platform version 4.6, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.5.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.5.2. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision, to the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.5.2.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the Internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster, and all cluster machines are placed in the private subnets that you specify.

1.5.2.1.1. Limitations

The ability to add public functionality to a private cluster is limited.

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the Internet on 6443 (the Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
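As a sketch of that tagging step, the AWS CLI call can be composed as follows. The subnet ID and infra ID below are hypothetical placeholders; the command is printed rather than executed so the sketch is safe to copy and adapt:

```shell
# Hypothetical values -- substitute your real subnet ID and cluster infra ID.
SUBNET_ID="subnet-0123456789abcdef0"
INFRA_ID="mycluster-x7k2p"

# Compose the aws ec2 create-tags invocation that applies the
# kubernetes.io/cluster/<infra-id>: shared tag to a public subnet.
CMD="aws ec2 create-tags --resources ${SUBNET_ID} --tags Key=kubernetes.io/cluster/${INFRA_ID},Value=shared"
echo "${CMD}"
```

Run the printed command once per availability zone, against each public subnet that should back a public load balancer.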

1.5.3. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.5.3.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

OpenShift Container Platform 46 Installing on AWS

94

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
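Those two VPC attributes can be enabled with the AWS CLI. This is a sketch only, with a hypothetical VPC ID; the commands are composed and printed rather than executed:

```shell
# Hypothetical VPC ID -- substitute your own.
VPC_ID="vpc-0abc123def456789a"

# Compose the two modify-vpc-attribute calls that turn on DNS resolution
# and DNS hostnames for the VPC.
CMDS=""
for ATTR in enable-dns-support enable-dns-hostnames; do
  CMDS="${CMDS}aws ec2 modify-vpc-attribute --vpc-id ${VPC_ID} --${ATTR} '{\"Value\":true}'
"
done
printf '%s' "${CMDS}"
```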

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
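The DNS names above are what the endpoints resolve to; the AWS CLI itself takes service names in the com.amazonaws.<region>.<service> form. The following sketch, with hypothetical region, VPC, and route table IDs, composes one create-vpc-endpoint call per service and prints it rather than executing it (interface endpoints for ec2 and elasticloadbalancing additionally need --vpc-endpoint-type Interface and subnet IDs):

```shell
# Hypothetical IDs -- substitute your own.
REGION="us-east-1"
VPC_ID="vpc-0abc123def456789a"
ROUTE_TABLE_ID="rtb-0123456789abcdef0"

# Compose a create-vpc-endpoint call for each required service.
OUT=""
for SERVICE in ec2 elasticloadbalancing s3; do
  OUT="${OUT}aws ec2 create-vpc-endpoint --vpc-id ${VPC_ID} --service-name com.amazonaws.${REGION}.${SERVICE} --route-table-ids ${ROUTE_TABLE_ID}
"
done
printf '%s' "${OUT}"
```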

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
  80: Inbound HTTP traffic
  443: Inbound HTTPS traffic
  22: Inbound SSH traffic
  1024 - 65535: Inbound ephemeral traffic
  0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.
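The inbound entries listed above can be composed with the AWS CLI. This is a sketch only: the network ACL ID is a hypothetical placeholder, and the commands are printed rather than executed:

```shell
# Hypothetical network ACL ID -- substitute your own.
ACL_ID="acl-0123456789abcdef0"

# Compose one inbound allow entry per required port range
# (80, 443, 22, and the 1024-65535 ephemeral range).
OUT=""
RULE=100
for RANGE in "80,80" "443,443" "22,22" "1024,65535"; do
  FROM=${RANGE%,*}
  TO=${RANGE#*,}
  OUT="${OUT}aws ec2 create-network-acl-entry --network-acl-id ${ACL_ID} --ingress --rule-number ${RULE} --protocol tcp --port-range From=${FROM},To=${TO} --cidr-block 0.0.0.0/0 --rule-action allow
"
  RULE=$((RULE + 10))
done
printf '%s' "${OUT}"
```

An analogous egress entry (--egress) would cover the 0 - 65535 outbound ephemeral range.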


1.5.3.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.5.3.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.5.3.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.5.4. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The


Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.
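As a sketch of how a mirror registry is populated for such installations, the release-mirroring command can be composed as follows. The registry host, repository, and release tag are hypothetical placeholders, and the full procedure is described in the restricted network installation documentation; the command is printed rather than executed:

```shell
# Hypothetical values -- substitute your mirror registry and release.
LOCAL_REGISTRY="mirror.example.com:5000"
LOCAL_REPOSITORY="ocp4/openshift4"
OCP_RELEASE="4.6.1-x86_64"

# Compose the oc adm release mirror invocation; -a points at the
# pull secret file downloaded from the Red Hat OpenShift Cluster Manager.
CMD="oc adm release mirror -a pull-secret.json --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"
echo "${CMD}"
```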

Additional resources

See About remote health monitoring for more information about the Telemetry service

1.5.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment you require disaster recovery and debugging

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure


1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.5.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.


$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.5.7. Manually creating the installation configuration file

For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

$ tar xvf openshift-install-linux.tar.gz

$ mkdir <installation_directory>


2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.5.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.11. Required parameters

Parameter Description Values

apiVersion The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String


baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


Table 1.12. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name Required if you use compute. The name of the machine pool.

worker

compute.platform Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("")

fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8 = 256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

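The hostPrefix arithmetic in the table above can be checked directly: a node receives 2^(32 - prefix) addresses, so the default /23 yields 512 per node and /24 yields 256, as the table notes:

```shell
# Addresses a node receives for a given hostPrefix: 2^(32 - prefix),
# computed with a POSIX bit shift.
ADDR_23=$((1 << (32 - 23)))
ADDR_24=$((1 << (32 - 24)))
echo "/23 -> ${ADDR_23} addresses per node"
echo "/24 -> ${ADDR_24} addresses per node"
```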

Table 1.13. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.

1.5.7.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b


      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
publish: Internal 18

1 10 11 15 Required. The installation program prompts you for this value.

2 Optional. Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen (-), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.

1.5.7.3. Configuring the cluster-wide proxy during installation


CHAPTER 1. INSTALLING ON AWS


Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to, and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy, if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

OpenShift Container Platform 4.6 Installing on AWS


If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.5.8. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

For <installation_directory>, specify the location of your customized install-config.yaml file.

To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2


When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.5.9. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.5.9.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s


2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:

1.5.9.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command:

1.5.9.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.5.10. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.5.11. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin


You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.5.12. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the government region, modify parameters in the install-config.yaml file before you install the cluster.

1.6.1. Prerequisites

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.6.2. AWS government regions

OpenShift Container Platform supports deploying a cluster to AWS GovCloud (US) regions. AWS GovCloud is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.

These regions do not have published Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) to select, so you must upload a custom AMI that belongs to that region.

The following AWS GovCloud partitions are supported:

us-gov-west-1

us-gov-east-1

The AWS GovCloud region and custom AMI must be manually configured in the install-config.yaml file since RHCOS AMIs are not provided by Red Hat for those regions.

1.6.3. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

NOTE

Public zones are not supported in Route 53 in AWS GovCloud. Therefore, clusters must be private if they are deployed to an AWS government region.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.


To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.6.3.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.

1.6.3.1.1. Limitations

The ability to add public functionality to a private cluster is limited.

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.

1.6.4. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.


1.6.4.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.


Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

Port 80: Inbound HTTP traffic
Port 443: Inbound HTTPS traffic
Port 22: Inbound SSH traffic
Ports 1024 - 65535: Inbound ephemeral traffic
Ports 0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.6.4.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.6.4.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.6.4.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:


You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.6.5. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.6.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.6.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.6.8. Manually creating the installation configuration file

When installing OpenShift Container Platform on Amazon Web Services (AWS) into a region requiring a custom Red Hat Enterprise Linux CoreOS (RHCOS) AMI, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

$ tar xvf openshift-install-linux.tar.gz

$ mkdir <installation_directory>


IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.6.8.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
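For reference, a minimal install-config.yaml for AWS containing only the required parameters might look like the following sketch (all values are illustrative placeholders):

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: dev
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'
```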

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.14. Required parameters

Parameter: apiVersion
Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
Values: String

Parameter: baseDomain
Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
Values: A fully-qualified domain or subdomain name, such as example.com.

Parameter: metadata
Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Values: Object

Parameter: metadata.name
Description: The name of the cluster. DNS records for the cluster are all subdomains of metadata.name.baseDomain.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

Parameter: platform
Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
Values: Object

Parameter: pullSecret
Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
Values:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

Table 1.15. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X509 certificatebundle that is added to the nodestrusted certificate store This trustbundle may also be used when a proxyhas been configured

String

compute The configuration for the machinesthat comprise the compute nodes

Array of machine-pool objects Fordetails see the following Machine-pool table

computearchitecture

Determines the instruction setarchitecture of the machines in thepool Currently heteregeneousclusters are not supported so all poolsmust specify the same architectureValid values are amd64 (the default)

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name Required if you use compute. The name of the machine pool.

worker

compute.platform Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.
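The CIDR arithmetic in the Values cell can be double-checked with Python's standard ipaddress module; this is purely an illustration, not part of the installation procedure.

```shell
# Confirm the range covered by the 10.0.0.0/16 example from the table.
python3 - <<'EOF'
import ipaddress
net = ipaddress.ip_network("10.0.0.0/16")
print("first:", net[0])                  # 10.0.0.0
print("last:", net[-1])                  # 10.0.255.255
print("addresses:", net.num_addresses)   # 65536
EOF
```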

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer
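The per-node allocation follows 2^(32 - hostPrefix); a one-liner makes the trade-off concrete. This is shell arithmetic only and touches no cluster.

```shell
# Addresses allocated to each node for two hostPrefix values.
for prefix in 23 24; do
  echo "hostPrefix $prefix -> $((1 << (32 - prefix))) addresses per node"
done
```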

networking.machineNetwork

The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
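If you need a dedicated key for this field, generating one with ssh-keygen is a reasonable sketch; the temporary path below is an assumption, and any existing key that your ssh-agent process holds works equally well.

```shell
# Generate a throwaway ed25519 key pair; the public key line is what
# would go into the sshKey field. Paths here are illustrative only.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$keydir/id_cluster" -q
# Show the key type of the generated public key.
cut -d' ' -f1 "$keydir/id_cluster.pub"
```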

Table 1.16. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnets for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.6.8.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-gov-west-1a
      - us-gov-west-1b


1 10 14: Required.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-gov-west-1c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-gov-west-1
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 11
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
publish: Internal 17


3 7: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

11: If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

12: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16: You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

17: How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.
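Because the installation program consumes install-config.yaml when it runs, keeping a backup copy before invoking it is a common precaution. This sketch uses a stand-in file rather than a real configuration; the temporary directory is an assumption for demonstration.

```shell
# Back up a (stand-in) install-config.yaml before handing it to the
# installer, which removes the file during cluster creation.
workdir=$(mktemp -d)
printf 'apiVersion: v1\nbaseDomain: example.com\n' > "$workdir/install-config.yaml"
cp "$workdir/install-config.yaml" "$workdir/install-config.yaml.bak"
ls "$workdir"
```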

1.6.8.3. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions


without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then, the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

1.6.8.4. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account.

You created an Amazon S3 bucket with the required IAM service role.

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

Procedure

1. Export your AWS profile as an environment variable:

   $ export AWS_PROFILE=<aws_profile> 1

   1: The AWS profile name that holds your AWS credentials, like govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

   $ export AWS_DEFAULT_REGION=<aws_region> 1

   1: The AWS region, like us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

   $ export RHCOS_VERSION=<version> 1

   1: The RHCOS VMDK version, like 4.6.0.
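A small loop can confirm that the three variables from steps 1-3 are set before continuing; the values used here are illustrative assumptions, not real credentials or regions.

```shell
# Sanity-check the environment variables exported in steps 1-3.
export AWS_PROFILE=govcloud             # assumed example profile
export AWS_DEFAULT_REGION=us-gov-east-1 # assumed example region
export RHCOS_VERSION=4.6.0              # assumed example version
for v in AWS_PROFILE AWS_DEFAULT_REGION RHCOS_VERSION; do
  eval "val=\$$v"
  if [ -n "$val" ]; then echo "$v=$val"; else echo "$v is unset"; fi
done
```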


4. Export the Amazon S3 bucket name as an environment variable:

   $ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

5. Create the containers.json file and define your RHCOS VMDK file:

   $ cat <<EOF > containers.json
   {
      "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
      "Format": "vmdk",
      "UserBucket": {
         "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
         "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
      }
   }
   EOF

6. Import the RHCOS disk as an Amazon EBS snapshot:

   $ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
        --description "<description>" \ 1
        --disk-container "file://<file_path>/containers.json" 2

   1: The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.
   2: The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

7. Check the status of the image import:

   $ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

   Example output

   {
       "ImportSnapshotTasks": [
           {
               "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
               "ImportTaskId": "import-snap-fh6i8uil",
               "SnapshotTaskDetail": {
                   "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                   "DiskImageSize": 8190566400.0,
                   "Format": "VMDK",
                   "SnapshotId": "snap-06331325870076318",
                   "Status": "completed",
                   "UserBucket": {
                       "S3Bucket": "external-images",
                       "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                   }
               }
           }
       ]
   }
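The SnapshotId value that the registration step needs can be pulled out of this JSON mechanically. The sketch below embeds a trimmed copy of the example output and uses Python's standard json module; it is an illustration, not part of the documented procedure.

```shell
# Extract SnapshotId from describe-import-snapshot-tasks output.
cat > /tmp/import-tasks.json <<'EOF'
{"ImportSnapshotTasks":[{"SnapshotTaskDetail":{"SnapshotId":"snap-06331325870076318","Status":"completed"}}]}
EOF
python3 - <<'EOF'
import json
with open("/tmp/import-tasks.json") as f:
    tasks = json.load(f)["ImportSnapshotTasks"]
print(tasks[0]["SnapshotTaskDetail"]["SnapshotId"])
EOF
```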


Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

   $ aws ec2 register-image \
        --region ${AWS_DEFAULT_REGION} \
        --architecture x86_64 \ 1
        --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
        --ena-support \
        --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
        --virtualization-type hvm \
        --root-device-name '/dev/xvda' \
        --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4

   1: The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.
   2: The Description from the imported snapshot.
   3: The name of the RHCOS AMI.
   4: The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.6.8.5. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.


NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

   apiVersion: v1
   baseDomain: my.domain.com
   proxy:
     httpProxy: http://<username>:<pswd>@<ip>:<port> 1
     httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
     noProxy: example.com 3
   additionalTrustBundle: | 4
       -----BEGIN CERTIFICATE-----
       <MY_TRUSTED_CA_CERT>
       -----END CERTIFICATE-----

   1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.
   2: A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.
   3: A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.
   4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.


NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.6.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

   $ openshift-install create cluster --dir=<installation_directory> \ 1
       --log-level=info 2

   1: For <installation_directory>, specify the location of your customized install-config.yaml file.
   2: To view different installation details, specify warn, debug, or error instead of info.

   NOTE

   If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.


Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.6.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.6.10.1. Installing the OpenShift CLI on Linux

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

OpenShift Container Platform 46 Installing on AWS

142

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

   $ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.
   To check your PATH, execute the following command:

   $ echo $PATH

After you install the CLI, it is available using the oc command:

   $ oc <command>

1.6.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
   To check your PATH, open the command prompt and execute the following command:

   C:\> path

After you install the CLI, it is available using the oc command:

   C:\> oc <command>

1.6.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
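Before moving the oc binary, it can help to confirm that the target directory really is on your PATH. This portable check is a sketch; /usr/bin stands in for whatever directory you actually choose.

```shell
# Check whether a candidate directory is on PATH. /usr/bin is an
# assumed example; substitute your own target directory.
target=/usr/bin
case ":$PATH:" in
  *":$target:"*) echo "$target is on PATH" ;;
  *)             echo "$target is NOT on PATH" ;;
esac
```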


Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
   To check your PATH, open a terminal and execute the following command:

   $ echo $PATH

After you install the CLI, it is available using the oc command:

   $ oc <command>

1.6.11. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

   $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

   1: For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

   $ oc whoami

   Example output

   system:admin

1.6.12. Logging in to the cluster by using the web console


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

   $ cat <installation_directory>/auth/kubeadmin-password

   NOTE

   Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

   $ oc get routes -n openshift-console | grep 'console-openshift'

   NOTE

   Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

   Example output

   console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.6.13. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting
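The console URL can be assembled from the route line shown in the example output. This one-liner parses the documented sample and is purely illustrative; in a live cluster the hostname comes from the oc get routes output instead.

```shell
# Build the console URL from the second column of the route line.
line='console console-openshift-console.apps.mycluster.example.com console https reencrypt/Redirect None'
host=$(printf '%s\n' "$line" | awk '{print $2}')
echo "https://$host"
```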


1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.7.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall, you configured it to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.7.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The

OpenShift Container Platform 46 Installing on AWS

146

Telemetry service which runs by default to provide metrics about cluster health and the success ofupdates also requires Internet access If your cluster is connected to the Internet Telemetry runsautomatically and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM)

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.7.3. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node

Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.7.3.1. Cluster machines

You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

Table 1.17. Instance types for machines

Instance type    Bootstrap    Control plane    Compute
i3.large         x
m4.large                                       x
m4.xlarge                     x                x
m4.2xlarge                    x                x
m4.4xlarge                    x                x
m4.8xlarge                    x                x
m4.10xlarge                   x                x
m4.16xlarge                   x                x
m5.large                                       x
m5.xlarge                     x                x
m5.2xlarge                    x                x
m5.4xlarge                    x                x
m5.8xlarge                    x                x
m5.10xlarge                   x                x
m5.16xlarge                   x                x
c4.large                                       x
c4.xlarge                                      x
c4.2xlarge                    x                x
c4.4xlarge                    x                x
c4.8xlarge                    x                x
r4.large                                       x
r4.xlarge                     x                x
r4.2xlarge                    x                x
r4.4xlarge                    x                x
r4.8xlarge                    x                x
r4.16xlarge                   x                x

You might be able to use other instance types that meet the specifications of these instance types.

1.7.3.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
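One simple approval mechanism is to list pending CSRs and pass each name to `oc adm certificate approve`. The sketch below filters a captured `oc get csr` listing with awk; the sample listing is hypothetical, and in practice you would pipe live `oc get csr` output instead.

```shell
# Hypothetical `oc get csr` output captured into a variable for illustration.
csr_list='NAME       AGE   REQUESTOR                                                                   CONDITION
csr-8b2br  15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-4hl85  16m   system:node:ip-10-0-1-10.ec2.internal                                       Approved,Issued'

# Print the names of CSRs whose condition is Pending (skip the header row).
pending=$(printf '%s\n' "$csr_list" | awk 'NR>1 && $NF=="Pending" {print $1}')
printf '%s\n' "$pending"
# Each printed name would then be approved with: oc adm certificate approve <name>
```

The final `oc adm certificate approve` step is shown only as a comment because it requires a live cluster connection.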

1.7.3.3. Other infrastructure components

A VPC

DNS entries


Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups

IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
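All three endpoint names follow the same `<service>.<region>.amazonaws.com` pattern, so they can be derived from the region, as this small sketch shows (the `us-east-1` value is only illustrative):

```shell
# Derive the three required endpoint names for a given AWS region.
region="us-east-1"  # illustrative value; substitute your cluster's region
for svc in ec2 elasticloadbalancing s3; do
  printf '%s.%s.amazonaws.com\n' "$svc" "$region"
done
```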

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
  80: Inbound HTTP traffic
  443: Inbound HTTPS traffic
  22: Inbound SSH traffic
  1024 - 65535: Inbound ephemeral traffic
  0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
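The two required API records are derived from the cluster name and base domain, as this sketch shows (the cluster name and domain values are only illustrative):

```shell
# Derive the two required API DNS names from cluster name and base domain.
cluster_name="mycluster"   # illustrative value
base_domain="example.com"  # illustrative value
echo "api.${cluster_name}.${base_domain}"      # must resolve to the external load balancer
echo "api-int.${cluster_name}.${base_domain}"  # must resolve to the internal load balancer
```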

Component: DNS
AWS type: AWS::Route53::HostedZone
Description: The hosted zone for your internal DNS.

Component: etcd record sets
AWS type: AWS::Route53::RecordSet
Description: The registration records for etcd for your control plane machines.

Component: Public load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your public subnets.

Component: External API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the external API server.

Component: External listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the external load balancer.

Component: External target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the external load balancer.

Component: Private load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your private subnets.

Component: Internal API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the internal API server.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 22623 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Security groups

The control plane and worker machines require access to the following ports:

Group: MasterSecurityGroup (AWS::EC2::SecurityGroup)
  icmp: 0
  tcp: 22, 6443, 22623

Group: WorkerSecurityGroup (AWS::EC2::SecurityGroup)
  icmp: 0
  tcp: 22

Group: BootstrapSecurityGroup (AWS::EC2::SecurityGroup)
  tcp: 22, 19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

MasterIngressEtcd: etcd (tcp, 2379 - 2380)

MasterIngressVxlan: Vxlan packets (udp, 4789)

MasterIngressWorkerVxlan: Vxlan packets (udp, 4789)

MasterIngressInternal: Internal cluster communication and Kubernetes proxy metrics (tcp, 9000 - 9999)

MasterIngressWorkerInternal: Internal cluster communication (tcp, 9000 - 9999)

MasterIngressKube: Kubernetes kubelet, scheduler, and controller manager (tcp, 10250 - 10259)

MasterIngressWorkerKube: Kubernetes kubelet, scheduler, and controller manager (tcp, 10250 - 10259)

MasterIngressIngressServices: Kubernetes Ingress services (tcp, 30000 - 32767)

MasterIngressWorkerIngressServices: Kubernetes Ingress services (tcp, 30000 - 32767)

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

WorkerIngressVxlan: Vxlan packets (udp, 4789)

WorkerIngressWorkerVxlan: Vxlan packets (udp, 4789)

WorkerIngressInternal: Internal cluster communication (tcp, 9000 - 9999)

WorkerIngressWorkerInternal: Internal cluster communication (tcp, 9000 - 9999)

WorkerIngressKube: Kubernetes kubelet, scheduler, and controller manager (tcp, 10250)

WorkerIngressWorkerKube: Kubernetes kubelet, scheduler, and controller manager (tcp, 10250)

WorkerIngressIngressServices: Kubernetes Ingress services (tcp, 30000 - 32767)

WorkerIngressWorkerIngressServices: Kubernetes Ingress services (tcp, 30000 - 32767)

Roles and instance profiles

You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role: Master
  Allow: ec2:*, elasticloadbalancing:*, iam:PassRole, s3:GetObject

Role: Worker
  Allow: ec2:Describe*

Role: Bootstrap
  Allow: ec2:Describe*, ec2:AttachVolume, ec2:DetachVolume
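For illustration, the broad Master-role permissions above can be expressed as an IAM policy document. This is a sketch only: the CloudFormation templates define the real roles, and the wildcard action forms and `"Resource": "*"` scope shown here are an interpretation of the table, not the templates' exact output.

```shell
# Sketch of the Master role's broad Allow permissions as an IAM policy document.
policy='{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*", "elasticloadbalancing:*", "iam:PassRole", "s3:GetObject"],
      "Resource": "*"
    }
  ]
}'
printf '%s\n' "$policy"
```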

1.7.3.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.

Example 1.12. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.13. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.14. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener


Example 1.15. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.16. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.17. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.18. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.19. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.20. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.21. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.22. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas


1.7.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.7.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

$ tar xvf openshift-install-linux.tar.gz

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.7.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.7.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage

$ mkdir $HOME/clusterconfig

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml

device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory.

1: The storage device name of the disk that you want to partition.

2: When adding a data partition to the boot disk, a minimum value of 25000 MiB (mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign


1.7.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.


$ openshift-install create install-config --dir=<installation_directory> 1


NOTE

The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.
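For reference, the credentials file uses the standard AWS CLI profile layout. The sketch below prints its general shape; the key values are placeholders, not real credentials.

```shell
# Shape of the ~/.aws/credentials file that the installation program reads and
# writes (placeholder values shown, not real credentials).
credentials='[default]
aws_access_key_id = <your_access_key_id>
aws_secret_access_key = <your_secret_access_key>'
printf '%s\n' "$credentials"
```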

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.

1.7.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).


Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2: A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3: A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
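The subdomain rule for noProxy entries can be illustrated with a small shell function. This is a simplified sketch of the stated rule only, not the cluster's actual proxy-matching code:

```shell
# Simplified illustration of the noProxy rule: a leading dot matches
# subdomains only; a bare entry matches the exact host.
matches_noproxy() {  # usage: matches_noproxy <host> <noProxy-entry>
  case "$2" in
    .*) case "$1" in *"$2") return 0 ;; *) return 1 ;; esac ;;
    *)  [ "$1" = "$2" ] ;;
  esac
}
matches_noproxy x.y.com .y.com && echo "x.y.com bypasses the proxy"
matches_noproxy y.com  .y.com || echo "y.com is proxied"
```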

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.


apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: http://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

CHAPTER 1. INSTALLING ON AWS


NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.7.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

Example output

For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ openshift-install create manifests --dir=<installation_directory>

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml


Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone:
    id: mycluster-100419-private-zone
  publicZone:
    id: example.openshift.com
status: {}

$ openshift-install create ignition-configs --dir=<installation_directory>


1.7.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites

You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

You generated the Ignition config files for your cluster.

You installed the jq package.

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

The output of this command is your cluster name and a random string.
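If jq is not available, the same field can be read with any JSON parser. The sketch below uses an illustrative metadata.json payload; the field name infraID matches the one queried by the jq command in this procedure:

```python
import json

# Illustrative stand-in for the contents of <installation_directory>/metadata.json;
# a real file contains more fields than shown here.
metadata = json.loads('{"clusterName": "openshift", "infraID": "openshift-vw9j6"}')

# Equivalent of: jq -r .infraID metadata.json
infra_id = metadata["infraID"]
print(infra_id)  # openshift-vw9j6
```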

1.7.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

$ jq -r .infraID <installation_directory>/metadata.json

openshift-vw9j6


You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

The CIDR block for the VPC.

Specify a CIDR block in the format x.x.x.x/16-24.

The number of availability zones to deploy the VPC in.

Specify an integer between 1 and 3.

The size of each subnet in each availability zone.

Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
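The relationship between the SubnetBits value and the resulting subnet prefix is simple arithmetic: SubnetBits counts the host bits per subnet, so the prefix length is 32 minus that value. A quick check:

```python
# SubnetBits counts host bits per subnet; the CIDR prefix is 32 minus that,
# so the documented bounds give: 5 -> /27 and 13 -> /19.
def prefix_length(subnet_bits: int) -> int:
    assert 5 <= subnet_bits <= 13, "the template allows values from 5 to 13"
    return 32 - subnet_bits

print(prefix_length(5))   # 27
print(prefix_length(13))  # 19
```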

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line.

[
  {
    "ParameterKey": "VpcCidr",
    "ParameterValue": "10.0.0.0/16"
  },
  {
    "ParameterKey": "AvailabilityZoneCount",
    "ParameterValue": "1"
  },
  {
    "ParameterKey": "SubnetBits",
    "ParameterValue": "12"
  }
]

$ aws cloudformation create-stack --stack-name <name> \
     --template-body file://<template>.yaml \
     --parameters file://<parameters>.json


<name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

VpcId
The ID of your VPC.

PublicSubnetIds
The IDs of the new public subnets.

PrivateSubnetIds
The IDs of the new private subnets.
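These parameter values can also be pulled programmatically from the JSON output of aws cloudformation describe-stacks. The payload below is a made-up example that only shows the shape of the response:

```python
import json

# Illustrative fragment of `aws cloudformation describe-stacks` JSON output;
# the VpcId and subnet values here are invented for the example.
response = json.loads("""
{"Stacks": [{"StackStatus": "CREATE_COMPLETE",
             "Outputs": [{"OutputKey": "VpcId", "OutputValue": "vpc-0abc"},
                         {"OutputKey": "PublicSubnetIds", "OutputValue": "subnet-1,subnet-2"}]}]}
""")

stack = response["Stacks"][0]
# Index the Outputs list by OutputKey for convenient lookup.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(stack["StackStatus"], outputs["VpcId"])
```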

1.7.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.23. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP
        - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP2
        - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId:
        "Fn::GetAtt":
        - EIP3
        - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value: !Join [",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value: !Join [",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]]


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC)

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
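The hosted zone ID can also be parsed from the JSON form of the same command's response. The payload below is an illustrative fragment, not real command output:

```python
import json

# Illustrative fragment of `aws route53 list-hosted-zones-by-name` JSON output.
response = json.loads("""
{"HostedZones": [{"Id": "/hostedzone/Z21IXYZABCZ2A4",
                  "Name": "mycluster.example.com."}]}
""")

# The Id field carries a "/hostedzone/" prefix; keep only the final segment.
zone_id = response["HostedZones"][0]["Id"].split("/")[-1]
print(zone_id)  # Z21IXYZABCZ2A4
```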

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain>

mycluster.example.com.  False  100
HOSTEDZONES  65F8F38E-2268-B835-E15C-AB55336FCBFA  /hostedzone/Z21IXYZABCZ2A4  mycluster.example.com.  10


A short, representative cluster name to use for host names, etc.

Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The Route 53 public zone ID to register the targets with.

Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

The Route 53 zone to register the targets with.

Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

The public subnets that you created for your VPC.

[
  {
    "ParameterKey": "ClusterName",
    "ParameterValue": "mycluster"
  },
  {
    "ParameterKey": "InfrastructureName",
    "ParameterValue": "mycluster-<random_string>"
  },
  {
    "ParameterKey": "HostedZoneId",
    "ParameterValue": "<random_string>"
  },
  {
    "ParameterKey": "HostedZoneName",
    "ParameterValue": "example.com"
  },
  {
    "ParameterKey": "PublicSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "PrivateSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "VpcId",
    "ParameterValue": "vpc-<random_string>"
  }
]


Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

The private subnets that you created for your VPC.

Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster.

Specify the VpcId value from the output of the CloudFormation template for the VPC.

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

$ aws cloudformation create-stack --stack-name <name> \
     --template-body file://<template>.yaml \
     --parameters file://<parameters>.json \
     --capabilities CAPABILITY_NAMED_IAM

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183


After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

PrivateHostedZoneId

Hosted zone ID for the private DNS

ExternalApiLoadBalancerName

Full name of the external API load balancer

InternalApiLoadBalancerName

Full name of the internal API load balancer

ApiServerDnsName

Full host name of the API server

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register or deregister IP targets for these load balancers

ExternalApiTargetGroupArn

ARN of external API target group

InternalApiTargetGroupArn

ARN of internal API target group

InternalServiceTargetGroupArn

ARN of internal service target group

1.7.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.24. CloudFormation template for the network and load balancers

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref ExternalApiTargetGroup
      LoadBalancerArn: !Ref ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalServiceTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterTargetLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags",
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
              ]
            Resource: "*"

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterSubnetTagsLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

You can view details about your hosted zones by navigating to the AWS Route 53 console.

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.7.10. Creating security group and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

CHAPTER 1. INSTALLING ON AWS

191


Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3. The CIDR block for the VPC.

4. Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.

5. The private subnets that you created for your VPC.

6. Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

7. The VPC that you created for the cluster.

8. Specify the VpcId value from the output of the CloudFormation template for the VPC.


[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },
  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]
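Because the parameter file is plain JSON, it can also be generated with a short script instead of edited by hand. A minimal sketch in Python; the infrastructure name and resource IDs below are invented placeholders, not values from a real stack:

```python
import json

# Placeholder values -- substitute the real IDs from your VPC stack outputs.
parameters = [
    {"ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-x7b2w"},
    {"ParameterKey": "VpcCidr", "ParameterValue": "10.0.0.0/16"},
    {"ParameterKey": "PrivateSubnets", "ParameterValue": "subnet-0a1b2c3d"},
    {"ParameterKey": "VpcId", "ParameterValue": "vpc-0a1b2c3d"},
]

# Write the file in the shape that `aws cloudformation create-stack` expects.
with open("security-parameters.json", "w") as f:
    json.dump(parameters, f, indent=2)
```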



2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>
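The describe-stacks response is JSON, so the output values can be extracted with a few lines of scripting rather than copied by eye. A sketch against a trimmed, invented sample response (real responses carry many more fields):

```python
import json

# Trimmed sample of an `aws cloudformation describe-stacks` response;
# the security group IDs here are invented for illustration.
response = json.loads("""
{
  "Stacks": [
    {
      "StackName": "cluster-sec",
      "StackStatus": "CREATE_COMPLETE",
      "Outputs": [
        {"OutputKey": "MasterSecurityGroupId", "OutputValue": "sg-0aaa"},
        {"OutputKey": "WorkerSecurityGroupId", "OutputValue": "sg-0bbb"}
      ]
    }
  ]
}
""")

stack = response["Stacks"][0]
assert stack["StackStatus"] == "CREATE_COMPLETE"

# Map OutputKey -> OutputValue for feeding into later templates.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
```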


MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile

1.7.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.25. CloudFormation template for security objects

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
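The AllowedPattern for VpcCidr is an ordinary regular expression, so a parameter value can be checked locally before the stack is launched. A sketch in Python using the pattern as written in the template:

```python
import re

# The AllowedPattern for VpcCidr from the template: a dotted-quad IPv4
# address followed by a prefix length restricted to /16 through /24.
VPC_CIDR_PATTERN = (
    r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}"
    r"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
    r"(\/(1[6-9]|2[0-4]))$"
)

def is_valid_vpc_cidr(value: str) -> bool:
    """Return True if the value would pass the template's constraint."""
    return re.match(VPC_CIDR_PATTERN, value) is not None
```

Note that the constraint rejects otherwise valid CIDR blocks whose prefix is outside /16–/24, such as 10.0.0.0/25.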

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:


      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 6443
        ToPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId


      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp


  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services


      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress


    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999


      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId


      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"


            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have an RHCOS AMI published by importing your own AMI.

Table 1.18. RHCOS AMIs

AWS zone AWS AMI

af-south-1 ami-09921c9c1c36e695c

ap-east-1 ami-01ee8446e9af6b197

ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile


eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1


1.7.11.1. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

1.7.11.2. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account

You created an Amazon S3 bucket with the required IAM service role

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.




You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

Procedure

1. Export your AWS profile as an environment variable:

The AWS profile name that holds your AWS credentials like govcloud

2. Export the region to associate with your custom AMI as an environment variable:

The AWS region like us-gov-east-1

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

The RHCOS VMDK version, like 4.6.0.

4. Export the Amazon S3 bucket name as an environment variable:

5. Create the containers.json file and define your RHCOS VMDK file:

6. Import the RHCOS disk as an Amazon EBS snapshot:

The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.

The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

$ export AWS_PROFILE=<aws_profile> 1

$ export AWS_DEFAULT_REGION=<aws_region> 1

$ export RHCOS_VERSION=<version> 1

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF
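The containers.json document is small enough to build programmatically as well. A sketch in Python; the fallback version and bucket name are placeholders used only when the environment variables from the previous steps are not set:

```python
import json
import os

# Fallback values are placeholders for illustration; in the documented
# procedure these come from the exported environment variables.
rhcos_version = os.environ.get("RHCOS_VERSION", "4.6.0")
bucket = os.environ.get("VMIMPORT_BUCKET_NAME", "my-vmimport-bucket")

image_name = f"rhcos-{rhcos_version}-x86_64-aws.x86_64"
container = {
    "Description": image_name,
    "Format": "vmdk",
    "UserBucket": {
        "S3Bucket": bucket,
        "S3Key": f"{image_name}.vmdk",
    },
}

# Same shape as the heredoc above, ready for `aws ec2 import-snapshot`.
with open("containers.json", "w") as f:
    json.dump(container, f, indent=3)
```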

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2



7. Check the status of the image import:

Example output

Copy the SnapshotId to register the image

8. Create a custom RHCOS AMI from the RHCOS snapshot:

The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.

The Description from the imported snapshot

The name of the RHCOS AMI

The SnapshotID from the imported snapshot

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.7.12. Creating the bootstrap node in AWS

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region $AWS_DEFAULT_REGION

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 8190566400,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
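Once the task reports completed, the SnapshotId can be pulled out of the JSON response with a short script instead of copied by hand. A sketch against a trimmed version of the example output above:

```python
import json

# Trimmed sample of an `aws ec2 describe-import-snapshot-tasks` response,
# matching the example output in this procedure.
tasks = json.loads("""
{
  "ImportSnapshotTasks": [
    {
      "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
      "ImportTaskId": "import-snap-fh6i8uil",
      "SnapshotTaskDetail": {
        "SnapshotId": "snap-06331325870076318",
        "Status": "completed"
      }
    }
  ]
}
""")

detail = tasks["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
snapshot_id = None
if detail["Status"] == "completed":
    # This value is passed to `aws ec2 register-image` in the next step.
    snapshot_id = detail["SnapshotId"]
```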

$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \ 1
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4


You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.



a. Create the bucket:

<cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

b. Upload the bootstrap.ign Ignition config file to the bucket:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

Example output

2. Create a JSON file that contains the parameter values that the template requires:

$ aws s3 mb s3://<cluster-name>-infra 1

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

$ aws s3 ls s3://<cluster-name>-infra/

2019-04-03 16:15:16     314878 bootstrap.ign

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },



1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4. Specify a valid AWS::EC2::Image::Id value.

5. CIDR block to allow SSH access to the bootstrap node.

6. Specify a CIDR block in the format x.x.x.x/16-24.

7. The public subnet that is associated with your VPC to launch the bootstrap node into.

8. Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9. The master security group ID (for registering temporary rules).

10. Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11. The VPC that created resources will belong to.

  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]



12. Specify the VpcId value from the output of the CloudFormation template for the VPC.

13. Location to fetch the bootstrap Ignition config file from.

14. Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15. Whether or not to register a network load balancer (NLB).

16. Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17. The ARN for the NLB IP target registration lambda group.

18. Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19. The ARN for the external API load balancer target group.

20. Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21. The ARN for the internal API load balancer target group.

22. Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23. The ARN for the internal service load balancer target group.

24. Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM 4



<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

5. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

BootstrapInstanceId

The bootstrap Instance ID

BootstrapPublicIp

The bootstrap node public IP address

BootstrapPrivateIp

The bootstrap node private IP address

1.7.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.26. CloudFormation template for the bootstrap machine

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String


  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId

OpenShift Container Platform 46 Installing on AWS

212

- Label default Network Configuration Parameters - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label default Load Balancer Automation Parameters - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels InfrastructureName default Infrastructure Name VpcId default VPC ID AllowedBootstrapSshCidr default Allowed SSH Source PublicSubnet default Public Subnet RhcosAmi default Red Hat Enterprise Linux CoreOS AMI ID BootstrapIgnitionLocation default Bootstrap Ignition Source MasterSecurityGroupId default Master Security Group ID AutoRegisterELB default Use Provided ELB Automation

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.7.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]

CHAPTER 1 INSTALLING ON AWS

217

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
4 Specify an AWS::EC2::Image::Id value.
5 Whether or not to perform DNS etcd registration.
6 Specify yes or no. If you specify yes, you must provide hosted zone information.
7 The Route 53 private zone ID to register the etcd targets with.
8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
9 The Route 53 zone to register the targets with.
10 Specify <cluster_name>.<domain_name>, where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
11 13 15 A subnet, preferably private, to launch the control plane machines on.
12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
17 The master security group ID to associate with master nodes.
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
19 The location to fetch the control plane Ignition config file from.
20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
21 The base64 encoded certificate authority string to use.
22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
23 The IAM profile to associate with master nodes.
24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
25 The type of AWS instance to use for the control plane machines.
26 Allowed values:

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

27 Whether or not to register a network load balancer (NLB).
28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
29 The ARN for the NLB IP target registration lambda group.
30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
31 The ARN for the external API load balancer target group.
32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
33 The ARN for the internal API load balancer target group.
34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
35 The ARN for the internal service load balancer target group.
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
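The InfrastructureName and CertificateAuthorities values above come straight out of the installer artifacts. A minimal sketch of reading them, using mock metadata.json and master.ign files so the commands can run standalone (in a real install, these files already exist in your <installation_directory> and hold real values):

```shell
# Mock installer artifacts for illustration only; in a real install these
# files already exist in your <installation_directory>.
mkdir -p installdir
printf '{"infraID":"mycluster-a1b2c"}' > installdir/metadata.json
printf '{"ignition":{"security":{"tls":{"certificateAuthorities":[{"source":"data:text/plain;charset=utf-8,ABCxYz=="}]}},"version":"3.1.0"}}' > installdir/master.ign

# InfrastructureName: the <cluster-name>-<random-string> value.
python3 -c 'import json; print(json.load(open("installdir/metadata.json"))["infraID"])'

# CertificateAuthorities: the long data URL from master.ign.
python3 -c 'import json; print(json.load(open("installdir/master.ign"))["ignition"]["security"]["tls"]["certificateAuthorities"][0]["source"])'
```

The JSON paths mirror the Ignition v3 config structure that the installer writes; only the file contents here are stand-ins.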

2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>
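Before launching the stack in step 4, it can help to sanity-check the parameters JSON file. A small sketch, with a mock parameters.json created here so the commands run standalone (point the check at your real file instead):

```shell
# Mock parameters file for illustration; use your real parameters JSON.
cat > parameters.json <<'EOF'
[
  {"ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-a1b2c"},
  {"ParameterKey": "MasterInstanceType", "ParameterValue": "m5.xlarge"}
]
EOF

# Fail early on malformed JSON and list the keys that will be passed to
# aws cloudformation create-stack.
python3 -c '
import json
for p in json.load(open("parameters.json")):
    print(p["ParameterKey"], "=", p["ParameterValue"])
'
```

A malformed file fails here with a parse error instead of a less obvious create-stack failure later.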


1.7.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.27. CloudFormation template for control plane machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [
        " ",
        ["0", "10", "2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0", "10", "2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
      ]
      - !Join [
        " ",
        ["0", "10", "2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
      ]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value: !Join [
      ",",
      [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
    ]

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

You created the control plane machines

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.


3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
4 Specify an AWS::EC2::Image::Id value.
5 A subnet, preferably private, to launch the worker nodes on.
6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
7 The worker security group ID to associate with worker nodes.
8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
9 The location to fetch the bootstrap Ignition config file from.
10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
11 Base64 encoded certificate authority string to use.
12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
13 The IAM profile to associate with worker nodes.
14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
15 The type of AWS instance to use for the compute machines.
16 Allowed values:

m4.large

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.large

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.large

c4.xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.large

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

NOTE

The CloudFormation template creates a stack that represents one worker node.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.

1.7.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.28. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp
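The UserData wiring in the template can be checked outside CloudFormation: !Sub renders a one-line Ignition pointer config, and Fn::Base64 encodes it into EC2 user data. A sketch that reproduces the round trip, with illustrative stand-ins for the IgnitionLocation and CertificateAuthorities parameter values:

```shell
# Illustrative stand-ins for the IgnitionLocation and CertificateAuthorities
# parameters; the real values come from your parameter file.
SOURCE="https://api-int.mycluster.example.com:22623/config/worker"
CA_BUNDLE="data:text/plain;charset=utf-8,ABCxYz=="

# The string that !Sub renders and Fn::Base64 then encodes into user data.
payload="{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"${SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"${CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}"

# Round-trip through base64; decoding recovers the pointer config that
# Ignition on the node will read.
printf '%s' "$payload" | base64 | base64 -d
```

If the decoded output is not valid JSON matching the Ignition v3.1.0 shape, the node will fail to fetch its real config at boot.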


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
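Because one stack represents one worker node (step 6 above), the stack launches are easy to script. A hypothetical dry-run sketch that prints one create-stack command per worker; the stack names and file paths are illustrative, and dropping the leading echo would launch the stacks for real:

```shell
# Dry run: print one create-stack command per worker stack instead of
# executing it. Stack names and file paths are illustrative.
for i in 0 1 2; do
  echo aws cloudformation create-stack \
    --stack-name "cluster-worker-${i}" \
    --template-body "file://worker.yaml" \
    --parameters "file://worker-params.json"
done
```

Each iteration reuses the same template and parameter files and varies only the stack name, which is exactly what the procedure requires.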

1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

You created the control plane machines

You created the worker nodes

Procedure

1 Change to the directory that contains the installation program and start the bootstrap processthat initializes the OpenShift Container Platform control plane

For ltinstallation_directorygt specify the path to the directory that you stored theinstallation files in

To view different installation details specify warn debug or error instead of info

Example output

$ openshift-install wait-for bootstrap-complete --dir=ltinstallation_directorygt 1 --log-level=info 2

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s

CHAPTER 1. INSTALLING ON AWS


If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

You can view details about the running instances that are created by using the AWS EC2 console.

1.7.16. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.7.16.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:

$ tar xvzf <file>

$ echo $PATH


1.7.16.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command:

1.7.16.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.7.17. Logging in to the cluster by using the CLI

$ oc <command>

C:\> path

C:\> oc <command>

$ echo $PATH

$ oc <command>


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.7.18. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

Example output

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

$ oc whoami

system:admin

$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
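One way such automation can be sketched is a small shell loop over pending CSRs. The `list_pending_csrs` and `approve_pending_csrs` helper names below are illustrative, not part of OpenShift, and a production approver must additionally verify the requestor identity as the note describes:

```shell
# Illustrative sketch only: a real approver must verify that each CSR came
# from the node-bootstrapper service account or a known node before approving.

# Print the NAME column of every CSR whose CONDITION column is Pending.
# Assumes the tabular output format of `oc get csr`.
list_pending_csrs() {
  "$@" | awk 'NR > 1 && $NF == "Pending" { print $1 }'
}

# Approve every pending CSR. Set OC to point at your oc client.
approve_pending_csrs() {
  OC="${OC:-oc}"
  for csr in $(list_pending_csrs "$OC" get csr); do
    "$OC" adm certificate approve "$csr"
  done
}
```

Run `approve_pending_csrs` periodically (for example, under watch) so that the client CSRs are picked up first and the server CSRs that follow them are approved as they appear.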

To approve them individually, run the following command for each valid CSR:

$ oc get csr

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

$ oc adm certificate approve <csr_name>


<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

Example output

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

Example output

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

$ oc get csr

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

$ oc adm certificate approve <csr_name>

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

$ oc get nodes

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0


NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.7.19. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1. Watch the cluster components come online:

Example output

master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

$ watch -n5 oc get clusteroperators

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0     True        False         False      3h56m
cloud-credential                           4.6.0     True        False         False      29h
cluster-autoscaler                         4.6.0     True        False         False      29h
config-operator                            4.6.0     True        False         False      6h39m
console                                    4.6.0     True        False         False      3h59m
csi-snapshot-controller                    4.6.0     True        False         False      4h12m
dns                                        4.6.0     True        False         False      4h15m
etcd                                       4.6.0     True        False         False      29h
image-registry                             4.6.0     True        False         False      3h59m
ingress                                    4.6.0     True        False         False      4h30m
insights                                   4.6.0     True        False         False      29h
kube-apiserver                             4.6.0     True        False         False      29h
kube-controller-manager                    4.6.0     True        False         False      29h
kube-scheduler                             4.6.0     True        False         False      29h
kube-storage-version-migrator              4.6.0     True        False         False      4h2m
machine-api                                4.6.0     True        False         False      29h
machine-approver                           4.6.0     True        False         False      6h34m
machine-config                             4.6.0     True        False         False      3h56m
marketplace                                4.6.0     True        False         False      4h2m
monitoring                                 4.6.0     True        False         False      6h31m
network                                    4.6.0     True        False         False      29h
node-tuning                                4.6.0     True        False         False      4h30m


2. Configure the Operators that are not available.

1.7.19.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.

1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure.

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.

2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m

$ oc edit configs.imageregistry.operator.openshift.io/cluster
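Step 1 of this procedure (the bucket lifecycle policy) can be sketched with the AWS CLI as follows. The bucket name is a hypothetical placeholder, and the exact rule shown is one assumed way to express the one-day abort window:

```shell
BUCKET=my-registry-bucket   # hypothetical bucket name -- substitute your own

# Lifecycle rule that aborts incomplete multipart uploads after one day.
LIFECYCLE='{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}'

# Apply it (requires AWS credentials, so shown commented out):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket "$BUCKET" --lifecycle-configuration "$LIFECYCLE"
```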


Example configuration:

WARNING

To secure your registry images in AWS, block public access to the S3 bucket.

1.7.19.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters.

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again.

1.7.20. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster.

storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
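Because the cluster config resource may not exist yet, the patch can be wrapped in a small retry helper instead of being rerun by hand. The retry function below is a generic sketch, not something the installer provides:

```shell
# Retry a command until it succeeds, up to a fixed number of attempts.
#   retry <attempts> <delay_seconds> <command...>
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
  done
}

# Example (commented out -- needs a live cluster):
# retry 10 30 oc patch configs.imageregistry.operator.openshift.io cluster \
#   --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
```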


Procedure

1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

Delete the stack by using the AWS CLI:

<name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console.

1.7.21. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc).

You installed the jq package.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1. Determine the routes to create:

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output


$ aws cloudformation delete-stack --stack-name <name>

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
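To sanity-check that a single wildcard record covers every route, you can strip the host label from each name and confirm that only one suffix remains. This sketch runs on sample text with a hypothetical cluster domain, not against a live cluster:

```shell
# Sample route hostnames, as printed by the oc command above
# (mycluster.example.com is a hypothetical domain).
ROUTES='oauth-openshift.apps.mycluster.example.com
console-openshift-console.apps.mycluster.example.com
downloads-openshift-console.apps.mycluster.example.com'

# Replace the leading host label with * and de-duplicate: a single line
# means one wildcard A record, *.apps.<cluster_name>.<domain_name>, suffices.
WILDCARD=$(printf '%s\n' "$ROUTES" | sed 's/^[^.]*\./*./' | sort -u)
printf '%s\n' "$WILDCARD"   # → *.apps.mycluster.example.com
```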


2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

Example output

3. Locate the hosted zone ID for the load balancer:

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID.

4. Obtain the public hosted zone ID for your cluster's domain:

For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

5. Add the alias records to your private zone:

$ oc -n openshift-ingress get service router-default

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                           PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
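Scripts can pull the EXTERNAL-IP column out of that output with awk. The service output below is a mock-up with a hypothetical load balancer hostname:

```shell
# Mock of `oc -n openshift-ingress get service router-default` output;
# the hostname is a placeholder, not a real AWS load balancer.
SVC='NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                              PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   example-lb.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m'

# EXTERNAL-IP is the fourth whitespace-separated column of the data row.
EXTERNAL_IP=$(printf '%s\n' "$SVC" | awk 'NR == 2 { print $4 }')
printf '%s\n' "$EXTERNAL_IP"   # → example-lb.us-east-2.elb.amazonaws.com
```

On a live cluster, `oc -n openshift-ingress get service router-default -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'` avoids parsing the table at all.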

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name \
    --dns-name "<domain_name>" \
    --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
    --output text

/hostedzone/Z3URY6TWQ91KVV
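The list-hosted-zones-by-name output carries a /hostedzone/ prefix; if you want the bare zone ID for reuse elsewhere, it can be stripped with shell parameter expansion:

```shell
# Zone ID as returned above, with the /hostedzone/ prefix.
RAW='/hostedzone/Z3URY6TWQ91KVV'

# Remove the prefix to get the bare zone ID.
ZONE_ID="${RAW#/hostedzone/}"
printf '%s\n' "$ZONE_ID"   # → Z3URY6TWQ91KVV
```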

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",


For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

For <public_hosted_zone_id>, specify the public hosted zone for your domain.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.


>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'
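Rather than typing the change batch at an interactive prompt, you can assemble it in a variable first. All three values below are hypothetical placeholders, and \\052 is the octal escape Route 53 uses for the * wildcard label:

```shell
CLUSTER_DOMAIN=mycluster.example.com                # hypothetical
HOSTED_ZONE_ID=Z3AADJGX6KTTL2                       # from the ELB lookup above
EXTERNAL_IP=example-lb.us-east-2.elb.amazonaws.com  # hypothetical

# Build the change batch; note the trailing period on DNSName.
CHANGE_BATCH=$(cat <<EOF
{"Changes": [{"Action": "CREATE", "ResourceRecordSet": {
  "Name": "\\\\052.apps.${CLUSTER_DOMAIN}", "Type": "A",
  "AliasTarget": {"HostedZoneId": "${HOSTED_ZONE_ID}",
    "DNSName": "${EXTERNAL_IP}.", "EvaluateTargetHealth": false}}}]}
EOF
)

# Apply to each zone (commented out -- needs AWS credentials):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id "<public_hosted_zone_id>" --change-batch "$CHANGE_BATCH"
```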


For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

1.7.22. Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI.

Procedure

From the directory that contains the installation program, complete the cluster installation:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

1.7.23. Logging in to the cluster by using the web console


$ openshift-install --dir=<installation_directory> wait-for install-complete

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.7.24. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.7.25. Next steps

Validating an installation

Customize your cluster

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None


If necessary, you can opt out of remote health reporting.

1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content.

IMPORTANT

While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires Internet access to use the AWS APIs.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.8.1. Prerequisites

You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform.

IMPORTANT

Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.


You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.8.2. About installations in restricted networks

In OpenShift Container Platform 4.6, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' IAM service, require Internet access, so you might still require Internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions.

IMPORTANT

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

1.8.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

The ClusterVersion status includes an Unable to retrieve available updates error.

By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

1.8.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.8.4. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node

Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.8.4.1. Cluster machines


You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

Table 1.19. Instance types for machines

Instance type Bootstrap Control plane Compute

i3.large x

m4.large x

m4.xlarge x x

m4.2xlarge x x

m4.4xlarge x x

m4.8xlarge x x

m4.10xlarge x x

m4.16xlarge x x

m5.large x

m5.xlarge x x

m5.2xlarge x x

m5.4xlarge x x

m5.8xlarge x x

m5.10xlarge x x


m5.16xlarge x x

c4.large x

c4.xlarge x

c4.2xlarge x x

c4.4xlarge x x

c4.8xlarge x x

r4.large x

r4.xlarge x x

r4.2xlarge x x

r4.4xlarge x x

r4.8xlarge x x

r4.16xlarge x x


You might be able to use other instance types that meet the specifications of these instance types.

1.8.4.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.

1.8.4.3. Other infrastructure components

A VPC

DNS entries

Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups


IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnets that the cluster uses. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
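In a CloudFormation template, such endpoints might look like the following sketch. This is illustrative only, not part of the provided templates; the ClusterVpc, PrivateRouteTable, and PrivateSubnet references are assumed names:

```yaml
# Illustrative only: a gateway endpoint for S3 and an interface endpoint for
# EC2. The VPC, route table, and subnet references are hypothetical; the
# service names resolve per region via the AWS::Region pseudo parameter.
Resources:
  S3GatewayEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref ClusterVpc
      ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
      RouteTableIds:
        - !Ref PrivateRouteTable
  Ec2InterfaceEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref ClusterVpc
      ServiceName: !Sub "com.amazonaws.${AWS::Region}.ec2"
      VpcEndpointType: Interface
      PrivateDnsEnabled: true
      SubnetIds:
        - !Ref PrivateSubnet
```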

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component AWS type Description

VPC

AWS::EC2::VPC

AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets

AWS::EC2::Subnet

AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway

AWS::EC2::InternetGateway

AWS::EC2::VPCGatewayAttachment

AWS::EC2::RouteTable

AWS::EC2::Route

AWS::EC2::SubnetRouteTableAssociation

AWS::EC2::NatGateway

AWS::EC2::EIP

You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

Network access control

AWS::EC2::NetworkAcl

AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

Port Reason


80 Inbound HTTP traffic

443 Inbound HTTPS traffic

22 Inbound SSH traffic

1024 - 65535 Inbound ephemeral traffic

0 - 65535 Outbound ephemeral traffic

Private subnets

AWS::EC2::Subnet

AWS::EC2::RouteTable

AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
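The api record can be sketched as an AWS::Route53::RecordSetGroup. This is a hedged example; the zone parameter, cluster domain, and the ExtApiElb load balancer name are placeholders, not names from the provided templates:

```yaml
# Illustrative only: an alias record pointing api.<cluster>.<domain> at the
# external API load balancer; zone and load balancer references are hypothetical.
ExternalApiServerRecord:
  Type: AWS::Route53::RecordSetGroup
  Properties:
    HostedZoneId: !Ref PublicHostedZoneId
    RecordSets:
    - Name: api.mycluster.example.com.
      Type: A
      AliasTarget:
        HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
        DNSName: !GetAtt ExtApiElb.DNSName
```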
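A listener of this kind can be sketched in CloudFormation as follows. This is a hedged fragment; the ExtApiElb, ExternalApiTargetGroup, and ClusterVpc names are assumptions for illustration:

```yaml
# Illustrative only: a TCP listener on port 6443 forwarding to a target group
# of control plane machines; all resource references are hypothetical.
ExternalApiListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref ExtApiElb
    Port: 6443
    Protocol: TCP
    DefaultActions:
    - Type: forward
      TargetGroupArn: !Ref ExternalApiTargetGroup
ExternalApiTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref ClusterVpc
    Port: 6443
    Protocol: TCP
    TargetType: ip
```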

Component AWS type Description

DNS

AWS::Route53::HostedZone

The hosted zone for your internal DNS.

etcd record sets

AWS::Route53::RecordSet

The registration records for etcd for your control plane machines.


Public load balancer

AWS::ElasticLoadBalancingV2::LoadBalancer

The load balancer for your public subnets.

External API server record

AWS::Route53::RecordSetGroup

Alias records for the external API server.

External listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 6443 for the external load balancer.

External target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the external load balancer.

Private load balancer

AWS::ElasticLoadBalancingV2::LoadBalancer

The load balancer for your private subnets.

Internal API server record

AWS::Route53::RecordSetGroup

Alias records for the internal API server.

Internal listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 22623 for the internal load balancer.

Internal target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the internal load balancer.

Internal listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 6443 for the internal load balancer.

Internal target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the internal load balancer.


Security groups

The control plane and worker machines require access to the following ports:

Group Type IP Protocol Port range

MasterSecurityGroup

AWS::EC2::SecurityGroup

icmp 0

tcp 22

tcp 6443

tcp 22623

WorkerSecurityGroup

AWS::EC2::SecurityGroup

icmp 0

tcp 22

BootstrapSecurityGroup

AWS::EC2::SecurityGroup

tcp 22

tcp 19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group Description IP protocol Port range

MasterIngressEtcd

etcd tcp 2379 - 2380

MasterIngressVxlan

Vxlan packets udp 4789

MasterIngressWorkerVxlan

Vxlan packets udp 4789

MasterIngressInternal

Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999

MasterIngressWorkerInternal

Internal cluster communication tcp 9000 - 9999

MasterIngressKube

Kubernetes kubelet, scheduler, and controller manager tcp 10250 - 10259


MasterIngressWorkerKube

Kubernetes kubelet, scheduler, and controller manager tcp 10250 - 10259

MasterIngressIngressServices

Kubernetes Ingress services tcp 30000 - 32767

MasterIngressWorkerIngressServices

Kubernetes Ingress services tcp 30000 - 32767

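One of these rules can be sketched in CloudFormation as follows. This is illustrative only; the MasterSecurityGroup reference is an assumed name:

```yaml
# Illustrative only: the etcd Ingress rule, allowing the control plane
# security group to reach itself on the etcd ports.
MasterIngressEtcd:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !GetAtt MasterSecurityGroup.GroupId
    SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
    Description: etcd
    IpProtocol: tcp
    FromPort: 2379
    ToPort: 2380
```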

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group Description IP protocol Port range

WorkerIngressVxlan

Vxlan packets udp 4789

WorkerIngressWorkerVxlan

Vxlan packets udp 4789

WorkerIngressInternal

Internal cluster communication tcp 9000 - 9999

WorkerIngressWorkerInternal

Internal cluster communication tcp 9000 - 9999

WorkerIngressKube

Kubernetes kubelet, scheduler, and controller manager tcp 10250

WorkerIngressWorkerKube

Kubernetes kubelet, scheduler, and controller manager tcp 10250

WorkerIngressIngressServices

Kubernetes Ingress services tcp 30000 - 32767

WorkerIngressWorkerIngressServices

Kubernetes Ingress services tcp 30000 - 32767

Roles and instance profiles


You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role Effect Action Resource

Master Allow ec2:* *

Allow elasticloadbalancing:* *

Allow iam:PassRole *

Allow s3:GetObject *

Worker Allow ec2:Describe* *

Bootstrap Allow ec2:Describe* *

Allow ec2:AttachVolume *

Allow ec2:DetachVolume *
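The worker role and its instance profile can be sketched in CloudFormation as follows. This is a hedged fragment, not the exact provided template; the resource and policy names are assumptions:

```yaml
# Illustrative only: a role granting the worker permissions above, plus the
# instance profile that attaches it to EC2 instances.
WorkerIamRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
      - Effect: Allow
        Principal:
          Service: ec2.amazonaws.com
        Action: sts:AssumeRole
    Policies:
    - PolicyName: worker-policy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: Allow
          Action: ec2:Describe*
          Resource: "*"
WorkerInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
    - !Ref WorkerIamRole
```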

1.8.4.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:

Example 1.29. Required EC2 permissions for installation

tag:TagResources

tag:UntagResources

ec2:AllocateAddress

ec2:AssociateAddress

ec2:AuthorizeSecurityGroupEgress

ec2:AuthorizeSecurityGroupIngress

ec2:CopyImage

ec2:CreateNetworkInterface

ec2:AttachNetworkInterface

ec2:CreateSecurityGroup


ec2:CreateTags

ec2:CreateVolume

ec2:DeleteSecurityGroup

ec2:DeleteSnapshot

ec2:DeregisterImage

ec2:DescribeAccountAttributes

ec2:DescribeAddresses

ec2:DescribeAvailabilityZones

ec2:DescribeDhcpOptions

ec2:DescribeImages

ec2:DescribeInstanceAttribute

ec2:DescribeInstanceCreditSpecifications

ec2:DescribeInstances

ec2:DescribeInternetGateways

ec2:DescribeKeyPairs

ec2:DescribeNatGateways

ec2:DescribeNetworkAcls

ec2:DescribeNetworkInterfaces

ec2:DescribePrefixLists

ec2:DescribeRegions

ec2:DescribeRouteTables

ec2:DescribeSecurityGroups

ec2:DescribeSubnets

ec2:DescribeTags

ec2:DescribeVolumes

ec2:DescribeVpcAttribute

ec2:DescribeVpcClassicLink

ec2:DescribeVpcClassicLinkDnsSupport

ec2:DescribeVpcEndpoints


ec2:DescribeVpcs

ec2:GetEbsDefaultKmsKeyId

ec2:ModifyInstanceAttribute

ec2:ModifyNetworkInterfaceAttribute

ec2:ReleaseAddress

ec2:RevokeSecurityGroupEgress

ec2:RevokeSecurityGroupIngress

ec2:RunInstances

ec2:TerminateInstances
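If you grant individual permissions instead of attaching AdministratorAccess, the actions above go into a customer managed IAM policy. A truncated sketch, showing only a few of the listed actions (the policy name and selection are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "tag:TagResources",
        "tag:UntagResources",
        "ec2:AllocateAddress",
        "ec2:RunInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "*"
    }
  ]
}
```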

Example 1.30. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions

ec2:AssociateRouteTable

ec2:AttachInternetGateway

ec2:CreateDhcpOptions

ec2:CreateInternetGateway

ec2:CreateNatGateway

ec2:CreateRoute

ec2:CreateRouteTable

ec2:CreateSubnet

ec2:CreateVpc

ec2:CreateVpcEndpoint

ec2:ModifySubnetAttribute

ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.31. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags


elasticloadbalancing:ApplySecurityGroupsToLoadBalancer

elasticloadbalancing:AttachLoadBalancerToSubnets

elasticloadbalancing:ConfigureHealthCheck

elasticloadbalancing:CreateListener

elasticloadbalancing:CreateLoadBalancer

elasticloadbalancing:CreateLoadBalancerListeners

elasticloadbalancing:CreateTargetGroup

elasticloadbalancing:DeleteLoadBalancer

elasticloadbalancing:DeregisterInstancesFromLoadBalancer

elasticloadbalancing:DeregisterTargets

elasticloadbalancing:DescribeInstanceHealth

elasticloadbalancing:DescribeListeners

elasticloadbalancing:DescribeLoadBalancerAttributes

elasticloadbalancing:DescribeLoadBalancers

elasticloadbalancing:DescribeTags

elasticloadbalancing:DescribeTargetGroupAttributes

elasticloadbalancing:DescribeTargetHealth

elasticloadbalancing:ModifyLoadBalancerAttributes

elasticloadbalancing:ModifyTargetGroup

elasticloadbalancing:ModifyTargetGroupAttributes

elasticloadbalancing:RegisterInstancesWithLoadBalancer

elasticloadbalancing:RegisterTargets

elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.32. Required IAM permissions for installation

iam:AddRoleToInstanceProfile

iam:CreateInstanceProfile

iam:CreateRole

iam:DeleteInstanceProfile

iam:DeleteRole


iam:DeleteRolePolicy

iam:GetInstanceProfile

iam:GetRole

iam:GetRolePolicy

iam:GetUser

iam:ListInstanceProfilesForRole

iam:ListRoles

iam:ListUsers

iam:PassRole

iam:PutRolePolicy

iam:RemoveRoleFromInstanceProfile

iam:SimulatePrincipalPolicy

iam:TagRole

Example 1.33. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets

route53:ChangeTagsForResource

route53:CreateHostedZone

route53:DeleteHostedZone

route53:GetChange

route53:GetHostedZone

route53:ListHostedZones

route53:ListHostedZonesByName

route53:ListResourceRecordSets

route53:ListTagsForResource

route53:UpdateHostedZoneComment

Example 1.34. Required S3 permissions for installation

s3:CreateBucket

s3:DeleteBucket


s3:GetAccelerateConfiguration

s3:GetBucketAcl

s3:GetBucketCors

s3:GetBucketLocation

s3:GetBucketLogging

s3:GetBucketObjectLockConfiguration

s3:GetBucketReplication

s3:GetBucketRequestPayment

s3:GetBucketTagging

s3:GetBucketVersioning

s3:GetBucketWebsite

s3:GetEncryptionConfiguration

s3:GetLifecycleConfiguration

s3:GetReplicationConfiguration

s3:ListBucket

s3:PutBucketAcl

s3:PutBucketTagging

s3:PutEncryptionConfiguration

Example 1.35. S3 permissions that cluster Operators require

s3:DeleteObject

s3:GetObject

s3:GetObjectAcl

s3:GetObjectTagging

s3:GetObjectVersion

s3:PutObject

s3:PutObjectAcl

s3:PutObjectTagging

Example 1.36. Required permissions to delete base cluster resources


autoscaling:DescribeAutoScalingGroups

ec2:DeleteNetworkInterface

ec2:DeleteVolume

elasticloadbalancing:DeleteTargetGroup

elasticloadbalancing:DescribeTargetGroups

iam:DeleteAccessKey

iam:DeleteUser

iam:ListAttachedRolePolicies

iam:ListInstanceProfiles

iam:ListRolePolicies

iam:ListUserPolicies

s3:DeleteObject

s3:ListBucketVersions

tag:GetResources

Example 1.37. Required permissions to delete network resources

ec2:DeleteDhcpOptions

ec2:DeleteInternetGateway

ec2:DeleteNatGateway

ec2:DeleteRoute

ec2:DeleteRouteTable

ec2:DeleteSubnet

ec2:DeleteVpc

ec2:DeleteVpcEndpoints

ec2:DetachInternetGateway

ec2:DisassociateRouteTable

ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.


Example 1.38. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey

iam:CreateUser

iam:DeleteAccessKey

iam:DeleteUser

iam:DeleteUserPolicy

iam:GetUserPolicy

iam:ListAccessKeys

iam:PutUserPolicy

iam:TagUser

s3:PutBucketPublicAccessBlock

s3:GetBucketPublicAccessBlock

s3:PutLifecycleConfiguration

s3:HeadBucket

s3:ListBucketMultipartUploads

s3:AbortMultipartUpload

Example 1.39. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas

1.8.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.8.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.8.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage

$ mkdir $HOMEclusterconfig

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key ...
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...


device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory.

1 The storage device name of the disk that you want to partition.

2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign


1.8.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

$ openshift-install create install-config --dir=<installation_directory> 1


NOTE

The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network:

a. Update the pullSecret value to contain the authentication information for your registry:

For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.

b. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.

c. Add the image content resources:

Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network.

pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

imageContentSources:
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: registry.svc.ci.openshift.org/ocp/release


d. Optional: Set the publishing strategy to Internal:

By setting this option, you create an internal Ingress Controller and a private load balancer.

3. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.

1.8.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

publish: Internal

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3


1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.8.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----


IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

Example output

For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

$ openshift-install create manifests --dir=<installation_directory> 1

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml


b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

auth
  kubeadmin-password
  kubeconfig
bootstrap.ign
master.ign
metadata.json
worker.ign

187 Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify yourcluster in Amazon Web Services (AWS) The infrastructure name is also used to locate the appropriateAWS resources during an OpenShift Container Platform installation The provided CloudFormationtemplates contain references to this infrastructure name so you must extract it

Prerequisites

You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone:
    id: mycluster-100419-private-zone
  publicZone:
    id: example.openshift.com
status: {}

$ openshift-install create ignition-configs --dir=<installation_directory>

CHAPTER 1. INSTALLING ON AWS


You generated the Ignition config files for your cluster.

You installed the jq package.

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

The output of this command is your cluster name and a random string.
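If jq is not available, the same field can be read with a few lines of Python. This is a sketch, not part of the documented procedure; the sample string below mirrors the documented example output rather than a real metadata.json.

```python
import json

# Parse the same field that `jq -r .infraID` reads from metadata.json.
# The sample content is illustrative; a real file contains more keys.
sample_metadata = '{"clusterName": "openshift", "infraID": "openshift-vw9j6"}'
metadata = json.loads(sample_metadata)
infra_id = metadata["infraID"]
print(infra_id)  # openshift-vw9j6
```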

1.8.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

$ jq -r .infraID <installation_directory>/metadata.json

openshift-vw9j6

[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": "1" 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": "12" 6
  }
]

1 The CIDR block for the VPC.

2 Specify a CIDR block in the format x.x.x.x/16-24.

3 The number of availability zones to deploy the VPC in.

4 Specify an integer between 1 and 3.

5 The size of each subnet in each availability zone.

6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json

<name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

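The SubnetBits parameter counts host bits, so each subnet's prefix length is 32 minus SubnetBits, and the template carves six subnets out of the VPC CIDR with Fn::Cidr. A small sketch using the stdlib ipaddress module (values are the documented defaults, used illustratively) shows the arithmetic:

```python
import ipaddress

# Mirror of the template's !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]:
# SubnetBits is the number of host bits, so 5 -> /27 and 13 -> /19.
vpc_cidr = "10.0.0.0/16"   # VpcCidr default from the parameter file
subnet_bits = 12           # SubnetBits default, which yields /20 subnets

new_prefix = 32 - subnet_bits
subnets = list(ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix))[:6]
for subnet in subnets:
    print(subnet)  # 10.0.0.0/20, 10.0.16.0/20, ... 10.0.80.0/20
```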

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

4. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

VpcId
The ID of your VPC.

PublicSubnetIds
The IDs of the new public subnets.

PrivateSubnetIds
The IDs of the new private subnets.

1.8.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.40. CloudFormation template for the VPC

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId: !GetAtt
      - EIP
      - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId: !GetAtt
      - EIP2
      - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId: !GetAtt
      - EIP3
      - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [
        ",",
        [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
      ]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [
        ",",
        [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
      ]

1.8.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).


NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
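When the command is run with JSON output, Route 53 returns the zone ID with a "/hostedzone/" prefix, while the CloudFormation HostedZoneId parameter expects the bare ID. A short sketch (sample value only, taken from the documented example) shows the trim:

```python
# Route 53 returns "Id" values like "/hostedzone/Z21IXYZABCZ2A4";
# the HostedZoneId parameter wants only the final path segment.
zone_id_field = "/hostedzone/Z21IXYZABCZ2A4"
hosted_zone_id = zone_id_field.split("/")[-1]
print(hosted_zone_id)  # Z21IXYZABCZ2A4
```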

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain>

mycluster.example.com.	False	100
HOSTEDZONES	65F8F38E-2268-B835-E15C-AB55336FCBFA	/hostedzone/Z21IXYZABCZ2A4	mycluster.example.com.	10

[
  {
    "ParameterKey": "ClusterName", 1
    "ParameterValue": "mycluster" 2
  },
  {
    "ParameterKey": "InfrastructureName", 3
    "ParameterValue": "mycluster-<random_string>" 4
  },
  {
    "ParameterKey": "HostedZoneId", 5
    "ParameterValue": "<random_string>" 6
  },
  {
    "ParameterKey": "HostedZoneName", 7
    "ParameterValue": "example.com" 8
  },
  {
    "ParameterKey": "PublicSubnets", 9
    "ParameterValue": "subnet-<random_string>" 10
  },
  {
    "ParameterKey": "PrivateSubnets", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "VpcId", 13
    "ParameterValue": "vpc-<random_string>" 14
  }
]

1 A short, representative cluster name to use for host names and other identifying names.

2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

5 The Route 53 public zone ID to register the targets with.

6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

7 The Route 53 zone to register the targets with.

8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

9 The public subnets that you created for your VPC.

10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

11 The private subnets that you created for your VPC.

12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

13 The VPC that you created for the cluster.

14 Specify the VpcId value from the output of the CloudFormation template for the VPC.

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.
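Parameter files like the one above can also be generated programmatically, which avoids JSON syntax mistakes when you repeat the procedure. A sketch with placeholder values (the file name parameters.json and all values are illustrative, not mandated by the installer):

```python
import json

# Build the CloudFormation parameter file programmatically.
# Every value below is a placeholder following the documented formats.
params = [
    {"ParameterKey": "ClusterName", "ParameterValue": "mycluster"},
    {"ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-vw9j6"},
    {"ParameterKey": "HostedZoneId", "ParameterValue": "Z21IXYZABCZ2A4"},
    {"ParameterKey": "HostedZoneName", "ParameterValue": "example.com"},
]
with open("parameters.json", "w") as f:
    json.dump(params, f, indent=2)
print(len(params))  # 4
```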


IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

PrivateHostedZoneId

Hosted zone ID for the private DNS

ExternalApiLoadBalancerName

Full name of the external API load balancer

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183

$ aws cloudformation describe-stacks --stack-name <name>


InternalApiLoadBalancerName

Full name of the internal API load balancer

ApiServerDnsName

Full host name of the API server

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn

ARN of external API target group

InternalApiTargetGroupArn

ARN of internal API target group

InternalServiceTargetGroupArn

ARN of internal service target group
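The describe-stacks command returns these outputs as a JSON list of key/value objects. One way to turn them into a lookup table for the later templates, sketched with sample data (the values below are illustrative placeholders, not real stack outputs):

```python
import json

# Shape matches `aws cloudformation describe-stacks --output json`;
# the values are placeholders, not output from a live stack.
response = json.loads("""
{"Stacks": [{"StackStatus": "CREATE_COMPLETE",
             "Outputs": [
               {"OutputKey": "PrivateHostedZoneId",
                "OutputValue": "Z0000000EXAMPLE"},
               {"OutputKey": "ApiServerDnsName",
                "OutputValue": "api-int.mycluster.example.com"}]}]}
""")
stack = response["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["PrivateHostedZoneId"])  # Z0000000EXAMPLE
```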

1.8.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.41. CloudFormation template for the network and load balancers

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: ExternalApiTargetGroup
      LoadBalancerArn:
        Ref: ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalApiTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalServiceTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterTargetLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags",
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
              ]
            Resource: "*"

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterSubnetTagsLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup

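The template constrains ClusterName and InfrastructureName with the AllowedPattern ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$: the name must start with a letter, contain only alphanumerics and hyphens, and be at most 27 characters. A quick sketch lets you validate a candidate name before launching the stack (the sample names are illustrative):

```python
import re

# Same AllowedPattern that the template applies to ClusterName and
# InfrastructureName: letter first, then up to 26 alphanumerics/hyphens.
pattern = re.compile(r"^([a-zA-Z][a-zA-Z0-9\-]{0,26})$")

print(bool(pattern.match("mycluster-vw9j6")))  # True
print(bool(pattern.match("9cluster")))         # False: must start with a letter
```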

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

Additional resources

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.8.10. Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

Example CNAME record for the InternalApiServerRecord:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },
  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 The CIDR block for the VPC.

4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.

5 The private subnets that you created for your VPC.

6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

7 The VPC that you created for the cluster.

8 Specify the VpcId value from the output of the CloudFormation template for the VPC.

2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM

<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile

1.8.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.42. CloudFormation template for security objects

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"

      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

1.8.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.20. RHCOS AMIs

AWS zone AWS AMI

af-south-1 ami-09921c9c1c36e695c

ap-east-1 ami-01ee8446e9af6b197

ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa

eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1


1.8.12. Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE


If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.
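Several parameter values used later in this procedure, such as the infrastructure name, come from the metadata that the installation program wrote alongside the Ignition config files. As an illustrative sketch of recovering that value (a mocked metadata.json stands in for the real file in your installation directory, and the sample values are placeholders):

```shell
# Hypothetical sketch: the installation program records the infrastructure name
# in the infraID field of metadata.json in the installation directory.
# A mocked file stands in for the real one here.
cat > metadata.json <<'EOF'
{"clusterName": "mycluster", "infraID": "mycluster-vw9j6"}
EOF
# Extract the <cluster-name>-<random-string> value:
python3 -c 'import json; print(json.load(open("metadata.json"))["infraID"])'
```

The printed value is what the InfrastructureName parameters in the templates below expect.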

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
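As an illustration of the bucket-policy option mentioned in the note, a policy along these lines could restrict reads to a single principal. The account ID, user name, and statement ID below are hypothetical placeholders, not values generated by this installation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBootstrapIgnitionRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/openshift-iam-user" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<cluster-name>-infra/bootstrap.ign"
    }
  ]
}
```

A policy like this can be attached with aws s3api put-bucket-policy; the exact principal depends on how you provisioned your cluster's IAM users.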

a. Create the bucket:

$ aws s3 mb s3://<cluster-name>-infra 1

1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.


b. Upload the bootstrap.ign Ignition config file to the bucket:

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

$ aws s3 ls s3://<cluster-name>-infra/

Example output

2019-04-03 16:15:16     314878 bootstrap.ign

2. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },
  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },


  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4 Specify a valid AWS::EC2::Image::Id value.

5 CIDR block to allow SSH access to the bootstrap node.

6 Specify a CIDR block in the format x.x.x.x/16-24.

7 The public subnet that is associated with your VPC to launch the bootstrap node into.

8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9 The master security group ID (for registering temporary rules).

10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11 The VPC created resources will belong to.

12 Specify the VpcId value from the output of the CloudFormation template for the VPC.

13 Location to fetch bootstrap Ignition config file from.

14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15 Whether or not to register a network load balancer (NLB).


16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17 The ARN for NLB IP target registration lambda group.

18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19 The ARN for external API load balancer target group.

20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21 The ARN for internal API load balancer target group.

22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23 The ARN for internal service load balancer target group.

24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
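Before launching the stack, the parameters file can be sanity-checked locally. This is a minimal sketch, assuming a file named parameters.json with the ParameterKey/ParameterValue shape shown above; the two entries here are illustrative, not a complete parameter set:

```shell
# Illustrative check: confirm the parameters file is valid JSON and that every
# entry carries the ParameterKey/ParameterValue shape that create-stack expects.
cat > parameters.json <<'EOF'
[
  {"ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-vw9j6"},
  {"ParameterKey": "AutoRegisterELB", "ParameterValue": "yes"}
]
EOF
python3 - <<'EOF'
import json
params = json.load(open("parameters.json"))
assert all({"ParameterKey", "ParameterValue"} <= set(p) for p in params)
print(len(params), "parameters OK")
EOF
```

A malformed file fails here with a clear error instead of a rejected create-stack call.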


Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

BootstrapInstanceId

The bootstrap Instance ID.

BootstrapPublicIp

The bootstrap node public IP address.

BootstrapPrivateIp

The bootstrap node private IP address.

1.8.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.43. CloudFormation template for the bootstrap machine

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:

      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.8.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19


    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.

4 Specify an AWS::EC2::Image::Id value.

5 Whether or not to perform DNS etcd registration.


6

7

8

9

10

11 13 15

12 14 16

17

18

19

20

21

22

23

24

25

26

Specify yes or no If you specify yes you must provide hosted zone information

The Route 53 private zone ID to register the etcd targets with

Specify the PrivateHostedZoneId value from the output of the CloudFormation templatefor DNS and load balancing

The Route 53 zone to register the targets with

Specify ltcluster_namegtltdomain_namegt where ltdomain_namegt is the Route 53 basedomain that you used when you generated install-configyaml file for the cluster Do notinclude the trailing period () that is displayed in the AWS console

A subnet preferably private to launch the control plane machines on

Specify a subnet from the PrivateSubnets value from the output of theCloudFormation template for DNS and load balancing

The master security group ID to associate with master nodes

Specify the MasterSecurityGroupId value from the output of the CloudFormationtemplate for the security group and roles

The location to fetch control plane Ignition config file from

Specify the generated Ignition config file location httpsapi-intltcluster_namegtltdomain_namegt22623configmaster

The base64 encoded certificate authority string to use

Specify the value from the masterign file that is in the installation directory This value isthe long string with the format datatextplaincharset=utf-8base64ABChellip xYz==

The IAM profile to associate with master nodes

Specify the MasterInstanceProfile parameter value from the output of theCloudFormation template for the security group and roles

The type of AWS instance to use for the control plane machines

Allowed values

m4xlarge

m42xlarge

m44xlarge

m48xlarge

m410xlarge

m416xlarge

m5xlarge

m52xlarge

CHAPTER 1 INSTALLING ON AWS

319

27

28

29

30

31

32

33

34

35

36

m54xlarge

m58xlarge

m510xlarge

m516xlarge

c42xlarge

c44xlarge

c48xlarge

r4xlarge

r42xlarge

r44xlarge

r48xlarge

r416xlarge

IMPORTANT

If m4 instance types are not available in your region such as with eu-west-3 specify an m5 type such as m5xlarge instead

Whether or not to register a network load balancer (NLB)

Specify yes or no If you specify yes you must provide a Lambda Amazon Resource Name(ARN) value

The ARN for NLB IP target registration lambda group

Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormationtemplate for DNS and load balancing Use arnaws-us-gov if deploying the cluster to anAWS GovCloud region

The ARN for external API load balancer target group

Specify the ExternalApiTargetGroupArn value from the output of the CloudFormationtemplate for DNS and load balancing Use arnaws-us-gov if deploying the cluster to anAWS GovCloud region

The ARN for internal API load balancer target group

Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

The ARN for internal service load balancer target group

Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
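The certificate authority value described above is buried inside master.ign as a data: URL. The following is a minimal sketch of pulling it out with the Python standard library, assuming the usual Ignition v3 layout of the generated file; the function name and file path are illustrative, not part of the product tooling:

```python
import json

def certificate_authority_from_ignition(path):
    # The generated Ignition config stores the cluster CA bundle as a
    # data:text/plain;charset=utf-8;base64,... URL under this key path.
    with open(path) as f:
        config = json.load(f)
    return config["ignition"]["security"]["tls"]["certificateAuthorities"][0]["source"]
```

The returned string is what you paste as the CertificateAuthorities parameter value.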

OpenShift Container Platform 46 Installing on AWS


2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.
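For example, if you chose a type that is not already in the shipped list, the edit amounts to appending one entry; a sketch, where m5a.xlarge is purely illustrative:

```yaml
MasterInstanceType:
  Default: m5.xlarge
  Type: String
  AllowedValues:
  - "m4.xlarge"
  # ...existing entries unchanged...
  - "m5a.xlarge"  # illustrative addition for a type missing from the list
```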

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

1.8.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.44. CloudFormation template for control plane machines

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)


Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"


    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation


      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"


Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp


  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]]]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:


1.8.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

You created the control plane machines

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.

4 Specify an AWS::EC2::Image::Id value.

5 A subnet, preferably private, to launch the worker nodes on.

6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

7 The worker security group ID to associate with worker nodes.

8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

9 The location to fetch the bootstrap Ignition config file from.

10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
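Callout 2 asks for the infrastructure name from the Ignition config file metadata. The installation program writes that value to metadata.json in the installation directory under the infraID key; the following is a short sketch assuming that layout (the function name is illustrative):

```python
import json

def infrastructure_name(metadata_path):
    # metadata.json sits next to the generated Ignition files;
    # infraID has the form <cluster-name>-<random-string>.
    with open(metadata_path) as f:
        return json.load(f)["infraID"]
```

This is the same value that `jq -r .infraID metadata.json` would print.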

  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]


11 Base64 encoded certificate authority string to use.

12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==.

13 The IAM profile to associate with worker nodes.

14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

15 The type of AWS instance to use for the compute machines.

16 Allowed values:

m4.large

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.large

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.large

c4.xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.large

r4.xlarge

r4.2xlarge


r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

NOTE

The CloudFormation template creates a stack that represents one worker node

5. Confirm that the template components exist:

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

$ aws cloudformation describe-stacks --stack-name <name>


6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
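Because each stack creates exactly one worker, launching several workers means repeating the create-stack call with a fresh stack name. The following sketch renders the repeated commands for review before you run them; the stack names and file names are illustrative:

```python
def worker_stack_commands(count, template="worker.yaml", parameters="worker-params.json"):
    # One CloudFormation stack per worker node; only the stack name changes.
    return [
        "aws cloudformation create-stack"
        f" --stack-name cluster-worker-{i}"
        f" --template-body file://{template}"
        f" --parameters file://{parameters}"
        for i in range(1, count + 1)
    ]

for command in worker_stack_commands(2):
    print(command)
```

Printing the commands first makes it easy to confirm every worker gets a unique stack name before anything is created.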

1.8.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.45. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the worker nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The worker security group ID to associate with worker nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with worker nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"


    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"


1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp


You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

You created the control plane machines

You created the worker nodes

Procedure

1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

To view different installation details, specify warn, debug, or error instead of info.

Example output

If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

1.8.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.

$ ./openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
    --log-level=info 2

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s


The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster

You installed the oc CLI

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.8.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster

Procedure

1. Confirm that the cluster recognizes the machines:

Example output

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin

$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc get csr

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
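The Pending rows can also be picked out programmatically from `oc get csr -o json`, mirroring the `{{if not .status}}` go-template filter that this procedure uses for bulk approval. A sketch, not part of the official tooling:

```python
import json

def pending_csr_names(csr_list_json):
    # A CSR whose status is empty has not been signed yet; this is the
    # same condition as the `{{if not .status}}` go-template filter.
    items = json.loads(csr_list_json)["items"]
    return [csr["metadata"]["name"] for csr in items if not csr.get("status")]
```

The returned names are exactly the arguments you would pass to oc adm certificate approve.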

$ oc adm certificate approve <csr_name> 1


<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

NOTE

Some Operators might not become available until some CSRs are approved

4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

Example output

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

Example output

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

$ oc get csr

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

$ oc adm certificate approve <csr_name> 1

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

$ oc get nodes

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0


NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs see Certificate Signing Requests

1.8.18. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized

Procedure

1. Watch the cluster components come online:

Example output

master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

$ watch -n5 oc get clusteroperators

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0     True        False         False      3h56m
cloud-credential                           4.6.0     True        False         False      29h
cluster-autoscaler                         4.6.0     True        False         False      29h
config-operator                            4.6.0     True        False         False      6h39m
console                                    4.6.0     True        False         False      3h59m
csi-snapshot-controller                    4.6.0     True        False         False      4h12m
dns                                        4.6.0     True        False         False      4h15m
etcd                                       4.6.0     True        False         False      29h
image-registry                             4.6.0     True        False         False      3h59m
ingress                                    4.6.0     True        False         False      4h30m
insights                                   4.6.0     True        False         False      29h
kube-apiserver                             4.6.0     True        False         False      29h
kube-controller-manager                    4.6.0     True        False         False      29h
kube-scheduler                             4.6.0     True        False         False      29h
kube-storage-version-migrator              4.6.0     True        False         False      4h2m
machine-api                                4.6.0     True        False         False      29h
machine-approver                           4.6.0     True        False         False      6h34m
machine-config                             4.6.0     True        False         False      3h56m
marketplace                                4.6.0     True        False         False      4h2m
monitoring                                 4.6.0     True        False         False      6h31m
network                                    4.6.0     True        False         False      29h
node-tuning                                4.6.0     True        False         False      4h30m


2. Configure the Operators that are not available.

1.8.18.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
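That strategy is set on the same registry configuration resource edited later in this section; a minimal fragment showing only the relevant field, as a sketch:

```yaml
spec:
  rolloutStrategy: Recreate
```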

1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
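One way to express such a policy, shown as an illustrative sketch that you could apply with aws s3api put-bucket-lifecycle-configuration; the rule ID is arbitrary:

```json
{
  "Rules": [
    {
      "ID": "cleanup-incomplete-multipart-registry-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
```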

2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

Example configuration

openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m

$ oc edit configs.imageregistry.operator.openshift.io/cluster

storage:


WARNING

To secure your registry images in AWS block public access to the S3 bucket

181812 Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again.
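The wait-and-retry step can also be scripted. The following is a minimal sketch in Python; the helper name, retry count, and delay are illustrative choices, not part of the product tooling:

```python
import subprocess
import time

def run_until_success(cmd, retries=10, delay=60):
    """Run cmd repeatedly until it exits 0.

    Useful for commands such as the oc patch above, which fail with a
    NotFound error until the Image Registry Operator has initialized its
    components. Returns True on success, False if every retry fails.
    """
    for _ in range(retries):
        if subprocess.run(cmd).returncode == 0:
            return True
        time.sleep(delay)
    return False
```

For example, `run_until_success(["oc", "patch", "configs.imageregistry.operator.openshift.io", "cluster", "--type", "merge", "--patch", '{"spec":{"storage":{"emptyDir":{}}}}'])` retries the patch once per minute for up to ten minutes.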

1.8.19. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster

Procedure

1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Example output:

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

OpenShift Container Platform 46 Installing on AWS


Delete the stack by using the AWS CLI

<name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console

1.8.20. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc)

You installed the jq package

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1. Determine the routes to create.

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output

2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

$ aws cloudformation delete-stack --stack-name <name>

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
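If you script the specific-records variant, the fully qualified record names can be derived from the cluster name and base domain. A minimal sketch in Python; the helper name is illustrative, and the route list mirrors the example output above:

```python
# Default route host prefixes that an OpenShift Container Platform
# cluster exposes, as shown in the example output above.
DEFAULT_ROUTES = [
    "oauth-openshift",
    "console-openshift-console",
    "downloads-openshift-console",
    "alertmanager-main-openshift-monitoring",
    "grafana-openshift-monitoring",
    "prometheus-k8s-openshift-monitoring",
]

def ingress_record_names(cluster_name, domain_name, routes=DEFAULT_ROUTES):
    """Build the fully qualified DNS record name for each default route."""
    return [f"{route}.apps.{cluster_name}.{domain_name}" for route in routes]
```

For example, `ingress_record_names("mycluster", "example.com")` yields `oauth-openshift.apps.mycluster.example.com` and the five other route names.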

$ oc -n openshift-ingress get service router-default


Example output

3. Locate the hosted zone ID for the load balancer:

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID

4. Obtain the public hosted zone ID for your cluster's domain:

For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.
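Note that Route 53 list operations return the zone ID with a /hostedzone/ prefix, while the examples in this procedure use the bare ID (Z3URY6TWQ91KVV), so a script may need to strip the prefix. A small sketch of that normalization; the helper name is illustrative:

```python
def bare_zone_id(hosted_zone_id):
    """Strip the '/hostedzone/' prefix that Route 53 list operations return."""
    prefix = "/hostedzone/"
    if hosted_zone_id.startswith(prefix):
        return hosted_zone_id[len(prefix):]
    return hosted_zone_id
```

For example, `bare_zone_id("/hostedzone/Z3URY6TWQ91KVV")` returns `Z3URY6TWQ91KVV`.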

5. Add the alias records to your private zone:

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' --output text

/hostedzone/Z3URY6TWQ91KVV

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

For <public_hosted_zone_id>, specify the public hosted zone for your domain.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

1.8.21. Completing an AWS installation on user-provisioned infrastructure
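When creating these records from a script rather than interactively, the change batch can be built programmatically and serialized for the --change-batch flag. A sketch in Python; the function name is illustrative, and the structure follows the JSON used in this procedure:

```python
import json

def ingress_alias_change_batch(record_name, lb_hosted_zone_id, lb_dns_name):
    """Build a Route 53 change batch that creates an alias A record
    pointing record_name at the Ingress load balancer."""
    return {
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": lb_hosted_zone_id,
                        # Include the trailing period in the DNS name.
                        "DNSName": lb_dns_name,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    }

# Serialize for: aws route53 change-resource-record-sets --change-batch '...'
batch_json = json.dumps(
    ingress_alias_change_batch(
        "\\052.apps.mycluster.example.com",  # \052 is the octal escape for *
        "Z3AADJGX6KTTL2",
        "<external_ip>.",
    )
)
```

Passing `batch_json` to the AWS CLI reproduces the wildcard-record change shown above.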


After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI

Procedure

1. From the directory that contains the installation program, complete the cluster installation:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

2. Register your cluster on the Cluster registration page.

1.8.22. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

$ openshift-install --dir=<installation_directory> wait-for install-complete

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


You have access to the installation host

You completed a cluster installation and all cluster Operators are available

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.8.23. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.8.24. Next steps

Validate an installation

Customize your cluster

If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.

If necessary, you can opt out of remote health reporting.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None


1.9. UNINSTALLING A CLUSTER ON AWS

You can remove a cluster that you deployed to Amazon Web Services (AWS)

1.9.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud

Prerequisites

Have a copy of the installation program that you used to deploy the cluster

Have the files that the installation program generated when you created your cluster

Procedure

1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

To view different details, specify warn, debug, or error instead of info.

NOTE

You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.

$ openshift-install destroy cluster --dir=<installation_directory> --log-level=info


                                                                                                                                                                      • 1810 Creating security group and roles in AWS
                                                                                                                                                                        • 18101 CloudFormation template for security objects
                                                                                                                                                                          • 1811 RHCOS AMIs for the AWS infrastructure
                                                                                                                                                                          • 1812 Creating the bootstrap node in AWS
                                                                                                                                                                            • 18121 CloudFormation template for the bootstrap machine
                                                                                                                                                                              • 1813 Creating the control plane machines in AWS
                                                                                                                                                                                • 18131 CloudFormation template for control plane machines
                                                                                                                                                                                  • 1814 Creating the worker nodes in AWS
                                                                                                                                                                                    • 18141 CloudFormation template for worker machines
                                                                                                                                                                                      • 1815 Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
                                                                                                                                                                                      • 1816 Logging in to the cluster by using the CLI
                                                                                                                                                                                      • 1817 Approving the certificate signing requests for your machines
                                                                                                                                                                                      • 1818 Initial Operator configuration
                                                                                                                                                                                        • 18181 Image registry storage configuration
                                                                                                                                                                                          • 1819 Deleting the bootstrap resources
                                                                                                                                                                                          • 1820 Creating the Ingress DNS Records
                                                                                                                                                                                          • 1821 Completing an AWS installation on user-provisioned infrastructure
                                                                                                                                                                                          • 1822 Logging in to the cluster by using the web console
                                                                                                                                                                                          • 1823 Additional resources
                                                                                                                                                                                          • 1824 Next steps
                                                                                                                                                                                            • 19 UNINSTALLING A CLUSTER ON AWS
                                                                                                                                                                                              • 191 Removing a cluster that uses installer-provisioned infrastructure
Page 3: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6

Legal Notice

Copyright © 2021 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license (CC-BY-SA). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This document provides instructions for installing and uninstalling OpenShift Container Platform clusters on Amazon Web Services.

Table of Contents

CHAPTER 1. INSTALLING ON AWS
1.1. CONFIGURING AN AWS ACCOUNT
1.1.1. Configuring Route 53
1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53
1.1.2. AWS account limits
1.1.3. Required AWS permissions
1.1.4. Creating an IAM user
1.1.5. Supported AWS regions
1.1.6. Next steps
1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
1.2.1. Prerequisites
1.2.2. Internet and Telemetry access for OpenShift Container Platform
1.2.3. Generating an SSH private key and adding it to the agent
1.2.4. Obtaining the installation program
1.2.5. Creating the installation configuration file
1.2.5.1. Installation configuration parameters
1.2.5.2. Sample customized install-config.yaml file for AWS
1.2.6. Deploying the cluster
1.2.7. Installing the OpenShift CLI by downloading the binary
1.2.7.1. Installing the OpenShift CLI on Linux
1.2.7.2. Installing the OpenShift CLI on Windows
1.2.7.3. Installing the OpenShift CLI on macOS
1.2.8. Logging in to the cluster by using the CLI
1.2.9. Logging in to the cluster by using the web console
1.2.10. Next steps
1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
1.3.1. Prerequisites
1.3.2. Internet and Telemetry access for OpenShift Container Platform
1.3.3. Generating an SSH private key and adding it to the agent
1.3.4. Obtaining the installation program
1.3.5. Creating the installation configuration file
1.3.5.1. Installation configuration parameters
1.3.5.2. Network configuration parameters
1.3.5.3. Sample customized install-config.yaml file for AWS
1.3.6. Modifying advanced network configuration parameters
1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
1.3.8. Cluster Network Operator configuration
1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider
1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
1.3.8.3. Cluster Network Operator example configuration
1.3.9. Configuring hybrid networking with OVN-Kubernetes
1.3.10. Deploying the cluster
1.3.11. Installing the OpenShift CLI by downloading the binary
1.3.11.1. Installing the OpenShift CLI on Linux
1.3.11.2. Installing the OpenShift CLI on Windows
1.3.11.3. Installing the OpenShift CLI on macOS
1.3.12. Logging in to the cluster by using the CLI
1.3.13. Logging in to the cluster by using the web console
1.3.14. Next steps
1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
1.4.1. Prerequisites


1.4.2. About using a custom VPC
1.4.2.1. Requirements for using your VPC
1.4.2.2. VPC validation
1.4.2.3. Division of permissions
1.4.2.4. Isolation between clusters
1.4.3. Internet and Telemetry access for OpenShift Container Platform
1.4.4. Generating an SSH private key and adding it to the agent
1.4.5. Obtaining the installation program
1.4.6. Creating the installation configuration file
1.4.6.1. Installation configuration parameters
1.4.6.2. Sample customized install-config.yaml file for AWS
1.4.6.3. Configuring the cluster-wide proxy during installation
1.4.7. Deploying the cluster
1.4.8. Installing the OpenShift CLI by downloading the binary
1.4.8.1. Installing the OpenShift CLI on Linux
1.4.8.2. Installing the OpenShift CLI on Windows
1.4.8.3. Installing the OpenShift CLI on macOS
1.4.9. Logging in to the cluster by using the CLI
1.4.10. Logging in to the cluster by using the web console
1.4.11. Next steps
1.5. INSTALLING A PRIVATE CLUSTER ON AWS
1.5.1. Prerequisites
1.5.2. Private clusters
1.5.2.1. Private clusters in AWS
1.5.2.1.1. Limitations
1.5.3. About using a custom VPC
1.5.3.1. Requirements for using your VPC
1.5.3.2. VPC validation
1.5.3.3. Division of permissions
1.5.3.4. Isolation between clusters
1.5.4. Internet and Telemetry access for OpenShift Container Platform
1.5.5. Generating an SSH private key and adding it to the agent
1.5.6. Obtaining the installation program
1.5.7. Manually creating the installation configuration file
1.5.7.1. Installation configuration parameters
1.5.7.2. Sample customized install-config.yaml file for AWS
1.5.7.3. Configuring the cluster-wide proxy during installation
1.5.8. Deploying the cluster
1.5.9. Installing the OpenShift CLI by downloading the binary
1.5.9.1. Installing the OpenShift CLI on Linux
1.5.9.2. Installing the OpenShift CLI on Windows
1.5.9.3. Installing the OpenShift CLI on macOS
1.5.10. Logging in to the cluster by using the CLI
1.5.11. Logging in to the cluster by using the web console
1.5.12. Next steps
1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION
1.6.1. Prerequisites
1.6.2. AWS government regions
1.6.3. Private clusters
1.6.3.1. Private clusters in AWS
1.6.3.1.1. Limitations
1.6.4. About using a custom VPC
1.6.4.1. Requirements for using your VPC


1.6.4.2. VPC validation
1.6.4.3. Division of permissions
1.6.4.4. Isolation between clusters
1.6.5. Internet and Telemetry access for OpenShift Container Platform
1.6.6. Generating an SSH private key and adding it to the agent
1.6.7. Obtaining the installation program
1.6.8. Manually creating the installation configuration file
1.6.8.1. Installation configuration parameters
1.6.8.2. Sample customized install-config.yaml file for AWS
1.6.8.3. AWS regions without a published RHCOS AMI
1.6.8.4. Uploading a custom RHCOS AMI in AWS
1.6.8.5. Configuring the cluster-wide proxy during installation
1.6.9. Deploying the cluster
1.6.10. Installing the OpenShift CLI by downloading the binary
1.6.10.1. Installing the OpenShift CLI on Linux
1.6.10.2. Installing the OpenShift CLI on Windows
1.6.10.3. Installing the OpenShift CLI on macOS
1.6.11. Logging in to the cluster by using the CLI
1.6.12. Logging in to the cluster by using the web console
1.6.13. Next steps
1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES
1.7.1. Prerequisites
1.7.2. Internet and Telemetry access for OpenShift Container Platform
1.7.3. Required AWS infrastructure components
1.7.3.1. Cluster machines
1.7.3.2. Certificate signing requests management
1.7.3.3. Other infrastructure components
1.7.3.4. Required AWS permissions
1.7.4. Obtaining the installation program
1.7.5. Generating an SSH private key and adding it to the agent
1.7.6. Creating the installation files for AWS
1.7.6.1. Optional: Creating a separate /var partition
1.7.6.2. Creating the installation configuration file
1.7.6.3. Configuring the cluster-wide proxy during installation
1.7.6.4. Creating the Kubernetes manifest and Ignition config files
1.7.7. Extracting the infrastructure name
1.7.8. Creating a VPC in AWS
1.7.8.1. CloudFormation template for the VPC
1.7.9. Creating networking and load balancing components in AWS
1.7.9.1. CloudFormation template for the network and load balancers
1.7.10. Creating security group and roles in AWS
1.7.10.1. CloudFormation template for security objects
1.7.11. RHCOS AMIs for the AWS infrastructure
1.7.11.1. AWS regions without a published RHCOS AMI
1.7.11.2. Uploading a custom RHCOS AMI in AWS
1.7.12. Creating the bootstrap node in AWS
1.7.12.1. CloudFormation template for the bootstrap machine
1.7.13. Creating the control plane machines in AWS
1.7.13.1. CloudFormation template for control plane machines
1.7.14. Creating the worker nodes in AWS
1.7.14.1. CloudFormation template for worker machines
1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure


1.7.16. Installing the OpenShift CLI by downloading the binary
1.7.16.1. Installing the OpenShift CLI on Linux
1.7.16.2. Installing the OpenShift CLI on Windows
1.7.16.3. Installing the OpenShift CLI on macOS
1.7.17. Logging in to the cluster by using the CLI
1.7.18. Approving the certificate signing requests for your machines
1.7.19. Initial Operator configuration
1.7.19.1. Image registry storage configuration
1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
1.7.19.1.2. Configuring storage for the image registry in non-production clusters
1.7.20. Deleting the bootstrap resources
1.7.21. Creating the Ingress DNS Records
1.7.22. Completing an AWS installation on user-provisioned infrastructure
1.7.23. Logging in to the cluster by using the web console
1.7.24. Additional resources
1.7.25. Next steps
1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT
1.8.1. Prerequisites
1.8.2. About installations in restricted networks
1.8.2.1. Additional limits
1.8.3. Internet and Telemetry access for OpenShift Container Platform
1.8.4. Required AWS infrastructure components
1.8.4.1. Cluster machines
1.8.4.2. Certificate signing requests management
1.8.4.3. Other infrastructure components
1.8.4.4. Required AWS permissions
1.8.5. Generating an SSH private key and adding it to the agent
1.8.6. Creating the installation files for AWS
1.8.6.1. Optional: Creating a separate /var partition
1.8.6.2. Creating the installation configuration file
1.8.6.3. Configuring the cluster-wide proxy during installation
1.8.6.4. Creating the Kubernetes manifest and Ignition config files
1.8.7. Extracting the infrastructure name
1.8.8. Creating a VPC in AWS
1.8.8.1. CloudFormation template for the VPC
1.8.9. Creating networking and load balancing components in AWS
1.8.9.1. CloudFormation template for the network and load balancers
1.8.10. Creating security group and roles in AWS
1.8.10.1. CloudFormation template for security objects
1.8.11. RHCOS AMIs for the AWS infrastructure
1.8.12. Creating the bootstrap node in AWS
1.8.12.1. CloudFormation template for the bootstrap machine
1.8.13. Creating the control plane machines in AWS
1.8.13.1. CloudFormation template for control plane machines
1.8.14. Creating the worker nodes in AWS
1.8.14.1. CloudFormation template for worker machines
1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
1.8.16. Logging in to the cluster by using the CLI
1.8.17. Approving the certificate signing requests for your machines
1.8.18. Initial Operator configuration
1.8.18.1. Image registry storage configuration
1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
1.8.18.1.2. Configuring storage for the image registry in non-production clusters


1.8.19. Deleting the bootstrap resources
1.8.20. Creating the Ingress DNS Records
1.8.21. Completing an AWS installation on user-provisioned infrastructure
1.8.22. Logging in to the cluster by using the web console
1.8.23. Additional resources
1.8.24. Next steps
1.9. UNINSTALLING A CLUSTER ON AWS
1.9.1. Removing a cluster that uses installer-provisioned infrastructure


CHAPTER 1. INSTALLING ON AWS

1.1. CONFIGURING AN AWS ACCOUNT

Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.

1.1.1. Configuring Route 53

To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.

NOTE

If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.

2. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.

3. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

4. Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.

5. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.

6. If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high level procedure.
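Steps 4 and 5 can be scripted: the name servers to set at the registrar live in the DelegationSet of the `aws route53 get-hosted-zone` response. A minimal sketch of extracting them, with a sample response inlined instead of a live API call (the zone ID and name server values below are made-up sample data):

```python
import json

# Sample response in the shape that `aws route53 get-hosted-zone` returns.
# In practice this JSON would come from the AWS CLI or an SDK call; the
# zone ID and name servers here are illustrative placeholders.
sample_response = """
{
  "HostedZone": {
    "Id": "/hostedzone/Z3AADJGX6KTTL2",
    "Name": "clusters.openshiftcorp.com.",
    "Config": {"PrivateZone": false}
  },
  "DelegationSet": {
    "NameServers": [
      "ns-517.awsdns-00.net",
      "ns-1512.awsdns-61.org",
      "ns-428.awsdns-53.com",
      "ns-1569.awsdns-04.co.uk"
    ]
  }
}
"""

def authoritative_name_servers(response_json):
    """Extract the name servers that must be set at the domain registrar."""
    return json.loads(response_json)["DelegationSet"]["NameServers"]

print(authoritative_name_servers(sample_response))
```

These are the values to enter as the registrar's name server (or glue) records in step 5.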

1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53

If you install in either the Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses the us-gov-west-1 region for Route 53 and tagging API clients.

The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'.
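The fallback rule above can be summarized as a small sketch. This mirrors only the documented behavior; the function name and structure are illustrative and are not the Ingress Operator's actual code:

```python
# Tagging has no us-gov-east-1 endpoint, so the Ingress Operator falls back
# to the us-gov-west-1 tagging endpoint whenever a custom tagging endpoint
# mentioning us-gov-east-1 is configured.
GOVCLOUD_TAGGING_ENDPOINT = "https://tagging.us-gov-west-1.amazonaws.com"

def tagging_endpoint(custom_endpoint):
    """Return the tagging API endpoint the operator would use, given an
    optional custom endpoint from the install configuration."""
    if custom_endpoint and "us-gov-east-1" in custom_endpoint:
        return GOVCLOUD_TAGGING_ENDPOINT
    return custom_endpoint

print(tagging_endpoint("https://tagging.us-gov-east-1.amazonaws.com"))
```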


For more information on AWS GovCloud (US) endpoints, see the Service Endpoints in the AWS documentation about GovCloud (US).

IMPORTANT

Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region.

Example Route 53 configuration

platform:
  aws:
    region: us-gov-west-1
    serviceEndpoints:
    - name: ec2
      url: https://ec2.us-gov-west-1.amazonaws.com
    - name: elasticloadbalancing
      url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com
    - name: route53
      url: https://route53.us-gov.amazonaws.com 1
    - name: tagging
      url: https://tagging.us-gov-west-1.amazonaws.com 2

1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions.

2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region.

1.1.2. AWS account limits

The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.

The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.

Component | Number of clusters available by default | Default AWS limit | Description


Instance Limits | Varies | Varies | By default, each cluster creates the following instances:
- One bootstrap machine, which is removed after installation
- Three master nodes
- Three worker nodes
These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.
In most regions, the bootstrap and worker machines use m4.large instances and the master machines use m4.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead.

Elastic IPs (EIPs) | 0 to 1 | 5 EIPs per account | To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT gateway, and each NAT gateway requires a separate Elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit.
IMPORTANT: To use the us-east-1 region, you must increase the EIP limit for your account.

Virtual Private Clouds (VPCs) | 5 | 5 VPCs per region | Each cluster creates its own VPC.



Elastic Load Balancing (ELB/NLB) | 3 | 20 per region | By default, each cluster creates internal and external network load balancers for the master API server and a single classic Elastic Load Balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers.

NAT Gateways | 5 | 5 per availability zone | The cluster deploys one NAT gateway in each availability zone.

Elastic Network Interfaces (ENIs) | At least 12 | 350 per region | The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that zone uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region. Additional ENIs are created for additional machines and Elastic Load Balancers that are created by cluster usage and deployed workloads.

VPC Gateway | 20 | 20 per account | Each cluster creates a single VPC gateway for S3 access.

S3 buckets | 99 | 100 buckets per account | Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account.

Security Groups | 250 | 2,500 per account | Each cluster creates 10 distinct security groups.
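The per-availability-zone arithmetic in the table can be sanity-checked with a short sketch. The limits and per-cluster counts are taken from the table above; the helper functions are illustrative, not part of the installer:

```python
# Per-cluster resource needs that scale with availability zones (AZs),
# per the limits table: one NAT gateway and one EIP per AZ, and 21 base
# ENIs plus one ENI per AZ.
DEFAULT_EIP_LIMIT = 5    # EIPs per account
DEFAULT_ENI_LIMIT = 350  # ENIs per region

def eips_needed(availability_zones):
    # Each private subnet needs a NAT gateway; each NAT gateway needs
    # a separate Elastic IP.
    return availability_zones

def enis_needed(availability_zones):
    # Default installation: 21 ENIs plus one per availability zone.
    return 21 + availability_zones

# us-east-1 has six availability zones: 27 ENIs, as the table notes, and
# 6 EIPs, which is why the default EIP limit of 5 must be raised there.
for azs in (3, 6):
    print(azs, eips_needed(azs), enis_needed(azs),
          eips_needed(azs) <= DEFAULT_EIP_LIMIT)
```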


1.1.3. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:

Example 1.1. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources


ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables


ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.2. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint


ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.3. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets


elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.4. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.5. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones


route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.6. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.7. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject


s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.8. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.9. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.10. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.11. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas


1.1.4. Creating an IAM user

Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly privileged account, and it is recommended to use it for only initial account and billing configuration, creating an initial set of users, and securing the account.

Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:

Procedure

1. Specify the IAM user name and select Programmatic access.

2. Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

NOTE

While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.

3. Optional: Add metadata to the user by attaching tags.

4. Confirm that the user name that you specified is granted the AdministratorAccess policy.

5. Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.

IMPORTANT

You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
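Key-based, long-lived credentials are typically stored in the AWS shared credentials file. A minimal sketch of what that file looks like, assuming the default profile; the key values are the example keys from the AWS documentation, not real credentials:

```ini
# ~/.aws/credentials — key-based, long-lived credentials (no session token)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXzUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

The installation program can read this profile, or you can enter the same two values when the installation program prompts for them.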

Additional resources

See Manually creating IAM for AWS for steps to set the Cloud Credential Operator (CCO) to manual mode prior to installation. Use this mode in environments where the cloud identity and access management (IAM) APIs are not reachable, or if you prefer not to store an administrator-level credential secret in the cluster kube-system project.

1.1.5. Supported AWS regions

You can deploy an OpenShift Container Platform cluster to the following public regions:


af-south-1 (Cape Town)

ap-east-1 (Hong Kong)

ap-northeast-1 (Tokyo)

ap-northeast-2 (Seoul)

ap-south-1 (Mumbai)

ap-southeast-1 (Singapore)

ap-southeast-2 (Sydney)

ca-central-1 (Central)

eu-central-1 (Frankfurt)

eu-north-1 (Stockholm)

eu-south-1 (Milan)

eu-west-1 (Ireland)

eu-west-2 (London)

eu-west-3 (Paris)

me-south-1 (Bahrain)

sa-east-1 (São Paulo)

us-east-1 (N. Virginia)

us-east-2 (Ohio)

us-west-1 (N. California)

us-west-2 (Oregon)

The following AWS GovCloud regions are supported:

us-gov-west-1

us-gov-east-1

1.1.6. Next steps

Install an OpenShift Container Platform cluster

Quickly install a cluster with default options on installer-provisioned infrastructure

Install a cluster with cloud customizations on installer-provisioned infrastructure

Install a cluster with network customizations on installer-provisioned infrastructure

Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates


1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.2.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.2.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.


IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.2.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)


3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.2.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.2.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

$ tar xvf openshift-install-linux.tar.gz

$ openshift-install create install-config --dir=<installation_directory> 1


iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.2.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.1. Required parameters

Parameter Description Values

apiVersion The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String


baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.


{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"you@example.com"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"you@example.com"
    }
  }
}
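The relationship between baseDomain and metadata.name described in the table can be sketched in shell; the values example.com and dev are assumed illustration values, matching the examples in the table:

```shell
# The full cluster DNS name combines the cluster name and the base domain.
baseDomain="example.com"
clusterName="dev"
clusterDomain="${clusterName}.${baseDomain}"
echo "${clusterDomain}"   # dev.example.com
```

All DNS records for the cluster are then created as subdomains of this combined name.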


Table 1.2. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects. For details, see the following Machine-pool table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name Required if you use compute. The name of the machine pool.

worker

compute.platform Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking The configuration for the pod network provider in the cluster.

Object


networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects

networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines. Array of objects.

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String


networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
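The clusterNetwork defaults described in this table imply some simple arithmetic: each node receives a /23 slice of the /14 pod address pool. A shell sketch of that calculation, using the documented default prefix lengths:

```shell
# addresses per node = 2^(32 - hostPrefix); node slots = 2^(hostPrefix - cluster prefix)
hostPrefix=23
clusterPrefix=14
addrsPerNode=$(( 1 << (32 - hostPrefix) ))
maxNodes=$(( 1 << (hostPrefix - clusterPrefix) ))
echo "addresses per node: ${addrsPerNode}"   # 512
echo "node /23 slots:     ${maxNodes}"       # 512
```

The same arithmetic applies to any hostPrefix you choose, so you can check that the pool supports your planned node count before installation.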


Table 1.3. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.


compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The type of the root volume.

Valid AWS EBS volume type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.


platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongsto the set AWS region

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol, and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.
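For the subnets requirement just described — every supplied subnet must fall inside a machineNetwork[].cidr range — a quick containment check can be scripted with Python's standard ipaddress module. Both CIDR values here are assumptions for illustration:

```shell
# Verify that a candidate subnet CIDR is contained in the machine network CIDR.
python3 - <<'EOF'
import ipaddress
machine_net = ipaddress.ip_network("10.0.0.0/16")   # machineNetwork[].cidr
subnet = ipaddress.ip_network("10.0.32.0/20")       # subnet you plan to supply
print(subnet.subnet_of(machine_net))  # True
EOF
```

Running this check against each subnet ID's CIDR before installation catches a mismatched VPC layout early.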


1.2.5.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.


IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13


1 10 11 14 Required. The installation program prompts you for this value.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol, and the host must trust the certificate.

15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16 You can optionally provide the sshKey value that you use to access the machines in your cluster.


    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16


NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.2.6. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

For <installation_directory>, specify the location of your customized install-config.yaml file.

To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com


NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.
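Step 2 can be performed with the AWS CLI. This sketch only assembles and prints the command rather than executing it; <user-name> is a placeholder for the IAM user you used for installation, and actually running the printed command requires AWS CLI credentials with iam:DetachUserPolicy permission:

```shell
# Build the detach command for the AdministratorAccess managed policy
# (printed for review, not executed here)
policy_arn="arn:aws:iam::aws:policy/AdministratorAccess"
cmd="aws iam detach-user-policy --user-name <user-name> --policy-arn $policy_arn"
echo "$cmd"
```

Substitute the real IAM user name and run the printed command to revoke the elevated permissions after the cluster is installed.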

1.2.7. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.2.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.


OpenShift Container Platform 4.6 Installing on AWS


3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive.

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command.

1.2.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command.

1.2.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


1

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command.
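The PATH check in step 5 of each procedure can be made explicit. A small sketch, using an example PATH value and /usr/local/bin as the directory where you placed the oc binary (substitute your own directory and your real $PATH):

```shell
# Check whether a given directory appears in a PATH-style list
dir="/usr/local/bin"
example_path="/usr/local/bin:/usr/bin:/bin"   # stand-in for your real $PATH
case ":$example_path:" in
  *":$dir:"*) result="on PATH" ;;
  *)          result="not on PATH" ;;
esac
echo "$result"
```

To test your real environment, replace example_path with "$PATH"; if the directory is missing, append it with export PATH="$PATH:$dir".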

1.2.8. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.2.9. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin


Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.2.10. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
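The console URL can also be extracted from the route listing programmatically. This sketch works against a canned copy of the route line, so it runs without a cluster; on a live cluster the line would come from the oc get routes command shown above:

```shell
# Canned route line as printed by 'oc get routes -n openshift-console'
route_line='console   console-openshift-console.apps.mycluster.example.com   console   https   reencrypt/Redirect   None'

# The host is the second column; prefix it with the scheme to get the URL
console_url="https://$(printf '%s\n' "$route_line" | awk '{print $2}')"
echo "$console_url"
```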


1.3.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.3.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources


See About remote health monitoring for more information about the Telemetry service.

1.3.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.3.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
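Because the installation program consumes the pull secret as JSON, it can be worth sanity-checking the downloaded file before pasting it. A sketch, using a stand-in secret written to a local file (the auth and email values here are fake placeholders, not a real secret):

```shell
# Write a stand-in pull secret and verify that it parses as JSON
printf '%s' '{"auths":{"quay.io":{"auth":"b3Blb=","email":"you@example.com"}}}' > pull-secret.txt

if python3 -m json.tool < pull-secret.txt > /dev/null 2>&1; then
  status="valid JSON"
else
  status="NOT valid JSON"
fi
echo "pull secret is $status"
```

Run the same check against the real pull-secret.txt you downloaded; a truncated paste is the most common cause of an invalid secret.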

1.3.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

$ tar xvf openshift-install-linux.tar.gz



Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

$ openshift-install create install-config --dir=<installation_directory> 1


3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
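Because the file is consumed, a simple copy before running create cluster preserves it for reuse. The directory name and file contents below are illustrative stand-ins so the sketch runs anywhere:

```shell
# Simulate backing up install-config.yaml before the installer consumes it
install_dir="demo-install-dir"
mkdir -p "$install_dir"
printf 'apiVersion: v1\nbaseDomain: example.com\n' > "$install_dir/install-config.yaml"

# The actual backup step: copy the file somewhere outside the installation directory
cp "$install_dir/install-config.yaml" install-config.yaml.bak
echo "backup created"
```

With a real installation directory, the cp line is the only step you need; restore the backup into a fresh, empty directory for each new cluster.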

1.3.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.4. Required parameters

Parameter Description Values

apiVersion: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.


metadata: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.


Table 1.5. Optional parameters

Parameter Description Values

additionalTrustBundle: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

The pullSecret value in the previous table takes the following form:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


compute: The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following Machine-pool table.

compute.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name: Required if you use compute. The name of the machine pool.

worker

compute.platform: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas: The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane: The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.



controlPlane.architecture: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name: Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas: The number of control plane machines to provision.

The only supported value is 3, which is the default value.



credentialsMode: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources: Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors: Specify one or more repositories that may also contain the same images.

Array of strings

networking: The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects



networking.clusterNetwork.cidr: Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork: The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType: The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork: The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.



publish: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey: The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>


Table 1.6. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID: The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops: The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size: The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type: The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type: The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones: The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region: The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID: The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type: The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones: The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region: The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID: The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name: The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.



platform.aws.serviceEndpoints.url: The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags: A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets: If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.3.5.2. Network configuration parameters

You can modify your cluster network configuration parameters in the install-config.yaml configuration file. The following table describes the parameters.

NOTE

You cannot modify these parameters in the install-config.yaml file after installation.

Table 1.7. Required network parameters

Parameter Description Value

networking.networkType: The default Container Network Interface (CNI) network provider plug-in to deploy.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.


networking.clusterNetwork[].cidr: A block of IP addresses from which pod IP addresses are allocated. The OpenShiftSDN network plug-in supports multiple cluster networks. The address blocks for multiple cluster networks must not overlap. Select address pools large enough to fit your anticipated workload.

An IP address allocation in CIDR format. The default value is 10.128.0.0/14.

networking.clusterNetwork[].hostPrefix: The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, allowing for 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix. The default value is 23.

networking.serviceNetwork[]: A block of IP addresses for services. OpenShiftSDN allows only one serviceNetwork block. The address block must not overlap with any other network block.

An IP address allocation in CIDR format. The default value is 172.30.0.0/16.

networking.machineNetwork[].cidr: A block of IP addresses assigned to nodes created by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. Multiple CIDR ranges may be specified.

An IP address allocation in CIDR format. The default value is 10.0.0.0/16.

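The hostPrefix arithmetic in the table above can be checked directly with shell arithmetic: a hostPrefix of 23 leaves 2^(32-23) - 2 = 510 usable pod addresses per node, and the default 10.128.0.0/14 clusterNetwork holds 2^(23-14) = 512 such /23 node subnets:

```shell
# Usable pod IPs per node for hostPrefix=23
# (the network and broadcast addresses of each /23 are excluded)
pods_per_node=$(( (1 << (32 - 23)) - 2 ))

# Number of /23 node subnets available in the default 10.128.0.0/14 cluster network
node_subnets=$(( 1 << (23 - 14) ))

echo "$pods_per_node pod IPs per node, $node_subnets node subnets"
```

The same two expressions, with your own prefix lengths substituted, show whether a planned clusterNetwork is large enough for the node count and pods-per-node density you expect.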

1.3.5.3. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 12
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17

1 10 12 15: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 11: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.


5 8

6 9

13

14

16

17

control plane pool is used

Whether to enable or disable simultaneous multithreading or hyperthreading By defaultsimultaneous multithreading is enabled to increase the performance of your machines cores Youcan disable it by setting the parameter value to Disabled If you disable simultaneousmultithreading in some cluster machines you must disable it in all cluster machines

IMPORTANT

If you disable simultaneous multithreading ensure that your capacity planningaccounts for the dramatically decreased machine performance Use larger instancetypes such as m42xlarge or m52xlarge for your machines if you disablesimultaneous multithreading

To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
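For example, turning FIPS mode on is a single top-level key in install-config.yaml (a minimal sketch; the sample file in this section shows the default, false):

```yaml
# Sketch: enable the FIPS-validated cryptography modules provided with RHCOS
fips: true
```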

You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.3.6. Modifying advanced network configuration parameters

You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings, and by specifying a different mode for the openshiftSDNConfig parameter.

IMPORTANT

Modifying the OpenShift Container Platform manifest files directly is not supported.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure


1. Change to the directory that contains the installation program and create the manifests:

$ openshift-install create manifests --dir=<installation_directory> 1

For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

$ ls <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

3. Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec: 1
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789

The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the CR.

The CNO provides default values for the parameters in the CR, so you must specify only the parameters that you want to change.

4. Save the cluster-network-03-config.yml file and quit the text editor.

5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.

1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster

You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

Create an Ingress Controller backed by an AWS NLB on a new cluster:

1. Change to the directory that contains the installation program and create the manifests:

$ openshift-install create manifests --dir=<installation_directory> 1

For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1

For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

Example output

cluster-ingress-default-ingresscontroller.yaml

3. Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a CR that describes the Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService

4. Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.

5. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.

1.3.8. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a CR object that is named cluster. The CR specifies the parameters for the Network API in the operator.openshift.io API group.

You can specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameter values for the defaultNetwork parameter in the CNO CR. The following CR displays the default configuration for the CNO and explains both the parameters you can configure and the valid parameter values.

Cluster Network Operator CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork: 1
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork: 2
  - 172.30.0.0/16
  defaultNetwork: 3
  kubeProxyConfig: 4
    iptablesSyncPeriod: 30s 5
    proxyArguments:
      iptables-min-sync-period: 6
      - 0s

Specified in the install-config.yaml file.

Configures the default Container Network Interface (CNI) network provider for the cluster network.


The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Cluster Network Operator applies the displayed default parameter values. If you are using the OVN-Kubernetes default CNI network provider, the kube-proxy configuration has no effect.

The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

NOTE

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package.
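Putting the two kube-proxy parameters together, an override in the CNO CR might look like this sketch (the 1m and 30s durations are illustrative values, not recommendations):

```yaml
# Sketch: slow down iptables refreshes and rate-limit forced syncs
kubeProxyConfig:
  iptablesSyncPeriod: 1m
  proxyArguments:
    iptables-min-sync-period:
    - 30s
```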

1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider

The following YAML object describes the configuration parameters for the OpenShift SDN default Container Network Interface (CNI) network provider.

Specified in the install-config.yaml file.

Specify only if you want to override part of the OpenShift SDN configuration.

Configures the network isolation mode for OpenShift SDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.

The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.
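The 50-byte VXLAN overhead rule can be checked with simple shell arithmetic (a sketch; the node MTU values are the ones from the example above):

```shell
# Find the lowest node MTU, then subtract the 50-byte VXLAN overhead
lowest=$(printf '%s\n' 9001 1500 | sort -n | head -n 1)
echo $((lowest - 50))   # prints 1450
```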

The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.

defaultNetwork:
  type: OpenShiftSDN 1
  openshiftSDNConfig: 2
    mode: NetworkPolicy 3
    mtu: 1450 4
    vxlanPort: 4789 5


On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
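As a sketch, selecting an alternate port in that range would look like the following openshiftSDNConfig fragment (9789 is an arbitrary example value):

```yaml
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    vxlanPort: 9789
```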

1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider

The following YAML object describes the configuration parameters for the OVN-Kubernetes default CNI network provider.

Specified in the install-config.yaml file.

Specify only if you want to override part of the OVN-Kubernetes configuration.

The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

The UDP port for the Geneve overlay network.

defaultNetwork:
  type: OVNKubernetes 1
  ovnKubernetesConfig: 2
    mtu: 1400 3
    genevePort: 6081 4

1.3.8.3. Cluster Network Operator example configuration

A complete CR object for the CNO is displayed in the following example.

Cluster Network Operator example CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450


1.3.9. Configuring hybrid networking with OVN-Kubernetes

You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.

IMPORTANT

You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
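In install-config.yaml, that prerequisite corresponds to the networking.networkType key, as in this minimal fragment:

```yaml
networking:
  networkType: OVNKubernetes
```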

Procedure

1. Create the manifests from the directory that contains the installation program:

For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

Example output

      vxlanPort: 4789
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s

$ openshift-install create manifests --dir=<installation_directory> 1

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

$ ls -1 <installation_directory>/manifests/cluster-network-*

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml


3. Open the cluster-network-03-config.yml file and configure OVN-Kubernetes with hybrid networking. For example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec: 1
  clusterNetwork: 2
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes 3
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 4
        - cidr: 10.132.0.0/14
          hostPrefix: 23
status: {}

The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the custom resource.

Specify the CIDR configuration used when adding nodes.

Specify OVNKubernetes as the Container Network Interface (CNI) cluster network provider.

Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.

4. Optional: Back up the <installation_directory>/manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.

1.3.10. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.


Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

For <installation_directory>, specify the location of your customized install-config.yaml file.

To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.


IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.3.11. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.11.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.3.11.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.3.11.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>
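As a sketch, you can also check for a specific directory in PATH rather than reading the whole variable (the directory names below are illustrative, not required install locations):

```shell
# Return success if the given directory appears as a PATH component
on_path() {
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

PATH="/usr/local/bin:/home/user/bin"
on_path /home/user/bin && echo "on PATH"       # prints "on PATH"
on_path /opt/oc       || echo "not on PATH"    # prints "not on PATH"
```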


1.3.12. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.3.13. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password


NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.3.14. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC

In OpenShift Container Platform version 4.6, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.4.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.


IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.4.2. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.4.2.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.


The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS types:
  AWS::EC2::VPC
  AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS types:
  AWS::EC2::Subnet
  AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS types:
  AWS::EC2::InternetGateway
  AWS::EC2::VPCGatewayAttachment
  AWS::EC2::RouteTable
  AWS::EC2::Route
  AWS::EC2::SubnetRouteTableAssociation
  AWS::EC2::NatGateway
  AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS types:
  AWS::EC2::NetworkAcl
  AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

Port          Reason
80            Inbound HTTP traffic
443           Inbound HTTPS traffic
22            Inbound SSH traffic
1024 - 65535  Inbound ephemeral traffic
0 - 65535     Outbound ephemeral traffic

Component: Private subnets
AWS types:
  AWS::EC2::Subnet
  AWS::EC2::RouteTable
  AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.
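As a hedged sketch, one of the inbound rules from the Network access control component could be expressed in a CloudFormation template like this (the logical names and the VpcId parameter are illustrative and are not taken from the provided templates):

```yaml
# Sketch only: a network ACL plus the port 80 inbound entry from the table above
PublicNetworkAcl:
  Type: AWS::EC2::NetworkAcl
  Properties:
    VpcId: !Ref VpcId            # assumed template parameter
InboundHttpEntry:
  Type: AWS::EC2::NetworkAclEntry
  Properties:
    NetworkAclId: !Ref PublicNetworkAcl
    RuleNumber: 100
    Protocol: 6                  # TCP
    RuleAction: allow
    Egress: false
    CidrBlock: 0.0.0.0/0
    PortRange:
      From: 80
      To: 80
```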

1.4.2.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:


All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
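The per-availability-zone rules above can be sanity-checked with a small shell sketch (the subnet list is illustrative data, not real AWS output):

```shell
# Public cluster case: each AZ must have exactly one public and one private subnet
subnets="us-west-2a:public
us-west-2a:private
us-west-2b:public
us-west-2b:private"

for az in us-west-2a us-west-2b; do
  pub=$(printf '%s\n' "$subnets" | grep -c "^$az:public$")
  priv=$(printf '%s\n' "$subnets" | grep -c "^$az:private$")
  if [ "$pub" -eq 1 ] && [ "$priv" -eq 1 ]; then
    echo "$az: ok"
  else
    echo "$az: check subnets"
  fi
done
```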

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.4.2.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.4.2.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.4.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.4.4. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment you require disaster recovery and debugging

You can use this key to SSH into the master nodes as the user core When you deploy the cluster thekey is added to the core userrsquos ~sshauthorized_keys list

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

OpenShift Container Platform 4.6 Installing on AWS

1

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
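The key-generation step above can be exercised end to end in a scratch directory. This is a sketch only: the demo_key file name is illustrative, and for a real installation you would use a path such as ~/.ssh/id_rsa as described in the procedure.

```shell
# Sketch: generate a password-less ed25519 key pair in the current
# directory, then print the public key's fingerprint to confirm it.
# "demo_key" is an illustrative name, not one the installer requires.
ssh-keygen -t ed25519 -N '' -f ./demo_key -q
ssh-keygen -l -f ./demo_key.pub
```

The public half (demo_key.pub here) is what you later provide to the installation program as the SSH public key.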

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.4.5. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.


$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


1

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
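Because the pull secret is a single JSON object, a quick local check can catch copy-paste damage before the installer prompts for it. This is a sketch, not part of the documented procedure; the pull-secret.txt file name and the inline sample value are illustrative stand-ins for the secret you download.

```shell
# Sketch: write a stand-in pull secret (the real one comes from the
# Pull Secret page) and verify that it parses as JSON with an "auths" key.
cat > pull-secret.txt <<'EOF'
{"auths":{"quay.io":{"auth":"b3Blb=","email":"you@example.com"}}}
EOF
python3 -c 'import json; d = json.load(open("pull-secret.txt")); assert "auths" in d; print("pull secret parses as JSON")'
```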

1.4.6. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.


$ tar xvf openshift-install-linux.tar.gz

$ openshift-install create install-config --dir=<installation_directory> 1


IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
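A minimal sketch of that backup step, using illustrative names (the demo_install_dir directory and the .bak suffix are assumptions; any safe location outside the installation directory works):

```shell
# Sketch: create a placeholder config and copy it aside before running
# the installer, which consumes install-config.yaml from this directory.
mkdir -p demo_install_dir
printf 'apiVersion: v1\nbaseDomain: example.com\n' > demo_install_dir/install-config.yaml
cp demo_install_dir/install-config.yaml demo_install_dir/install-config.yaml.bak
```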

1.4.6.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.


NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.8. Required parameters

Parameter Description Values

apiVersion The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.


platform The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.


Table 1.9. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute The configuration for the machines that comprise the compute nodes.

Array of MachinePool objects. For details, see the following Machine-pool table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name Required if you use compute. The name of the machine pool.

worker

compute.platform Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String


controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>


Table 1.10. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.4.6.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b


1 10 11 15

2

Required. The installation program prompts you for this value.

Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17


3 7

4

5 8

6 9

12

13

14

16

17

If you do not provide these parameters and values, the installation program provides the default value.

The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.4.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites


1

2

3

4

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----


1

2

proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.4.7. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

For <installation_directory>, specify the location of your customized install-config.yaml file.

To view different installation details, specify warn, debug, or error instead of info.


$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2


NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.4.8. Installing the OpenShift CLI by downloading the binary


INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s


You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.4.8.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:
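Step 5 can be sketched as follows. The placeholder file and the ~/bin location are assumptions for illustration; any directory already on your PATH works, and in a real installation the oc binary comes from the unpacked archive.

```shell
# Sketch: put an executable named "oc" on the PATH. A placeholder file
# stands in here for the real binary unpacked from the downloaded archive.
mkdir -p "$HOME/bin"
touch oc && chmod +x oc
cp oc "$HOME/bin/oc"
export PATH="$HOME/bin:$PATH"
command -v oc
```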

1.4.8.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path


1

After you install the CLI, it is available using the oc command:

1.4.8.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.4.9. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

C:\> oc <command>

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1


Example output

1.4.10. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host

You completed a cluster installation and all cluster Operators are available

Procedure

1 Obtain the password for the kubeadmin user from the kubeadmin-password file on theinstallation host

NOTE

Alternatively you can obtain the kubeadmin password from the ltinstallation_directorygtopenshift_installlog log file on the installation host

2 List the OpenShift Container Platform web console route

NOTE

Alternatively you can obtain the OpenShift Container Platform route from the ltinstallation_directorygtopenshift_installlog log file on the installation host

Example output

3 Navigate to the route detailed in the output of the preceding command in a web browser andlog in as the kubeadmin user

Additional resources

See Accessing the web console for more details about accessing and understanding theOpenShift Container Platform web console

$ oc whoami

systemadmin

$ cat ltinstallation_directorygtauthkubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-consoleappsltcluster_namegtltbase_domaingt console https reencryptRedirect None

OpenShift Container Platform 46 Installing on AWS

92
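The console route shown in the example output follows a fixed pattern built from the cluster name and base domain. A minimal sketch of that pattern (the cluster values are illustrative):

```python
def console_route(cluster_name: str, base_domain: str) -> str:
    # Pattern from the example output:
    # console-openshift-console.apps.<cluster_name>.<base_domain>
    return f"console-openshift-console.apps.{cluster_name}.{base_domain}"
```

For example, a cluster named test-cluster under example.com exposes its console at console-openshift-console.apps.test-cluster.example.com.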

1.4.11. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.5. INSTALLING A PRIVATE CLUSTER ON AWS

In OpenShift Container Platform version 4.6, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.5.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.5.2. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.5.2.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the Internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.

1.5.2.1.1. Limitations

The ability to add public functionality to a private cluster is limited:

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the Internet on 6443 (Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
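The shared tag described above can be built programmatically before passing it to a tagging call such as aws ec2 create-tags. A minimal sketch, assuming a hypothetical infrastructure ID:

```python
def shared_subnet_tag(infra_id: str) -> dict:
    # Tag that marks a public subnet as usable by AWS for public load
    # balancers; the infrastructure ID value here is hypothetical.
    return {"Key": f"kubernetes.io/cluster/{infra_id}", "Value": "shared"}
```

The resulting dictionary matches the Key/Value shape that the EC2 tagging APIs expect.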

1.5.3. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.5.3.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways


NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
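The endpoint names above follow a single template per service and region. A small sketch of that naming pattern (the region value is illustrative):

```python
def disconnected_vpc_endpoints(region: str) -> list:
    # DNS names of the three VPC endpoints listed above for one region.
    services = ("ec2", "elasticloadbalancing", "s3")
    return [f"{service}.{region}.amazonaws.com" for service in services]
```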

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
  80: Inbound HTTP traffic
  443: Inbound HTTPS traffic
  22: Inbound SSH traffic
  1024 - 65535: Inbound ephemeral traffic
  0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.5.3.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
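The subnet-CIDR check described above can be reproduced locally before you run the installer. A minimal sketch using Python's ipaddress module (the CIDR values are illustrative):

```python
import ipaddress

def subnets_within_machine_cidr(machine_cidr: str, subnet_cidrs) -> bool:
    # True only when every provided subnet CIDR falls inside the machine
    # CIDR, mirroring one of the installer's validation checks above.
    machine_net = ipaddress.ip_network(machine_cidr)
    return all(
        ipaddress.ip_network(cidr).subnet_of(machine_net)
        for cidr in subnet_cidrs
    )
```

For example, subnets carved from 10.0.0.0/16 pass the check, while a 192.168.0.0/24 subnet does not.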

1.5.3.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.5.3.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.5.4. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.5.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches, such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.5.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
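Before you use the downloaded pull secret, you can confirm that it is valid JSON with the expected auths section. A minimal sketch (the path is simply wherever you saved the file):

```python
import json

def load_pull_secret(path: str) -> dict:
    # Parse the downloaded pull secret and confirm it contains the
    # expected top-level "auths" map; the file path is up to you.
    with open(path) as fh:
        secret = json.load(fh)
    if "auths" not in secret:
        raise ValueError("pull secret is missing its 'auths' section")
    return secret
```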

1.5.7. Manually creating the installation configuration file

For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.



2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.5.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.11. Required parameters

Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

CHAPTER 1 INSTALLING ON AWS

101

baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
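The relationship between metadata.name and baseDomain described in the table can be expressed directly. A minimal sketch (the names are illustrative):

```python
def cluster_domain(metadata_name: str, base_domain: str) -> str:
    # Full cluster DNS name, per the table: <metadata.name>.<baseDomain>
    return f"{metadata_name}.{base_domain}"
```

A cluster named dev under example.com therefore lives at dev.example.com.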


Table 1.12. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following Machine-pool table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking

The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

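The hostPrefix arithmetic in Table 1.12 works out mechanically from the number of remaining host bits. A minimal sketch (the values follow the table's own example):

```python
def addresses_per_node(host_prefix: int) -> int:
    # A hostPrefix of 23 leaves 32 - 23 = 9 host bits, so each node is
    # allocated 2**9 = 512 addresses; the table's example of 24 gives 256.
    return 2 ** (32 - host_prefix)
```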

Table 1.13. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.5.7.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
publish: Internal 18

1 10 11 15 Required. The installation program prompts you for this value.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.

1.5.7.3. Configuring the cluster-wide proxy during installation


Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
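The noProxy matching rules described above can be illustrated with a short sketch. This is illustrative Python only, not the proxy implementation that OpenShift actually uses, and the function name bypasses_proxy is hypothetical; real noProxy handling also accepts CIDR ranges, which this sketch omits:

```python
def bypasses_proxy(host, no_proxy_entries):
    """Illustrative sketch of the noProxy matching rules described above."""
    for entry in no_proxy_entries:
        if entry == "*":
            return True  # * bypasses the proxy for all destinations
        if entry.startswith("."):
            # A leading dot matches subdomains only:
            # .y.com matches x.y.com, but not y.com
            if host.endswith(entry):
                return True
        elif host == entry:
            return True
    return False

print(bypasses_proxy("x.y.com", [".y.com"]))  # subdomain matches
print(bypasses_proxy("y.com", [".y.com"]))    # bare domain does not
```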


4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.5.8. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster

Obtain the OpenShift Container Platform installation program and the pull secret for yourcluster

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2


When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.5.9. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.5.9.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: kubeadmin, password: 4vYBz-Ee6gm-ymBZj-Wt5AL
INFO Time elapsed: 36m22s


2 Select your infrastructure provider and if applicable your installation type

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI it is available using the oc command

1.5.9.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

After you install the CLI it is available using the oc command

1.5.9.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type


$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI it is available using the oc command

1.5.10. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster

You installed the oc CLI

Procedure

1 Export the kubeadmin credentials

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.5.11. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin


You completed a cluster installation and all cluster Operators are available

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.5.12. Next steps

Validating an installation

Customize your cluster

If necessary you can opt out of remote health reporting

1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the government region, modify parameters in the install-config.yaml file before you install the cluster.

1.6.1. Prerequisites

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
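The console route above follows the console-openshift-console.apps.<cluster_name>.<base_domain> pattern, where <cluster_name> comes from metadata.name and <base_domain> from baseDomain in the install-config.yaml file. A minimal sketch, with placeholder values:

```python
cluster_name = "mycluster"   # metadata.name from install-config.yaml (placeholder)
base_domain = "example.com"  # baseDomain from install-config.yaml (placeholder)

# The web console route published for the cluster
console_url = f"https://console-openshift-console.apps.{cluster_name}.{base_domain}"
print(console_url)
```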


Review details about the OpenShift Container Platform installation and update processes

Configure an AWS account to host the cluster

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall you must configure it to allow the sites that your cluster requires access to

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.6.2. AWS government regions

OpenShift Container Platform supports deploying a cluster to AWS GovCloud (US) regions. AWS GovCloud is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.

These regions do not have published Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) to select, so you must upload a custom AMI that belongs to that region.

The following AWS GovCloud partitions are supported

us-gov-west-1

us-gov-east-1

The AWS GovCloud region and custom AMI must be manually configured in the install-config.yaml file, since RHCOS AMIs are not provided by Red Hat for those regions.

1.6.3. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

NOTE

Public zones are not supported in Route 53 in AWS GovCloud. Therefore, clusters must be private if they are deployed to an AWS government region.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.


To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision, the hosts on the network that you provision, and the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.6.3.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the Internet to access the AWS APIs.

The following items are not required or created when you install a private cluster

Public subnets

Public load balancers which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.

1.6.3.1.1. Limitations

The ability to add public functionality to a private cluster is limited.

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the Internet on 6443 (Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.

1.6.4. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.


1.6.4.1. Requirements for using your VPC

The installation program no longer creates the following components

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
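The endpoint names above all follow a single <service>.<region>.amazonaws.com pattern, so they can be generated for whatever region you deploy to. A small sketch (the region value is an example only):

```python
region = "us-gov-west-1"  # example region; substitute your own
services = ["ec2", "elasticloadbalancing", "s3"]

# Build the VPC endpoint names in the <service>.<region>.amazonaws.com form
endpoints = [f"{service}.{region}.amazonaws.com" for service in services]
for endpoint in endpoints:
    print(endpoint)
```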

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.


Component

AWS type Description

VPC AWS::EC2::VPC

AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets AWS::EC2::Subnet

AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway AWS::EC2::InternetGateway

AWS::EC2::VPCGatewayAttachment

AWS::EC2::RouteTable

AWS::EC2::Route

AWS::EC2::SubnetRouteTableAssociation

AWS::EC2::NatGateway

AWS::EC2::EIP

You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Network access control

AWS::EC2::NetworkAcl

AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

Port Reason

80 Inbound HTTP traffic

443 Inbound HTTPS traffic

22 Inbound SSH traffic

1024 - 65535 Inbound ephemeral traffic

0 - 65535 Outbound ephemeral traffic


Private subnets AWS::EC2::Subnet

AWS::EC2::RouteTable

AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.


1.6.4.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist

You provide private subnets

The subnet CIDRs belong to the machine CIDR that you specified

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
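One of the checks above, that each subnet CIDR belongs to the machine CIDR, can be reproduced locally with Python's standard ipaddress module before you run the installer. The CIDR values below are examples only, not values the installer requires:

```python
import ipaddress

# networking.machineNetwork[].cidr from install-config.yaml (example value)
machine_cidr = ipaddress.ip_network("10.0.0.0/16")

# Example subnet CIDRs to validate against the machine CIDR
subnet_cidrs = ["10.0.0.0/19", "10.0.32.0/19"]

for cidr in subnet_cidrs:
    subnet = ipaddress.ip_network(cidr)
    # Each provided subnet must fall inside the machine CIDR range
    print(cidr, "ok" if subnet.subnet_of(machine_cidr) else "OUTSIDE machine CIDR")
```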

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.6.4.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.6.4.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:


You can install multiple OpenShift Container Platform clusters in the same VPC

ICMP ingress is allowed from the entire network

TCP 22 ingress (SSH) is allowed to the entire network

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network

Control plane TCP 22623 ingress (MCS) is allowed to the entire network

1.6.5. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service

1.6.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment you require disaster recovery and debugging


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2 Start the ssh-agent process as a background task

Example output

3 Add your SSH private key to the ssh-agent

Example output

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.6.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


You have a computer that runs Linux or macOS with 500 MB of local disk space

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.6.8. Manually creating the installation configuration file

When installing OpenShift Container Platform on Amazon Web Services (AWS) into a region requiring a custom Red Hat Enterprise Linux CoreOS (RHCOS) AMI, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1 Create an installation directory to store your required installation assets in


$ tar xvf openshift-install-linux.tar.gz

$ mkdir <installation_directory>

CHAPTER 1 INSTALLING ON AWS


IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
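The backup itself is a single copy command. A sketch with hypothetical paths (./ocp-install and ./backups are illustrative; substitute your own installation directory):

```shell
# Hypothetical installation directory and backup location.
install_dir=./ocp-install
mkdir -p "$install_dir" ./backups
# Stand-in for the install-config.yaml you customized in the previous step.
printf 'apiVersion: v1\nbaseDomain: example.com\n' > "$install_dir/install-config.yaml"
# Copy the file aside before the installer consumes and deletes it.
cp "$install_dir/install-config.yaml" ./backups/install-config.yaml.bak
```

To reuse the configuration for another cluster, copy the backup into a fresh installation directory.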

1.6.8.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.14. Required parameters

Parameter Description Values

apiVersion
The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

OpenShift Container Platform 4.6 Installing on AWS


baseDomain
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata
Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name
The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret
Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}


Table 1.15. Optional parameters

Parameter Description Values

additionalTrustBundle
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute
The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following Machine-pool table.

compute.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name
Required if you use compute. The name of the machine pool.

worker

compute.platform
Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane
The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name
Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform
Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("")

fips
Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources
Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors
Specify one or more repositories that may also contain the same images.

Array of strings

networking
The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork
The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects


networking.clusterNetwork.cidr
Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork
The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr
Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork
The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

Table 1.16. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID
The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops
The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size
The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type
The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type
The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones
The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID
The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type
The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID
The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name
The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url
The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags
A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets
If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.

1.6.8.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-gov-west-1a
      - us-gov-west-1b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-gov-west-1c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-gov-west-1
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 11
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
publish: Internal 17

1 10 14 Required.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen (-), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

11 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

12 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

17 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.
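The hostPrefix value in the networking section determines how many pod addresses each node receives. A quick sketch of the arithmetic (the /23 value is the documented default):

```shell
# A hostPrefix of 23 leaves 32 - 23 = 9 host bits per node,
# so each node gets 2^9 = 512 pod addresses from the clusterNetwork CIDR.
host_prefix=23
addresses=$(( 1 << (32 - host_prefix) ))
echo "$addresses addresses per node"
```

Smaller prefixes give each node more addresses but allow fewer nodes within the same clusterNetwork block.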

1.6.8.3. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI, and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then, the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

1.6.8.4. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account

You created an Amazon S3 bucket with the required IAM service role

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

Procedure

1. Export your AWS profile as an environment variable:

$ export AWS_PROFILE=<aws_profile> 1

1 The AWS profile name that holds your AWS credentials, like govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

$ export AWS_DEFAULT_REGION=<aws_region> 1

1 The AWS region, like us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

$ export RHCOS_VERSION=<version> 1

1 The RHCOS VMDK version, like 4.6.0.


4. Export the Amazon S3 bucket name as an environment variable:

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

5. Create the containers.json file and define your RHCOS VMDK file:

$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF

6. Import the RHCOS disk as an Amazon EBS snapshot:

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2

1 The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.

2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

7. Check the status of the image import:

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

Example output

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 8190566400.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
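Instead of reading the snapshot ID out of the watch output by eye, you can parse it from a saved response. A sketch that uses a stand-in tasks.json file in place of the live `aws ec2 describe-import-snapshot-tasks` output:

```shell
# Stand-in for the describe-import-snapshot-tasks response; in practice,
# redirect the aws CLI output to this file.
cat > tasks.json <<'EOF'
{"ImportSnapshotTasks":[{"SnapshotTaskDetail":{"SnapshotId":"snap-06331325870076318","Status":"completed"}}]}
EOF
# Extract the SnapshotId once the task reports Status "completed".
python3 - <<'PY'
import json
detail = json.load(open("tasks.json"))["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
assert detail["Status"] == "completed", "import not finished yet"
print(detail["SnapshotId"])
PY
```

The printed value is what the next step's register-image command needs for its SnapshotId.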



Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

1 The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.

2 The Description from the imported snapshot.

3 The name of the RHCOS AMI.

4 The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.6.8.5. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy, if necessary.


$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \ 1
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4



NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.


apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
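The noProxy matching rule (a leading dot matches subdomains only) can be surprising. A sketch of that rule as a hypothetical shell helper, purely to illustrate the semantics described in callout 3; it is not part of OpenShift:

```shell
# Returns success if "host" would bypass a proxy given one noProxy entry.
# Entries starting with "." match subdomains only; "*" matches everything.
bypasses_proxy() {
  host=$1 entry=$2
  case $entry in
    \*) return 0 ;;                                    # wildcard: bypass all
    .*) case $host in *"$entry") return 0 ;; *) return 1 ;; esac ;;
    *)  [ "$host" = "$entry" ] && return 0             # exact match
        case $host in *".$entry") return 0 ;; *) return 1 ;; esac ;;
  esac
}
bypasses_proxy x.y.com .y.com && echo "x.y.com matches .y.com"
bypasses_proxy y.com .y.com   || echo "y.com does not match .y.com"
```

So to bypass the proxy for both a domain and its subdomains, list both forms, for example y.com,.y.com.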



NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.6.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

1 For <installation_directory>, specify the location of your customized install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2


Example output

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.6.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.6.10.1. Installing the OpenShift CLI on Linux

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: kubeadmin, password: 4vYBz-Ee6gm-ymBZj-Wt5AL
INFO Time elapsed: 36m22s


You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:
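Checking the PATH by eye is error-prone; a small sketch that creates a target directory, puts it on the PATH, and confirms it is really there ($HOME/bin is an assumption; any directory works):

```shell
# Create a directory for the oc binary and put it on the PATH.
mkdir -p "$HOME/bin"
export PATH="$HOME/bin:$PATH"
# Wrap PATH in colons so the match works for the first and last entries too.
case ":$PATH:" in
  *":$HOME/bin:"*) echo "PATH ok" ;;
  *)               echo "PATH is missing $HOME/bin" ;;
esac
```

Add the export line to your shell profile to make the change permanent.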

1.6.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command:

1.6.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>



Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.6.11. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster

You installed the oc CLI

Procedure

1. Export the kubeadmin credentials:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.6.12. Logging in to the cluster by using the web console

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host

You completed a cluster installation and all cluster Operators are available

Procedure

1 Obtain the password for the kubeadmin user from the kubeadmin-password file on theinstallation host

NOTE

Alternatively you can obtain the kubeadmin password from the ltinstallation_directorygtopenshift_installlog log file on the installation host

2 List the OpenShift Container Platform web console route

NOTE

Alternatively you can obtain the OpenShift Container Platform route from the ltinstallation_directorygtopenshift_installlog log file on the installation host

Example output

3 Navigate to the route detailed in the output of the preceding command in a web browser andlog in as the kubeadmin user

Additional resources

See Accessing the web console for more details about accessing and understanding theOpenShift Container Platform web console

1613 Next steps

Validating an installation

Customize your cluster

If necessary you can opt out of remote health reporting


$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.7.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
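As an illustration, a key-based, long-lived profile in the ~/.aws/credentials file has the following shape. The values shown are the placeholder keys used in AWS's own documentation, and the point of the example is the absence of an aws_session_token entry:

```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```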

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall, you configured it to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.7.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.7.3. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node


Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.7.3.1. Cluster machines

You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

Table 1.17. Instance types for machines

Instance type Bootstrap Control plane Compute

i3.large x
m4.large x
m4.xlarge x x
m4.2xlarge x x
m4.4xlarge x x
m4.8xlarge x x
m4.10xlarge x x
m4.16xlarge x x
m5.large x
m5.xlarge x x
m5.2xlarge x x
m5.4xlarge x x
m5.8xlarge x x
m5.10xlarge x x
m5.16xlarge x x
c4.large x
c4.xlarge x
c4.2xlarge x x
c4.4xlarge x x
c4.8xlarge x x
r4.large x
r4.xlarge x x
r4.2xlarge x x
r4.4xlarge x x
r4.8xlarge x x
r4.16xlarge x x

You might be able to use other instance types that meet the specifications of these instance types.

1.7.3.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
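As a sketch of that post-installation flow, pending CSRs can be listed and, after you verify them, approved with the oc CLI; the CSR name shown is a placeholder:

```
$ oc get csr

$ oc adm certificate approve <csr_name>
```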

1.7.3.3. Other infrastructure components

A VPC

DNS entries


Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups

IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
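For illustration, the EC2 endpoint could be declared as the following CloudFormation fragment. The VpcId and SubnetIds parameter names are assumptions, and the ServiceName uses the com.amazonaws.<region>.ec2 form that corresponds to the ec2.<region>.amazonaws.com endpoint above:

```yaml
Resources:
  Ec2VpcEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: !Sub "com.amazonaws.${AWS::Region}.ec2"
      VpcEndpointType: Interface
      VpcId: !Ref VpcId          # assumed parameter
      SubnetIds: !Ref SubnetIds  # assumed parameter
      PrivateDnsEnabled: true
```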

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component AWS type Description

VPC AWS::EC2::VPC

AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets AWS::EC2::Subnet

AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway AWS::EC2::InternetGateway

AWS::EC2::VPCGatewayAttachment

AWS::EC2::RouteTable

AWS::EC2::Route

AWS::EC2::SubnetRouteTableAssociation

AWS::EC2::NatGateway

AWS::EC2::EIP

You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.


Network access control

AWS::EC2::NetworkAcl

AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

Port Reason

80 Inbound HTTP traffic

443 Inbound HTTPS traffic

22 Inbound SSH traffic

1024 - 65535 Inbound ephemeral traffic

0 - 65535 Outbound ephemeral traffic

Private subnets AWS::EC2::Subnet

AWS::EC2::RouteTable

AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.


Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
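As an illustration (not one of the provided templates), the internal listener for port 6443 could be declared in CloudFormation as follows; the InternalApiNlb and InternalApiTargetGroup resource names are assumptions:

```yaml
Resources:
  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref InternalApiNlb           # assumed network load balancer
      Port: 6443
      Protocol: TCP
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup  # assumed target group
```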

Component AWS type Description

DNS AWS::Route53::HostedZone

The hosted zone for your internal DNS.


etcd record sets

AWS::Route53::RecordSet

The registration records for etcd for your control plane machines.

Public load balancer

AWS::ElasticLoadBalancingV2::LoadBalancer

The load balancer for your public subnets.

External API server record

AWS::Route53::RecordSetGroup

Alias records for the external API server.

External listener

AWS::ElasticLoadBalancingV2::Listener

A listener on port 6443 for the external load balancer.

External target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the external load balancer.

Private load balancer

AWS::ElasticLoadBalancingV2::LoadBalancer

The load balancer for your private subnets.

Internal API server record

AWS::Route53::RecordSetGroup

Alias records for the internal API server.

Internal listener AWS::ElasticLoadBalancingV2::Listener

A listener on port 22623 for the internal load balancer.

Internal target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the internal load balancer.

Internal listener AWS::ElasticLoadBalancingV2::Listener

A listener on port 6443 for the internal load balancer.


Internal target group

AWS::ElasticLoadBalancingV2::TargetGroup

The target group for the internal load balancer.


Security groups

The control plane and worker machines require access to the following ports:

Group Type IP Protocol Port range

MasterSecurityGroup

AWS::EC2::SecurityGroup

icmp 0

tcp 22

tcp 6443

tcp 22623

WorkerSecurityGroup

AWS::EC2::SecurityGroup

icmp 0

tcp 22

BootstrapSecurityGroup

AWS::EC2::SecurityGroup

tcp 22

tcp 19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group Description IP protocol Port range

MasterIngressEtcd

etcd tcp 2379 - 2380

MasterIngressVxlan

Vxlan packets udp 4789

MasterIngressWorkerVxlan

Vxlan packets udp 4789

MasterIngressInternal

Internal cluster communication and Kubernetes proxy metrics

tcp 9000 - 9999


MasterIngressWorkerInternal

Internal cluster communication tcp 9000 - 9999

MasterIngressKube

Kubernetes kubelet, scheduler, and controller manager

tcp 10250 - 10259

MasterIngressWorkerKube

Kubernetes kubelet, scheduler, and controller manager

tcp 10250 - 10259

MasterIngressIngressServices

Kubernetes Ingress services tcp 30000 - 32767

MasterIngressWorkerIngressServices

Kubernetes Ingress services tcp 30000 - 32767

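For illustration, one of the control plane Ingress rules above, expressed as a CloudFormation resource; the MasterSecurityGroup resource is assumed to be defined elsewhere in the template:

```yaml
MasterIngressEtcd:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !GetAtt MasterSecurityGroup.GroupId          # assumed security group
    Description: etcd
    IpProtocol: tcp
    FromPort: 2379
    ToPort: 2380
    SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
```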

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group Description IP protocol Port range

WorkerIngressVxlan

Vxlan packets udp 4789

WorkerIngressWorkerVxlan

Vxlan packets udp 4789

WorkerIngressInternal

Internal cluster communication tcp 9000 - 9999

WorkerIngressWorkerInternal

Internal cluster communication tcp 9000 - 9999

WorkerIngressKube

Kubernetes kubelet, scheduler, and controller manager

tcp 10250

WorkerIngressWorkerKube

Kubernetes kubelet, scheduler, and controller manager

tcp 10250

WorkerIngressIngressServices

Kubernetes Ingress services tcp 30000 - 32767


WorkerIngressWorkerIngressServices

Kubernetes Ingress services tcp 30000 - 32767


Roles and instance profiles

You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role Effect Action Resource

Master Allow ec2:*

Allow elasticloadbalancing:*

Allow iam:PassRole

Allow s3:GetObject

Worker Allow ec2:Describe*

Bootstrap Allow ec2:Describe*

Allow ec2:AttachVolume

Allow ec2:DetachVolume
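For example, the bootstrap role's individual permissions above could be written as the following IAM policy document; scoping Resource to "*" mirrors the broad permissions and is an assumption:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}
```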

1.7.3.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:

Example 1.12. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.13. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.14. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener


Example 1.15. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.16. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.17. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.18. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.19. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.20. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.21. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.22. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas


1.7.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.7.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


$ tar xvf openshift-install-linux.tar.gz


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.7.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


customize the install-config.yaml file, the Kubernetes manifests, and the Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.7.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and to reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage

$ mkdir $HOME/clusterconfig

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key ...
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml


device device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory.

1 The storage device name of the disk that you want to partition.

2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth bootstrap.ign master.ign metadata.json worker.ign


1.7.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

NOTE

$ openshift-install create install-config --dir=<installation_directory> 1

CHAPTER 1. INSTALLING ON AWS


The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.

1.7.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
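The population described above can be sketched as follows. This is an illustration of the documented merge behavior, not the actual installer code; the CIDR values are example assumptions.

```python
# Sketch: how status.noProxy is derived (assumption based on the documented
# behavior). Example CIDRs stand in for the installation configuration values.
user_no_proxy = [".example.com"]      # from spec.noProxy
machine_network = ["10.0.0.0/16"]     # networking.machineNetwork[].cidr
cluster_network = ["10.128.0.0/14"]   # networking.clusterNetwork[].cidr
service_network = ["172.30.0.0/16"]   # networking.serviceNetwork[]

status_no_proxy = sorted(set(
    user_no_proxy + machine_network + cluster_network + service_network
    + ["169.254.169.254"]             # cloud instance metadata endpoint
))
print(",".join(status_no_proxy))
```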



Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
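The subdomain-matching rule for noProxy entries can be modeled in a few lines. This is a simplified illustration of the documented rule, not the proxy implementation the cluster actually uses.

```python
# Simplified model of the documented noProxy matching rule (illustration only).
def bypasses_proxy(host: str, no_proxy_entry: str) -> bool:
    if no_proxy_entry == "*":
        return True  # "*" bypasses the proxy for all destinations
    if no_proxy_entry.startswith("."):
        # A leading dot matches subdomains only: ".y.com" matches "x.y.com"
        # but not "y.com".
        return host.endswith(no_proxy_entry)
    return host == no_proxy_entry

print(bypasses_proxy("x.y.com", ".y.com"), bypasses_proxy("y.com", ".y.com"))
```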

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----



Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.7.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

Example output

For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ openshift-install create manifests --dir=<installation_directory> 1

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml



Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.
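The check in the steps above can be sketched as a quick scan of the manifest text. This is an assumption-laden illustration (the sample manifest is made up, and a real check would parse the YAML properly), not installer tooling.

```python
# Minimal sketch: verify mastersSchedulable is false in a scheduler manifest.
# The sample manifest below is illustrative, not the file's exact contents.
sample_manifest = """\
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false
"""

def masters_schedulable(manifest_text: str) -> bool:
    # Naive line scan; a real check would use a YAML parser.
    for line in manifest_text.splitlines():
        if line.strip().startswith("mastersSchedulable:"):
            return line.split(":", 1)[1].strip() == "true"
    return False

print(masters_schedulable(sample_manifest))
```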

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

1 2 Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

$ openshift-install create ignition-configs --dir=<installation_directory> 1



1.7.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites

You obtained the OpenShift Container Platform installation program and the pull secret foryour cluster

You generated the Ignition config files for your cluster

You installed the jq package

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

The output of this command is your cluster name and a random string.
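The jq extraction above is equivalent to reading the infraID key from metadata.json, which you can do with any JSON tooling. In this sketch, the metadata payload and the infraID value are made-up illustrations.

```python
import json

# A made-up metadata.json payload; the infraID value is illustrative only.
metadata = '{"clusterName": "openshift", "infraID": "openshift-vw9j6"}'

infra_id = json.loads(metadata)["infraID"]
print(infra_id)
```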

1.7.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

$ jq -r .infraID <installation_directory>/metadata.json 1

openshift-vw9j6 1



You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

1 The CIDR block for the VPC.

2 Specify a CIDR block in the format x.x.x.x/16-24.

3 The number of availability zones to deploy the VPC in.

4 Specify an integer between 1 and 3.

5 The size of each subnet in each availability zone.

6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
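The SubnetBits-to-netmask relationship above is just "32 minus the host bits", which can be verified in a one-liner:

```python
# SubnetBits counts host bits per subnet, so the netmask is 32 minus it
# (5 -> /27, 13 -> /19). Bounds mirror the template's Min/MaxValue.
def subnet_prefix(subnet_bits: int) -> int:
    assert 5 <= subnet_bits <= 13
    return 32 - subnet_bits

print(subnet_prefix(5), subnet_prefix(13))
```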

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line

[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": "1" 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": "12" 6
  }
]

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3



1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

VpcId

The ID of your VPC.

PublicSubnetIds

The IDs of the new public subnets

PrivateSubnetIds

The IDs of the new private subnets

1.7.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.23. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3

OpenShift Container Platform 46 Installing on AWS

174

    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC

CHAPTER 1 INSTALLING ON AWS

175

      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - Fn::GetAZs: !Ref "AWS::Region"

OpenShift Container Platform 46 Installing on AWS

176

  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId: !GetAtt
      - EIP
      - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2

CHAPTER 1 INSTALLING ON AWS

177

    Properties:
      AllocationId: !GetAtt
      - EIP2
      - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - Fn::GetAZs: !Ref "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId: !GetAtt
      - EIP3
      - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc

OpenShift Container Platform 46 Installing on AWS

178

  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value: !Join [
      ",",
      [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
    ]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value: !Join [
      ",",
      [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
    ]
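The template's subnet carving, `!Select [N, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]`, can be approximated with the standard library: six equal subnets out of the VPC CIDR, indexes 0-2 used for the public subnets and 3-5 for the private ones. This is a rough emulation under the example parameter values, not CloudFormation itself.

```python
import ipaddress

# Emulate !Cidr [VpcCidr, 6, SubnetBits] with the doc's example parameters.
vpc_cidr = "10.0.0.0/16"
subnet_bits = 12                 # host bits per subnet -> /20 netmask
new_prefix = 32 - subnet_bits

subnets = list(ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix))[:6]
# Indexes 0-2 back the public subnets, 3-5 the private subnets.
print([str(s) for s in subnets])
```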


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC)

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

1 For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
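Route 53 reports zone IDs with a "/hostedzone/" prefix, which you strip to get the bare ID used in the parameter file. A small helper sketch (the sample value mirrors the example output above):

```python
# Strip the "/hostedzone/" prefix that Route 53 puts on hosted zone IDs.
def hosted_zone_id(field: str) -> str:
    prefix = "/hostedzone/"
    return field[len(prefix):] if field.startswith(prefix) else field

print(hosted_zone_id("/hostedzone/Z21IXYZABCZ2A4"))
```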

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1

mycluster.example.com.  False   100
HOSTEDZONES     65F8F38E-2268-B835-E15C-AB55336FCBFA    /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10



1 A short, representative cluster name to use for host names, etc.

2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

5 The Route 53 public zone ID to register the targets with.

6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

7 The Route 53 zone to register the targets with.

8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

9 The public subnets that you created for your VPC.

[
  {
    "ParameterKey": "ClusterName", 1
    "ParameterValue": "mycluster" 2
  },
  {
    "ParameterKey": "InfrastructureName", 3
    "ParameterValue": "mycluster-<random_string>" 4
  },
  {
    "ParameterKey": "HostedZoneId", 5
    "ParameterValue": "<random_string>" 6
  },
  {
    "ParameterKey": "HostedZoneName", 7
    "ParameterValue": "example.com" 8
  },
  {
    "ParameterKey": "PublicSubnets", 9
    "ParameterValue": "subnet-<random_string>" 10
  },
  {
    "ParameterKey": "PrivateSubnets", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "VpcId", 13
    "ParameterValue": "vpc-<random_string>" 14
  }
]



10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

11 The private subnets that you created for your VPC.

12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

13 The VPC that you created for the cluster.

14 Specify the VpcId value from the output of the CloudFormation template for the VPC.
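From these parameters, the stack builds the cluster's private zone and its API records by joining the cluster name and base domain. This sketch shows the resulting names using the example values above (mycluster / example.com are the doc's examples, not requirements):

```python
# Illustration: DNS names the stack's Route 53 records resolve, built from
# the example ClusterName and HostedZoneName parameter values.
cluster_name = "mycluster"
hosted_zone_name = "example.com"

cluster_domain = f"{cluster_name}.{hosted_zone_name}"   # the private zone name
api_record = f"api.{cluster_domain}"                    # external and internal
api_int_record = f"api-int.{cluster_domain}"            # internal only
print(api_record, api_int_record)
```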

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line

1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183


After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster.

PrivateHostedZoneId

Hosted zone ID for the private DNS

ExternalApiLoadBalancerName

Full name of the external API load balancer

InternalApiLoadBalancerName

Full name of the internal API load balancer

ApiServerDnsName

Full host name of the API server

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn

ARN of external API target group

InternalApiTargetGroupArn

ARN of internal API target group

InternalServiceTargetGroupArn

ARN of internal service target group

1.7.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.24. CloudFormation template for the network and load balancers

$ aws cloudformation describe-stacks --stack-name ltnamegt

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:


    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:

OpenShift Container Platform 46 Installing on AWS

184

      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [


            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref ExternalApiTargetGroup
      LoadBalancerArn: !Ref ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP


      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalServiceTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS


      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"


      Role: !GetAtt
      - RegisterTargetLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags",
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
              ]
            Resource: "*"

CHAPTER 1 INSTALLING ON AWS


  RegisterSubnetTags:
    Type: AWS::Lambda::Function
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - "RegisterSubnetTagsLambdaIamRole"
      - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.


IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

You can view details about your hosted zones by navigating to the AWS Route 53 console.

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.7.10. Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName



Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The CIDR block for the VPC

Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.

The private subnets that you created for your VPC

Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster

Specify the VpcId value from the output of the CloudFormation template for the VPC
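The infrastructure name described above can also be read programmatically from the metadata.json file that the installation program writes to your installation directory; a minimal sketch (the file path is an assumption — point it at your own installation directory):

```python
import json

# Sketch: read the infrastructure name from the installer's metadata.json.
# "metadata.json" is assumed to live in the current directory; adjust as needed.
def infrastructure_name(path="metadata.json"):
    with open(path) as f:
        return json.load(f)["infraID"]  # has the form <cluster-name>-<random-string>
```

This is equivalent to the documented `jq -r .infraID metadata.json` extraction used elsewhere in this guide.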


[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },
  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]
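The VpcCidr parameter must satisfy the template's x.x.x.x/16-24 constraint. A quick stdlib check, offered as a sketch for validating the value before you launch the stack:

```python
import ipaddress

# Sketch: check a VpcCidr value against the template's constraint —
# a valid IPv4 network with a prefix length between /16 and /24.
def valid_vpc_cidr(cidr: str) -> bool:
    try:
        net = ipaddress.ip_network(cidr, strict=True)  # rejects host bits set
    except ValueError:
        return False
    return net.version == 4 and 16 <= net.prefixlen <= 24
```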


1

2

3

4

2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>


MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile
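The output values listed above can be pulled out of the `aws cloudformation describe-stacks` response programmatically. A sketch, assuming you ran the CLI with `--output json` and parsed the result with json.load:

```python
# Sketch: map OutputKey -> OutputValue from a parsed `aws cloudformation
# describe-stacks` response so the values can be fed to the next templates.
def stack_outputs(response: dict) -> dict:
    outputs = response["Stacks"][0].get("Outputs", [])
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}
```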

1.7.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.25. CloudFormation template for security objects

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp


  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - !Ref MasterIamRole

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - !Ref WorkerIamRole

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile
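The ingress rules in the template above reduce to a small node-to-node port matrix. The sketch below transcribes that matrix as data so you can cross-check a deployed security group against it; the port numbers are taken from the template itself:

```python
# Node-to-node port ranges opened by the template's ingress rules,
# as (protocol, from_port, to_port) tuples transcribed from the template.
MASTER_INGRESS = {
    "etcd":                      ("tcp", 2379, 2380),
    "vxlan":                     ("udp", 4789, 4789),
    "geneve":                    ("udp", 6081, 6081),
    "internal-tcp":              ("tcp", 9000, 9999),
    "internal-udp":              ("udp", 9000, 9999),
    "kube-scheduler-controller": ("tcp", 10250, 10259),
    "ingress-services-tcp":      ("tcp", 30000, 32767),
    "ingress-services-udp":      ("udp", 30000, 32767),
}

# Workers open the same ranges except etcd; the kubelet rule is 10250 only.
WORKER_INGRESS = {k: v for k, v in MASTER_INGRESS.items() if k != "etcd"}
WORKER_INGRESS["kube-scheduler-controller"] = ("tcp", 10250, 10250)

def covers(rule, protocol, port):
    """True if (protocol, port) falls inside the given rule tuple."""
    proto, lo, hi = rule
    return proto == protocol and lo <= port <= hi
```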


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.18. RHCOS AMIs

AWS zone AWS AMI

af-south-1 ami-09921c9c1c36e695c

ap-east-1 ami-01ee8446e9af6b197

ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile


eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1

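For scripted lookups, a few rows of the table above can be transcribed into a mapping; the table itself is the authoritative source, and you should extend this sketch with the rows you need:

```python
# A few rows of Table 1.18 transcribed as data; extend as needed.
RHCOS_AMI_BY_REGION = {
    "us-east-1": "ami-04a16d506e5b0e246",
    "us-east-2": "ami-0a1f868ad58ea59a7",
    "us-west-1": "ami-0a65d76e3a6f6622f",
    "us-west-2": "ami-0dd9008abadc519f1",
}

def rhcos_ami(region: str) -> str:
    """Return the published RHCOS AMI for a region, or raise KeyError."""
    try:
        return RHCOS_AMI_BY_REGION[region]
    except KeyError:
        raise KeyError(f"no published RHCOS AMI recorded for {region}; "
                       "upload a custom AMI instead")
```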

1.7.11.1. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.
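As a sketch, a custom AMI is wired into install-config.yaml through the AWS platform section; the region and AMI ID below are placeholders, and the amiID field name follows the installer's install-config schema:

```yaml
# Fragment of install-config.yaml: point the installer at a custom RHCOS AMI.
# The region and AMI ID values are placeholders.
platform:
  aws:
    region: us-gov-east-1
    amiID: ami-<random_string>
```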

1.7.11.2. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account

You created an Amazon S3 bucket with the required IAM service role

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.




You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
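The prerequisite above requires the highest RHCOS VMDK version that does not exceed the OpenShift Container Platform version. A sketch of that selection rule, using numeric tuple comparison (the version lists here are hypothetical):

```python
# Sketch: choose the RHCOS VMDK version to upload — the highest available
# version that is less than or equal to the target OCP version.
def pick_vmdk_version(available, target):
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    candidates = [v for v in available if as_tuple(v) <= as_tuple(target)]
    if not candidates:
        raise ValueError(f"no RHCOS build at or below {target}")
    return max(candidates, key=as_tuple)
```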

Procedure

1. Export your AWS profile as an environment variable:

The AWS profile name that holds your AWS credentials, like govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

The AWS region, like us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

The RHCOS VMDK version, like 4.6.0.

4. Export the Amazon S3 bucket name as an environment variable:

5. Create the containers.json file and define your RHCOS VMDK file:

6. Import the RHCOS disk as an Amazon EBS snapshot:

The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.

The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

$ export AWS_PROFILE=<aws_profile> 1

$ export AWS_DEFAULT_REGION=<aws_region> 1

$ export RHCOS_VERSION=<version> 1

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2



7. Check the status of the image import:

Example output

Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.

The Description from the imported snapshot.

The name of the RHCOS AMI.

The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.7.12. Creating the bootstrap node in AWS

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region $AWS_DEFAULT_REGION

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 819056640.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}

$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \ 1
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4
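Instead of copying the SnapshotId by hand, you can pull it out of the parsed describe-import-snapshot-tasks response shown in the example output above; a sketch:

```python
# Sketch: pull the SnapshotId from a parsed `aws ec2
# describe-import-snapshot-tasks` response once the import completes.
def completed_snapshot_id(response: dict):
    for task in response.get("ImportSnapshotTasks", []):
        detail = task["SnapshotTaskDetail"]
        if detail.get("Status") == "completed":
            return detail["SnapshotId"]
    return None  # still importing
```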


You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.


1

1

a. Create the bucket:

<cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

b. Upload the bootstrap.ign Ignition config file to the bucket:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

Example output

2. Create a JSON file that contains the parameter values that the template requires:

$ aws s3 mb s3://<cluster-name>-infra 1

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

$ aws s3 ls s3://<cluster-name>-infra/

2019-04-03 16:15:16     314878 bootstrap.ign

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },



The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

Specify a valid AWS::EC2::Image::Id value.

CIDR block to allow SSH access to the bootstrap node.

Specify a CIDR block in the format x.x.x.x/16-24.

The public subnet that is associated with your VPC to launch the bootstrap node into.

Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

The master security group ID (for registering temporary rules).

Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

The VPC that the created resources will belong to.

  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]



Specify the VpcId value from the output of the CloudFormation template for the VPC

Location to fetch bootstrap Ignition config file from.

Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

Whether or not to register a network load balancer (NLB).

Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

The ARN for NLB IP target registration lambda group.

Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

The ARN for external API load balancer target group.

Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

The ARN for internal API load balancer target group.

Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

The ARN for internal service load balancer target group.

Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
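The callouts above repeatedly note that GovCloud deployments use the arn:aws-us-gov partition instead of arn:aws. A sketch of assembling a Lambda ARN with the partition switched on the region prefix; the account number and function name are placeholders:

```python
# Sketch: build the Lambda function ARN with the correct partition —
# "aws-us-gov" for GovCloud regions, "aws" otherwise.
# Account number and function name are placeholders.
def lambda_arn(region: str, account: str, function_name: str) -> str:
    partition = "aws-us-gov" if region.startswith("us-gov-") else "aws"
    return f"arn:{partition}:lambda:{region}:{account}:function:{function_name}"
```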

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

$ aws cloudformation create-stack --stack-name ltnamegt 1 --template-body filelttemplategtyaml 2 --parameters fileltparametersgtjson 3 --capabilities CAPABILITY_NAMED_IAM 4

OpenShift Container Platform 46 Installing on AWS


2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
4. You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

5. Confirm that the template components exist.

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

BootstrapInstanceId

The bootstrap Instance ID

BootstrapPublicIp

The bootstrap node public IP address

BootstrapPrivateIp

The bootstrap node private IP address
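If you script the installation, these output parameters can be read directly from the `aws cloudformation describe-stacks` JSON response instead of being copied by hand. A minimal sketch (the helper name and sample values are illustrative; the response shape follows the AWS CLI JSON output):

```python
import json

def stack_outputs(describe_stacks_json: str) -> dict:
    """Map OutputKey -> OutputValue for the first stack in an
    `aws cloudformation describe-stacks` JSON response."""
    stack = json.loads(describe_stacks_json)["Stacks"][0]
    if stack["StackStatus"] != "CREATE_COMPLETE":
        raise RuntimeError("stack not ready: " + stack["StackStatus"])
    return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

# Abbreviated response in the shape the AWS CLI prints with --output json.
response = json.dumps({
    "Stacks": [{
        "StackName": "cluster-bootstrap",
        "StackStatus": "CREATE_COMPLETE",
        "Outputs": [
            {"OutputKey": "BootstrapInstanceId", "OutputValue": "i-0abc123"},
            {"OutputKey": "BootstrapPublicIp", "OutputValue": "203.0.113.10"},
            {"OutputKey": "BootstrapPrivateIp", "OutputValue": "10.0.0.5"},
        ],
    }]
})

print(stack_outputs(response)["BootstrapPublicIp"])  # prints 203.0.113.10
```

The same helper works unchanged for the control plane and worker stacks later in this procedure, because every template in this topic exposes its values through the stack Outputs section.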

1.7.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.26. CloudFormation template for the bootstrap machine

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String


  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.7.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]


1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
4. Specify an AWS::EC2::Image::Id value.
5. Whether or not to perform DNS etcd registration.
6. Specify yes or no. If you specify yes, you must provide hosted zone information.
7. The Route 53 private zone ID to register the etcd targets with.
8. Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
9. The Route 53 zone to register the targets with.
10. Specify <cluster_name>.<domain_name>, where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
11, 13, 15. A subnet, preferably private, to launch the control plane machines on.
12, 14, 16. Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
17. The master security group ID to associate with master nodes.
18. Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
19. The location to fetch the control plane Ignition config file from.
20. Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
21. The base64 encoded certificate authority string to use.
22. Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
23. The IAM profile to associate with master nodes.
24. Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
25. The type of AWS instance to use for the control plane machines.
26. Allowed values:

m4.xlarge

m4.2xlarge


m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

27. Whether or not to register a network load balancer (NLB).
28. Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
29. The ARN for the NLB IP target registration lambda group.
30. Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
31. The ARN for the external API load balancer target group.
32. Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.


33. The ARN for the internal API load balancer target group.
34. Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
35. The ARN for the internal service load balancer target group.
36. Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
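The CertificateAuthorities value can be extracted from the master.ign file programmatically rather than copied by hand. A minimal sketch, assuming the Ignition v3 layout that the generated config files use (the sample file content is abbreviated and illustrative):

```python
import json

def ca_source(ignition_config: str) -> str:
    """Return the data URL that the CertificateAuthorities parameter
    expects, read from a generated Ignition config (v3 layout)."""
    cfg = json.loads(ignition_config)
    return cfg["ignition"]["security"]["tls"]["certificateAuthorities"][0]["source"]

# Abbreviated master.ign containing only the fields this sketch relies on.
master_ign = json.dumps({
    "ignition": {
        "version": "3.1.0",
        "security": {"tls": {"certificateAuthorities": [
            {"source": "data:text/plain;charset=utf-8;base64,ABC...xYz=="}
        ]}},
    }
})

print(ca_source(master_ign))  # prints data:text/plain;charset=utf-8;base64,ABC...xYz==
```

In practice you would pass the contents of <installation_directory>/master.ign (or worker.ign for the compute machines) instead of the inline sample.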

2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line.

1. <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>


1.7.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.27. CloudFormation template for control plane machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String


  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join [" ", ["0", "10", "2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]]]
      - !Join [" ", ["0", "10", "2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]]]
      - !Join [" ", ["0", "10", "2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]]]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value: !Join [",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]]

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS, load balancers, and listeners in AWS



You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

You created the control plane machines

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]

1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.


3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
4. Specify an AWS::EC2::Image::Id value.
5. A subnet, preferably private, to launch the worker nodes on.
6. Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
7. The worker security group ID to associate with worker nodes.
8. Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
9. The location to fetch the worker Ignition config file from.
10. Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
11. Base64 encoded certificate authority string to use.
12. Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
13. The IAM profile to associate with worker nodes.
14. Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
15. The type of AWS instance to use for the compute machines.
16. Allowed values:

m4.large

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.large

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge


m5.16xlarge

c4.large

c4.xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.large

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the networking objects and load balancers that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3

IMPORTANT

You must enter the command on a single line.

1. <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.



Example output

NOTE

The CloudFormation template creates a stack that represents one worker node.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
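Because each worker requires its own stack, the repeated create-stack invocations are easy to generate in a loop. A minimal sketch (the stack-name scheme and file names are illustrative, not part of the product):

```python
def create_stack_commands(count, template="worker.yaml", params="worker-params.json"):
    """One `aws cloudformation create-stack` command per worker stack,
    reusing the same template and parameter files with distinct names."""
    return [
        "aws cloudformation create-stack"
        f" --stack-name cluster-worker-{i}"
        f" --template-body file://{template}"
        f" --parameters file://{params}"
        for i in range(1, count + 1)  # cluster-worker-1 .. cluster-worker-<count>
    ]

# Print the commands for three workers; run them with your shell of choice.
for cmd in create_stack_commands(3):
    print(cmd)
```

Remember that the cluster requires at least two workers, so the count passed here should be two or more.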

1.7.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.28. CloudFormation template for worker machines

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata AWSCloudFormationInterface ParameterGroups - Label default Cluster Information Parameters - InfrastructureName - Label default Host Information Parameters - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName

CHAPTER 1 INSTALLING ON AWS

233

- Label default Network Configuration Parameters - Subnet ParameterLabels Subnet default Subnet InfrastructureName default Infrastructure Name WorkerInstanceType default Worker Instance Type WorkerInstanceProfileName default Worker Instance Profile Name RhcosAmi default Red Hat Enterprise Linux CoreOS AMI ID IgnitionLocation default Worker Ignition Source CertificateAuthorities default Ignition CA String WorkerSecurityGroupId default Worker Security Group ID

Resources Worker0 Type AWSEC2Instance Properties ImageId Ref RhcosAmi BlockDeviceMappings - DeviceName devxvda Ebs VolumeSize 120 VolumeType gp2 IamInstanceProfile Ref WorkerInstanceProfileName InstanceType Ref WorkerInstanceType NetworkInterfaces - AssociatePublicIpAddress false DeviceIndex 0 GroupSet - Ref WorkerSecurityGroupId SubnetId Ref Subnet UserData FnBase64 Sub - ignitionconfigmerge[source$SOURCE]securitytlscertificateAuthorities[source$CA_BUNDLE]version310 - SOURCE Ref IgnitionLocation CA_BUNDLE Ref CertificateAuthorities Tags - Key Join [ [kubernetesiocluster Ref InfrastructureName]] Value shared

Outputs PrivateIP Description The compute node private IP address Value GetAtt Worker0PrivateIp



Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

You created the worker nodes.

Procedure

1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

To view different installation details, specify warn, debug, or error instead of info.

Example output

$ openshift-install wait-for bootstrap-complete --dir=<installation_directory> \
    --log-level=info

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s


If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

You can view details about the running instances that are created by using the AWS EC2 console.

1.7.16. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.7.16.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive.

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command.

$ tar xvzf <file>

$ echo $PATH
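Checking your PATH in step 5 amounts to confirming that the chosen directory appears as a PATH component. The following helper is an illustrative sketch, not part of the product; the function name and target directory are our own choices:

```shell
# path_contains: succeed if directory $1 appears as a component of PATH.
path_contains() {
  case ":${PATH}:" in
    *":${1}:"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: verify the target directory before moving the oc binary there.
if path_contains /usr/local/bin; then
  echo "/usr/local/bin is on PATH"
else
  echo "/usr/local/bin is NOT on PATH; add it or choose another directory"
fi
```

The leading and trailing colons make the match exact per component, so /usr/local/bin does not falsely match /usr/local/bin2.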


1.7.16.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command.

1.7.16.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command.

1.7.17. Logging in to the cluster by using the CLI

$ oc <command>

C:\> path

C:\> oc <command>

$ echo $PATH

$ oc <command>


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.7.18. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

Example output

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

$ oc whoami

system:admin

$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0

OpenShift Container Platform 46 Installing on AWS

238

The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
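The watching method described in this note can be sketched as a filter over oc get csr output. The helper name below is ours, and the requestor string is the node-bootstrapper service account shown in the example output in this section; this is an illustrative sketch, not the supported approval mechanism:

```shell
# filter_bootstrapper_csrs: read "name requestor" pairs on stdin and print the
# names of CSRs requested by the node-bootstrapper service account.
filter_bootstrapper_csrs() {
  awk '$2 == "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper" { print $1 }'
}

# Intended use in a periodic loop (not run here; requires a configured oc):
#   oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}' \
#     | filter_bootstrapper_csrs \
#     | xargs --no-run-if-empty oc adm certificate approve
```

A production method should additionally confirm the identity of the node before approving, as the note requires.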

To approve them individually, run the following command for each valid CSR:

$ oc get csr

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

$ oc adm certificate approve <csr_name>


<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

Example output

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

Example output

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

$ oc get csr

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

$ oc adm certificate approve <csr_name>

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

$ oc get nodes

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0
master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0


NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
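When scripting this verification, you can count the nodes that are not yet Ready from the oc get nodes output. The helper name below is our own, not an oc subcommand; this is a sketch for illustration:

```shell
# count_not_ready: read `oc get nodes` output on stdin and print how many
# nodes report a STATUS other than Ready (the header line is skipped).
count_not_ready() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'
}

# Intended use (requires a configured oc):
#   oc get nodes | count_not_ready
```

A wrapper loop can poll until the count reaches zero before proceeding to the next installation step.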

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.7.19. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1. Watch the cluster components come online:

Example output

$ watch -n5 oc get clusteroperators

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0     True        False         False      3h56m
cloud-credential                           4.6.0     True        False         False      29h
cluster-autoscaler                         4.6.0     True        False         False      29h
config-operator                            4.6.0     True        False         False      6h39m
console                                    4.6.0     True        False         False      3h59m
csi-snapshot-controller                    4.6.0     True        False         False      4h12m
dns                                        4.6.0     True        False         False      4h15m
etcd                                       4.6.0     True        False         False      29h
image-registry                             4.6.0     True        False         False      3h59m
ingress                                    4.6.0     True        False         False      4h30m
insights                                   4.6.0     True        False         False      29h
kube-apiserver                             4.6.0     True        False         False      29h
kube-controller-manager                    4.6.0     True        False         False      29h
kube-scheduler                             4.6.0     True        False         False      29h
kube-storage-version-migrator              4.6.0     True        False         False      4h2m
machine-api                                4.6.0     True        False         False      29h
machine-approver                           4.6.0     True        False         False      6h34m
machine-config                             4.6.0     True        False         False      3h56m
marketplace                                4.6.0     True        False         False      4h2m
monitoring                                 4.6.0     True        False         False      6h31m
network                                    4.6.0     True        False         False      29h
node-tuning                                4.6.0     True        False         False      4h30m
openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m
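To spot the Operators that are not yet available without watching the whole table, you can filter the output. The helper below is an illustrative sketch of ours, not an oc subcommand:

```shell
# list_unavailable: read `oc get clusteroperators` output on stdin and print
# the names of Operators whose AVAILABLE column is not True.
list_unavailable() {
  awk 'NR > 1 && $3 != "True" { print $1 }'
}

# Intended use (requires a configured oc):
#   oc get clusteroperators | list_unavailable
```

An empty result means all listed Operators report AVAILABLE=True and you can move on to the next step.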


2. Configure the Operators that are not available.

1.7.19.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.

1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure.

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.

2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:


$ oc edit configs.imageregistry.operator.openshift.io/cluster

OpenShift Container Platform 46 Installing on AWS

242

Example configuration

WARNING

To secure your registry images in AWS, block public access to the S3 bucket.
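Step 1 of the procedure above can be sketched as follows. The rule ID is our own label, and the final aws command is shown as a comment because it needs a real bucket name; treat this as an illustrative sketch:

```shell
# Write a lifecycle configuration that aborts incomplete multipart uploads
# that are one day old.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
EOF

# Apply it to the registry bucket (replace <bucket-name>):
#   aws s3api put-bucket-lifecycle-configuration \
#     --bucket <bucket-name> --lifecycle-configuration file://lifecycle.json
```

An empty Prefix filter applies the rule to every object key in the bucket.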

1.7.19.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters.

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again.

1.7.20. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster.

storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found


Procedure

1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

Delete the stack by using the AWS CLI:

<name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console.

1.7.21. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc).

You installed the jq package.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1. Determine the routes to create.

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output


$ aws cloudformation delete-stack --stack-name <name>

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>


2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

Example output

3. Locate the hosted zone ID for the load balancer:

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID.

4. Obtain the public hosted zone ID for your cluster's domain:

For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

5. Add the alias records to your private zone:

$ oc -n openshift-ingress get service router-default

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                         PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab328.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name \
            --dns-name "<domain_name>" \
            --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
            --output text

/hostedzone/Z3URY6TWQ91KVV

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'


For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

For <public_hosted_zone_id>, specify the public hosted zone for your domain.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.


$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>.",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'


For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

1.7.22. Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI.

Procedure

From the directory that contains the installation program, complete the cluster installation:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

1.7.23. Logging in to the cluster by using the web console

$ openshift-install --dir=<installation_directory> wait-for install-complete

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.7.24. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.7.25. Next steps

Validating an installation

Customize your cluster

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


If necessary, you can opt out of remote health reporting.

1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content.

IMPORTANT

While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires Internet access to use the AWS APIs.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.8.1. Prerequisites

You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform.

IMPORTANT

Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.


You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.8.2. About installations in restricted networks

In OpenShift Container Platform 4.6, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' IAM service, require Internet access, so you might still require Internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions.
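For illustration, populating such a mirror is typically driven by the oc adm release mirror command. The registry host, repository, and release version below are placeholder assumptions, and the command is printed rather than executed; see the mirroring documentation for the full, supported procedure:

```shell
# Placeholder values (assumptions); substitute your mirror registry details.
LOCAL_REGISTRY="mirror-registry.example.com:5000"
LOCAL_REPOSITORY="ocp4/openshift4"
OCP_RELEASE="4.6.1"
ARCHITECTURE="x86_64"

# Print the mirroring command; remove 'echo' to run it for real with a valid
# pull secret file.
echo oc adm release mirror -a pull-secret.json \
  --from=quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE}-${ARCHITECTURE} \
  --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
  --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
```

Running the real command also prints the imageContentSources data that the prerequisites above refer to.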

IMPORTANT

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

1.8.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

The ClusterVersion status includes an Unable to retrieve available updates error.

By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

1.8.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

184 Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node

Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.8.4.1. Cluster machines

CHAPTER 1. INSTALLING ON AWS


You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

Table 1.19. Instance types for machines

Instance type   Bootstrap   Control plane   Compute

i3.large        x
m4.large                                    x
m4.xlarge                   x               x
m4.2xlarge                  x               x
m4.4xlarge                  x               x
m4.8xlarge                  x               x
m4.10xlarge                 x               x
m4.16xlarge                 x               x
m5.large                                    x
m5.xlarge                   x               x
m5.2xlarge                  x               x
m5.4xlarge                  x               x
m5.8xlarge                  x               x
m5.10xlarge                 x               x
m5.16xlarge                 x               x
c4.large                                    x
c4.xlarge                                   x
c4.2xlarge                  x               x
c4.4xlarge                  x               x
c4.8xlarge                  x               x
r4.large                                    x
r4.xlarge                   x               x
r4.2xlarge                  x               x
r4.4xlarge                  x               x
r4.8xlarge                  x               x
r4.16xlarge                 x               x

You might be able to use other instance types that meet the specifications of these instance types.

1.8.4.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.

1.8.4.3. Other infrastructure components

A VPC

DNS entries

Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups


IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
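As an illustrative sketch of how such an endpoint can be declared in CloudFormation (the VpcId and PrivateRouteTableId parameter names are assumptions, not taken from the provided templates):

```yaml
# Illustrative CloudFormation fragment: a gateway-type VPC endpoint for S3
# attached to the cluster's route tables. Parameter names are assumed.
Resources:
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
      VpcEndpointType: Gateway
      VpcId: !Ref VpcId                 # assumed template parameter
      RouteTableIds:
        - !Ref PrivateRouteTableId      # assumed template parameter
```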

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

    Port           Reason
    80             Inbound HTTP traffic
    443            Inbound HTTPS traffic
    22             Inbound SSH traffic
    1024 - 65535   Inbound ephemeral traffic
    0 - 65535      Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
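A load balancer and listener pair of this kind can be sketched in CloudFormation as follows; the resource and parameter names here are illustrative assumptions, not the exact provided templates:

```yaml
# Illustrative CloudFormation fragment: an internal network load balancer
# with a TCP listener on port 6443 for the Kubernetes API.
Resources:
  IntApiLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Scheme: internal
      Type: network
      Subnets: !Ref PrivateSubnetIds          # assumed template parameter
  IntApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref IntApiLoadBalancer
      Port: 6443
      Protocol: TCP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref IntApiTargetGroup   # assumed target group resource
```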

Component: DNS
AWS type: AWS::Route53::HostedZone
Description: The hosted zone for your internal DNS.

Component: etcd record sets
AWS type: AWS::Route53::RecordSet
Description: The registration records for etcd for your control plane machines.

Component: Public load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your public subnets.

Component: External API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the external API server.

Component: External listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the external load balancer.

Component: External target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the external load balancer.

Component: Private load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your private subnets.

Component: Internal API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the internal API server.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 22623 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.


Security groups

The control plane and worker machines require access to the following ports:

Group                   Type                      IP protocol   Port range

MasterSecurityGroup     AWS::EC2::SecurityGroup   icmp          0
                                                  tcp           22
                                                  tcp           6443
                                                  tcp           22623

WorkerSecurityGroup     AWS::EC2::SecurityGroup   icmp          0
                                                  tcp           22

BootstrapSecurityGroup  AWS::EC2::SecurityGroup   tcp           22
                                                  tcp           19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                       Description                                                   IP protocol   Port range

MasterIngressEtcd                   etcd                                                          tcp           2379 - 2380
MasterIngressVxlan                  Vxlan packets                                                 udp           4789
MasterIngressWorkerVxlan            Vxlan packets                                                 udp           4789
MasterIngressInternal               Internal cluster communication and Kubernetes proxy metrics   tcp           9000 - 9999
MasterIngressWorkerInternal         Internal cluster communication                                tcp           9000 - 9999
MasterIngressKube                   Kubernetes kubelet, scheduler, and controller manager         tcp           10250 - 10259
MasterIngressWorkerKube             Kubernetes kubelet, scheduler, and controller manager         tcp           10250 - 10259
MasterIngressIngressServices        Kubernetes Ingress services                                   tcp           30000 - 32767
MasterIngressWorkerIngressServices  Kubernetes Ingress services                                   tcp           30000 - 32767
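Each of these Ingress groups corresponds to one security group rule. As an illustrative CloudFormation sketch for a single rule (resource and parameter names are assumptions, not copied from the provided templates):

```yaml
# Illustrative fragment: one ingress rule opening the etcd port range
# to traffic from the master security group itself.
Resources:
  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref MasterSecurityGroupId       # assumed parameter
      Description: etcd
      IpProtocol: tcp
      FromPort: 2379
      ToPort: 2380
      SourceSecurityGroupId: !Ref MasterSecurityGroupId
```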

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                       Description                                             IP protocol   Port range

WorkerIngressVxlan                  Vxlan packets                                           udp           4789
WorkerIngressWorkerVxlan            Vxlan packets                                           udp           4789
WorkerIngressInternal               Internal cluster communication                          tcp           9000 - 9999
WorkerIngressWorkerInternal         Internal cluster communication                          tcp           9000 - 9999
WorkerIngressKube                   Kubernetes kubelet, scheduler, and controller manager   tcp           10250
WorkerIngressWorkerKube             Kubernetes kubelet, scheduler, and controller manager   tcp           10250
WorkerIngressIngressServices        Kubernetes Ingress services                             tcp           30000 - 32767
WorkerIngressWorkerIngressServices  Kubernetes Ingress services                             tcp           30000 - 32767

Roles and instance profiles


You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role       Effect  Action                   Resource

Master     Allow   ec2:*                    *
           Allow   elasticloadbalancing:*   *
           Allow   iam:PassRole             *
           Allow   s3:GetObject             *

Worker     Allow   ec2:Describe*            *

Bootstrap  Allow   ec2:Describe*            *
           Allow   ec2:AttachVolume         *
           Allow   ec2:DetachVolume         *
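As a sketch of how a role and its instance profile fit together in CloudFormation (the resource names and the worker policy shown are illustrative assumptions, not the exact provided templates):

```yaml
# Illustrative fragment: a worker role that EC2 instances can assume,
# with the ec2:Describe* permission and an attached instance profile.
Resources:
  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: worker-policy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: "ec2:Describe*"
                Resource: "*"
  WorkerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref WorkerIamRole
```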

1.8.4.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.

Example 1.29. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances
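Actions like these are granted to the IAM user through a policy document. A truncated sketch of such a policy, showing only two of the actions above, looks like the following:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "tag:TagResources",
        "ec2:RunInstances"
      ],
      "Resource": "*"
    }
  ]
}
```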

Example 1.30. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.31. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.32. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.33. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.34. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.35. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.36. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.37. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.


Example 1.38. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
iam:GetUserPolicy
iam:ListAccessKeys
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.39. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas

1.8.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.




You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

   $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

   1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

   Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

   $ eval "$(ssh-agent -s)"

   Example output

   Agent pid 31874

3. Add your SSH private key to the ssh-agent:

   $ ssh-add <path>/<file_name> 1

   1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

   Example output

   Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.8.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.8.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

   $ mkdir $HOME/clusterconfig

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

   $ openshift-install create manifests --dir $HOME/clusterconfig
   ? SSH Public Key ...
   $ ls $HOME/clusterconfig/openshift/
   99_kubeadmin-password-secret.yaml
   99_openshift-cluster-api_master-machines-0.yaml
   99_openshift-cluster-api_master-machines-1.yaml
   99_openshift-cluster-api_master-machines-2.yaml
   ...

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory.

   apiVersion: machineconfiguration.openshift.io/v1
   kind: MachineConfig
   metadata:
     labels:
       machineconfiguration.openshift.io/role: worker
     name: 98-var-partition
   spec:
     config:
       ignition:
         version: 3.1.0
       storage:
         disks:
         - device: /dev/<device_name> 1
           partitions:
           - sizeMiB: <partition_size>
             startMiB: <partition_start_offset> 2
             label: var
         filesystems:
         - path: /var
           device: /dev/disk/by-partlabel/var
           format: xfs
       systemd:
         units:
         - name: var.mount
           enabled: true
           contents: |
             [Unit]
             Before=local-fs.target
             [Mount]
             Where=/var
             What=/dev/disk/by-partlabel/var
             [Install]
             WantedBy=local-fs.target

   1 The storage device name of the disk that you want to partition.

   2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

   $ openshift-install create ignition-configs --dir $HOME/clusterconfig
   $ ls $HOME/clusterconfig/
   auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.



1.8.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

$ openshift-install create install-config --dir=<installation_directory> 1


NOTE

The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network:

a. Update the pullSecret value to contain the authentication information for your registry:

For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
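The <credentials> value is the base64 encoding of the user:password pair. As a sketch with placeholder values (myuser and mypassword are illustrative assumptions, not values from this procedure):

```shell
# Encode a mirror-registry user:password pair into the base64 form
# that the pullSecret entry expects (placeholder values shown).
printf 'myuser:mypassword' | base64
# prints bXl1c2VyOm15cGFzc3dvcmQ=
```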

b. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.

c. Add the image content resources:

Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network.

pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

imageContentSources:
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: registry.svc.ci.openshift.org/ocp/release


d. Optional: Set the publishing strategy to Internal:

By setting this option, you create an internal Ingress Controller and a private load balancer.

3. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.

1.8.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

publish: Internal

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3


1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.8.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----



IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

$ openshift-install create manifests --dir=<installation_directory>

For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

Example output

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

2. Remove the Kubernetes manifest files that define the control plane machines:

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

OpenShift Container Platform 4.6 Installing on AWS


b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone:
    id: mycluster-100419-private-zone
  publicZone:
    id: example.openshift.com
status: {}

Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

$ openshift-install create ignition-configs --dir=<installation_directory>

For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

1.8.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites


You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

You generated the Ignition config files for your cluster.

You installed the jq package.

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

The output of this command is your cluster name and a random string.
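If jq is not available on your workstation, the same extraction can be mirrored in plain Python. This is an illustrative sketch, not part of the installation procedure; the sample metadata below is hypothetical and only mirrors the shape of metadata.json.

```python
import json

def extract_infra_id(metadata_json: str) -> str:
    # The infrastructure name is the infraID field:
    # the cluster name plus a random string.
    return json.loads(metadata_json)["infraID"]

# Hypothetical sample mirroring <installation_directory>/metadata.json
sample = '{"clusterName": "mycluster", "infraID": "mycluster-vw9j6"}'
print(extract_infra_id(sample))  # mycluster-vw9j6
```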

1.8.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

$ jq -r .infraID <installation_directory>/metadata.json

openshift-vw9j6



The CIDR block for the VPC.

Specify a CIDR block in the format x.x.x.x/16-24.

The number of availability zones to deploy the VPC in.

Specify an integer between 1 and 3.

The size of each subnet in each availability zone.

Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
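As a sanity check on those bounds, Python's ipaddress module can reproduce the arithmetic that the template's Fn::Cidr function performs: SubnetBits counts host bits, so the resulting prefix length is 32 minus SubnetBits. This sketch is illustrative only and is not part of the procedure.

```python
import ipaddress

def subnet_prefix(subnet_bits: int) -> int:
    # Fn::Cidr treats SubnetBits as host bits, so the prefix length
    # is 32 - SubnetBits: 5 -> /27 subnets, 13 -> /19 subnets.
    return 32 - subnet_bits

# First of the subnet blocks carved from the default 10.0.0.0/16
# VPC CIDR when SubnetBits is 13
vpc = ipaddress.ip_network("10.0.0.0/16")
first_subnet = next(vpc.subnets(new_prefix=subnet_prefix(13)))
print(first_subnet)  # 10.0.0.0/19
```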

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

[
  {
    "ParameterKey": "VpcCidr",
    "ParameterValue": "10.0.0.0/16"
  },
  {
    "ParameterKey": "AvailabilityZoneCount",
    "ParameterValue": "1"
  },
  {
    "ParameterKey": "SubnetBits",
    "ParameterValue": "12"
  }
]

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json


4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

VpcId

The ID of your VPC.

PublicSubnetIds

The IDs of the new public subnets.

PrivateSubnetIds

The IDs of the new private subnets.
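The PublicSubnetIds and PrivateSubnetIds outputs arrive as comma-joined strings (the template builds them with !Join), so you typically split them back into individual IDs before reusing them in later parameter files. A minimal sketch with hypothetical subnet IDs:

```python
def split_subnet_ids(joined: str) -> list:
    # CloudFormation's !Join [",", ...] output is a single string;
    # drop empties in case a conditional subnet was omitted.
    return [s for s in joined.split(",") if s]

# Hypothetical output value from describe-stacks
ids = split_subnet_ids("subnet-0aaa,subnet-0bbb,subnet-0ccc")
print(ids)  # ['subnet-0aaa', 'subnet-0bbb', 'subnet-0ccc']
```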

1.8.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.40. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number


Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select


      - 2
      - !GetAZs
        Ref: "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:


    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId: !GetAtt
      - EIP
      - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId: !GetAtt
      - EIP2
      - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc


  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId: !GetAtt
      - EIP3
      - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint


1.8.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).

    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [
        ",",
        [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
      ]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [
        ",",
        [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
      ]


NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
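Note that Route 53 reports the zone ID embedded in a path such as /hostedzone/Z21IXYZABCZ2A4, while the CloudFormation HostedZoneId parameter expects the bare ID. A small illustrative sketch of that extraction, not part of the procedure:

```python
def bare_zone_id(hosted_zone_id: str) -> str:
    # list-hosted-zones-by-name reports IDs as '/hostedzone/<id>';
    # keep only the final path component. A bare ID passes through unchanged.
    return hosted_zone_id.rsplit("/", 1)[-1]

print(bare_zone_id("/hostedzone/Z21IXYZABCZ2A4"))  # Z21IXYZABCZ2A4
```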

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain>

mycluster.example.com.  False   100
HOSTEDZONES     65F8F38E-2268-B835-E15C-AB55336FCBFA    /hostedzone/Z21IXYZABCZ2A4      mycluster.example.com.  10

[
  {
    "ParameterKey": "ClusterName",
    "ParameterValue": "mycluster"
  },
  {
    "ParameterKey": "InfrastructureName",
    "ParameterValue": "mycluster-<random_string>"
  },
  {
    "ParameterKey": "HostedZoneId",
    "ParameterValue": "<random_string>"
  },
  {
    "ParameterKey": "HostedZoneName",



A short, representative cluster name to use for host names, etc.

Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The Route 53 public zone ID to register the targets with.

Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

The Route 53 zone to register the targets with.

Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

The public subnets that you created for your VPC.

Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

The private subnets that you created for your VPC.

Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster.

Specify the VpcId value from the output of the CloudFormation template for the VPC.
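One of the values above is easy to get subtly wrong: the HostedZoneName parameter must omit the trailing period that the AWS console displays. A hedged sketch of that normalization, for illustration only:

```python
def hosted_zone_name(console_value: str) -> str:
    # The AWS console shows zones with a trailing period (example.com.);
    # the HostedZoneName parameter must omit it.
    return console_value.rstrip(".")

print(hosted_zone_name("example.com."))  # example.com
```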

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

    "ParameterValue": "example.com"
  },
  {
    "ParameterKey": "PublicSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "PrivateSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "VpcId",
    "ParameterValue": "vpc-<random_string>"
  }
]



IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

PrivateHostedZoneId

Hosted zone ID for the private DNS.

ExternalApiLoadBalancerName

Full name of the external API load balancer.

$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183

$ aws cloudformation describe-stacks --stack-name <name>


InternalApiLoadBalancerName

Full name of the internal API load balancer.

ApiServerDnsName

Full host name of the API server.

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn

ARN of the external API target group.

InternalApiTargetGroupArn

ARN of the internal API target group.

InternalServiceTargetGroupArn

ARN of the internal service target group.

1.8.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.41. CloudFormation template for the network and load balancers

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.


    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:


  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",


            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref ExternalApiTargetGroup
      LoadBalancerArn: !Ref ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: "deregistration_delay.timeout_seconds"
        Value: "60"

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup
      LoadBalancerArn: !Ref IntApiElb


      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: "deregistration_delay.timeout_seconds"
        Value: "60"

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalServiceTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: "deregistration_delay.timeout_seconds"
        Value: "60"

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:


        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - RegisterTargetLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=


          [{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags",
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
              ]
            Resource: "*"

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - RegisterSubnetTagsLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')


            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup


IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Additional resources

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.8.10. Creating security group and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

[
  {
    "ParameterKey": "InfrastructureName",
    "ParameterValue": "mycluster-<random_string>"
  },
  {
    "ParameterKey": "VpcCidr",
    "ParameterValue": "10.0.0.0/16"
  },



The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The CIDR block for the VPC.

Specify the CIDR block parameter that you used for the VPC that you defined, in the form x.x.x.x/16-24.

The private subnets that you created for your VPC.

Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster.

Specify the VpcId value from the output of the CloudFormation template for the VPC.

2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line.

<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

  {
    "ParameterKey": "PrivateSubnets",
    "ParameterValue": "subnet-<random_string>"
  },
  {
    "ParameterKey": "VpcId",
    "ParameterValue": "vpc-<random_string>"
  }
]

$ aws cloudformation create-stack --stack-name ltnamegt 1 --template-body filelttemplategtyaml 2 --parameters fileltparametersgtjson 3 --capabilities CAPABILITY_NAMED_IAM 4

OpenShift Container Platform 46 Installing on AWS

296

3

4

ltparametersgt is the relative path to and name of the CloudFormation parameters JSONfile

You must explicitly declare the CAPABILITY_NAMED_IAM capability because theprovided template creates some AWSIAMRole and AWSIAMInstanceProfileresources

Example output

4 Confirm that the template components exist

After the StackStatus displays CREATE_COMPLETE the output displays values for thefollowing parameters You must provide these parameter values to the other CloudFormationtemplates that you run to create your cluster

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile

18101 CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need foryour OpenShift Container Platform cluster

Example 142 CloudFormation template for security objects

arnawscloudformationus-east-1269333783861stackcluster-sec03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name ltnamegt
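The `describe-stacks` output is JSON with the outputs nested under `Stacks[0].Outputs` as `OutputKey`/`OutputValue` pairs. A minimal sketch of collecting them into a dictionary once the stack is complete — the stack name and values in the sample payload are made up:

```python
import json

# Fabricated sample of an `aws cloudformation describe-stacks` response,
# trimmed to the fields this sketch actually reads.
sample = json.loads("""
{
  "Stacks": [
    {
      "StackName": "cluster-sec",
      "StackStatus": "CREATE_COMPLETE",
      "Outputs": [
        {"OutputKey": "MasterSecurityGroupId", "OutputValue": "sg-0aaa1111bbb2222cc"},
        {"OutputKey": "WorkerSecurityGroupId", "OutputValue": "sg-0ddd3333eee4444ff"}
      ]
    }
  ]
}
""")

def stack_outputs(describe_stacks_json):
    """Return {OutputKey: OutputValue} for the first stack, or {} until it is CREATE_COMPLETE."""
    stack = describe_stacks_json["Stacks"][0]
    if stack["StackStatus"] != "CREATE_COMPLETE":
        return {}
    return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

print(stack_outputs(sample)["MasterSecurityGroupId"])
```

The resulting dictionary gives you the MasterSecurityGroupId, WorkerSecurityGroupId, and instance profile values that the later templates ask for.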

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp


  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - Ref: MasterIamRole

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"

      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - Ref: WorkerIamRole

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

1.8.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.20. RHCOS AMIs

AWS zone        AWS AMI

af-south-1      ami-09921c9c1c36e695c

ap-east-1       ami-01ee8446e9af6b197

ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa

eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1
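When scripting machine creation, it can help to keep the table above as a lookup keyed by region. A minimal sketch with a few rows copied from the table (extend it with the remaining rows you need; a missing key signals a region where you must import your own AMI):

```python
# A few rows of Table 1.20 as a lookup; AMI IDs copied from the table above.
RHCOS_AMI_BY_ZONE = {
    "us-east-1": "ami-04a16d506e5b0e246",
    "us-east-2": "ami-0a1f868ad58ea59a7",
    "us-west-2": "ami-0dd9008abadc519f1",
    "eu-west-1": "ami-01b89df58b5d4d5fa",
}

def rhcos_ami(region):
    """Return the RHCOS AMI for a region, or None if no AMI is published there."""
    return RHCOS_AMI_BY_ZONE.get(region)

print(rhcos_ami("us-east-2"))
```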


1.8.12. Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
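Whichever location serves the file, the bootstrap machine itself boots with only a small Ignition "pointer" config that tells it where to fetch the real bootstrap.ign; this is what the bootstrap CloudFormation template renders into the instance UserData. A sketch of that pointer, with a placeholder S3 location:

```python
import json

# Placeholder location; substitute your BootstrapIgnitionLocation value.
s3_loc = "s3://<cluster-name>-infra/bootstrap.ign"

# Shape taken from the bootstrap template's UserData:
# ignition.config.replace.source points at the served bootstrap.ign.
pointer = {
    "ignition": {
        "config": {"replace": {"source": s3_loc}},
        "version": "3.1.0",
    }
}

print(json.dumps(pointer))
```

Because the pointer is fetched at first boot, the served location only has to be reachable from the bootstrap machine, not from your workstation.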

a. Create the bucket:

$ aws s3 mb s3://<cluster-name>-infra 1

1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

b. Upload the bootstrap.ign Ignition config file to the bucket:

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

$ aws s3 ls s3://<cluster-name>-infra/

Example output

2019-04-03 16:15:16     314878 bootstrap.ign

2. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },
  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4 Specify a valid AWS::EC2::Image::Id value.

5 CIDR block to allow SSH access to the bootstrap node.

6 Specify a CIDR block in the format x.x.x.x/16-24.

7 The public subnet that is associated with your VPC to launch the bootstrap node into.

8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9 The master security group ID (for registering temporary rules).

10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11 The VPC that the created resources will belong to.

12 Specify the VpcId value from the output of the CloudFormation template for the VPC.

13 Location to fetch bootstrap Ignition config file from.

14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15 Whether or not to register a network load balancer (NLB).

16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17 The ARN for NLB IP target registration lambda group.

18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19 The ARN for external API load balancer target group.

20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21 The ARN for internal API load balancer target group.

22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23 The ARN for internal service load balancer target group.

24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
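It can be worth checking the AllowedBootstrapSshCidr value locally before launching the stack, since CloudFormation rejects it against the template's AllowedPattern. A sketch of that check, with the regex reconstructed from the x.x.x.x/0-32 pattern in the bootstrap template below (treat it as an assumption, not a substitute for CloudFormation's own validation):

```python
import re

# Regex reconstructed from the bootstrap template's AllowedBootstrapSshCidr
# AllowedPattern: four 0-255 octets followed by a /0-32 prefix length.
CIDR_0_32 = re.compile(
    r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}"
    r"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
    r"(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$"
)

def valid_bootstrap_ssh_cidr(value):
    """Return True if value matches the template's CIDR pattern."""
    return CIDR_0_32.match(value) is not None

print(valid_bootstrap_ssh_cidr("0.0.0.0/0"))  # the default shown above
```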

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

BootstrapInstanceId
    The bootstrap Instance ID

BootstrapPublicIp
    The bootstrap node public IP address

BootstrapPrivateIp
    The bootstrap node private IP address

1.8.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.43. CloudFormation template for the bootstrap machine

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: "/"
      Roles:
      - Ref: BootstrapIamRole

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - S3Loc: !Ref BootstrapIgnitionLocation

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:

      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.8.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.

4 Specify an AWS::EC2::Image::Id value.

5 Whether or not to perform DNS etcd registration.

OpenShift Container Platform 46 Installing on AWS

318

6 Specify yes or no. If you specify yes, you must provide hosted zone information.

7 The Route 53 private zone ID to register the etcd targets with.

8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.

9 The Route 53 zone to register the targets with.

10 Specify <cluster_name>.<domain_name>, where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

11 13 15 A subnet, preferably private, to launch the control plane machines on.

12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

17 The master security group ID to associate with master nodes.

18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

19 The location to fetch control plane Ignition config file from.

20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.

21 The base64 encoded certificate authority string to use.

22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

23 The IAM profile to associate with master nodes.

24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

25 The type of AWS instance to use for the control plane machines.

26 Allowed values:

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

27 Whether or not to register a network load balancer (NLB).

28 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

29 The ARN for NLB IP target registration lambda group.

30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

31 The ARN for external API load balancer target group.

32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

33 The ARN for internal API load balancer target group.

34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

35 The ARN for internal service load balancer target group.

36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
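With the parameter file assembled, a quick well-formedness check can catch quoting mistakes before you launch the stack. A minimal sketch, assuming the file was saved as params.json; the file name and the sample entries are assumptions, not from the procedure above:

```shell
# Write a small two-entry sample parameter file (placeholder values).
cat > params.json <<'EOF'
[
  {"ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-abc12"},
  {"ParameterKey": "MasterInstanceType", "ParameterValue": "m5.xlarge"}
]
EOF

# Fail fast if the file is not well-formed JSON.
python3 -m json.tool params.json > /dev/null && echo "params.json: valid JSON"

# List the parameter keys so a missing or misspelled key is easy to spot.
grep -o '"ParameterKey": "[^"]*"' params.json
```

The same two checks work unchanged against the full 36-callout parameter file.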


2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

1.8.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.44. CloudFormation template for control plane machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"

    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["_etcd-server-ssl._tcp.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join ["", ["0 10 2380 ", !Join ["", ["etcd-0.", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join ["", ["etcd-1.", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join ["", ["etcd-2.", !Ref PrivateHostedZoneName]]]]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["etcd-0.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["etcd-1.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join ["", ["etcd-2.", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:
      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]

1.8.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.

4 Specify an AWS::EC2::Image::Id value.

5 A subnet, preferably private, to launch the worker nodes on.

6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

7 The worker security group ID to associate with worker nodes.

8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

9 The location to fetch the bootstrap Ignition config file from.

10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.

11 Base64 encoded certificate authority string to use.

12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

13 The IAM profile to associate with worker nodes.

14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

15 The type of AWS instance to use for the compute machines.

16 Allowed values:

m4.large

m4.xlarge

m4.2xlarge

m4.4xlarge

m4.8xlarge

m4.10xlarge

m4.16xlarge

m5.large

m5.xlarge

m5.2xlarge

m5.4xlarge

m5.8xlarge

m5.10xlarge

m5.16xlarge

c4.large

c4.xlarge

c4.2xlarge

c4.4xlarge

c4.8xlarge

r4.large

r4.xlarge

r4.2xlarge

r4.4xlarge

r4.8xlarge

r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

NOTE

The CloudFormation template creates a stack that represents one worker node.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>


6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
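Because only the stack name changes between workers, step 6 is easy to script. A minimal sketch; the stack names and file names are assumptions, and the leading echo keeps this a dry run (remove it to actually invoke the AWS CLI):

```shell
# Dry-run loop: print one create-stack command per worker node.
# Remove the leading "echo" to launch the stacks for real.
for i in 0 1 2; do
  echo aws cloudformation create-stack \
    --stack-name "cluster-worker-${i}" \
    --template-body file://worker-template.yaml \
    --parameters file://worker-params.json
done
```

Each iteration reuses the same template and parameter file, which matches the requirement that every worker node gets its own stack.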

1.8.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.45. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"

    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - SOURCE: !Ref IgnitionLocation
          CA_BUNDLE: !Ref CertificateAuthorities
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp

1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

You created the worker nodes.

Procedure

1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

$ openshift-install wait-for bootstrap-complete --dir=<installation_directory> 1
    --log-level=info 2

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2 To view different installation details, specify warn, debug, or error instead of info.

Example output

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s

If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

1.8.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The


kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.8.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

$ oc get nodes

Example output

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.
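Selecting only the unapproved requests can be scripted before approving them in bulk. A minimal sketch that filters sample oc get csr output down to names whose condition is Pending; the sample rows are hypothetical, and piping the resulting names to oc adm certificate approve would approve them:

```shell
# Hypothetical "oc get csr" output used as sample input.
cat > csr-sample.txt <<'EOF'
NAME         AGE   REQUESTOR                                                                   CONDITION
csr-8b2br    15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-approved 16m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
EOF

# Print the name of every CSR whose final column is still Pending.
awk 'NR > 1 && $NF == "Pending" {print $1}' csr-sample.txt
```

On a live cluster the same filter would read from oc get csr directly instead of a sample file.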

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0
master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.8.18. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1. Watch the cluster components come online:

$ watch -n5 oc get clusteroperators

Example output

NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCEauthentication 460 True False False 3h56mcloud-credential 460 True False False 29hcluster-autoscaler 460 True False False 29hconfig-operator 460 True False False 6h39mconsole 460 True False False 3h59mcsi-snapshot-controller 460 True False False 4h12mdns 460 True False False 4h15metcd 460 True False False 29himage-registry 460 True False False 3h59mingress 460 True False False 4h30minsights 460 True False False 29hkube-apiserver 460 True False False 29hkube-controller-manager 460 True False False 29hkube-scheduler 460 True False False 29hkube-storage-version-migrator 460 True False False 4h2mmachine-api 460 True False False 29hmachine-approver 460 True False False 6h34mmachine-config 460 True False False 3h56mmarketplace 460 True False False 4h2mmonitoring 460 True False False 6h31mnetwork 460 True False False 29hnode-tuning 460 True False False 4h30m

OpenShift Container Platform 4.6 Installing on AWS


2. Configure the Operators that are not available.

1.8.18.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
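The documentation does not prescribe an exact policy document for this step. One plausible lifecycle configuration, applying to the whole bucket (the rule ID and the empty prefix filter are illustrative choices, not required values), could be applied with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration file://lifecycle.json`:

```json
{
  "Rules": [
    {
      "ID": "cleanup-incomplete-multipart-registry-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
```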

2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

Example configuration


$ oc edit configs.imageregistry.operator.openshift.io/cluster

storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>


WARNING

To secure your registry images in AWS, block public access to the S3 bucket.
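One way to block public access, sketched with the AWS CLI's S3 Block Public Access feature (the bucket name is a placeholder; verify the option names against your installed AWS CLI version):

```shell
$ aws s3api put-public-access-block \
    --bucket <bucket-name> \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```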

1.8.18.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again

1.8.19. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster

Procedure

1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:


$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found


Delete the stack by using the AWS CLI:

1   <name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console

1.8.20. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc)

You installed the jq package

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1. Determine the routes to create:

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output

2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

$ aws cloudformation delete-stack --stack-name <name> 1

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>

$ oc -n openshift-ingress get service router-default



Example output

3. Locate the hosted zone ID for the load balancer:

1   For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID

4. Obtain the public hosted zone ID for your cluster's domain:

1 2   For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

5. Add the alias records to your private zone:

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab328.us-east-2.elb.amazonaws.com      80:31499/TCP,443:30693/TCP   5m

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1

Z3AADJGX6KTTL2
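The jq filter above picks the one load balancer description whose DNSName matches the Ingress address and returns its CanonicalHostedZoneNameID. The same selection in Python, over a hypothetical excerpt of `aws elb describe-load-balancers` output:

```python
import json

# Hypothetical excerpt of `aws elb describe-load-balancers` output.
response = json.loads("""
{
  "LoadBalancerDescriptions": [
    {"DNSName": "other.us-east-2.elb.amazonaws.com",
     "CanonicalHostedZoneNameID": "Z0000000000000"},
    {"DNSName": "ab328.us-east-2.elb.amazonaws.com",
     "CanonicalHostedZoneNameID": "Z3AADJGX6KTTL2"}
  ]
}
""")

external_ip = "ab328.us-east-2.elb.amazonaws.com"

# Equivalent of:
#   .LoadBalancerDescriptions[]
#   | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID
zone_id = next(lb["CanonicalHostedZoneNameID"]
               for lb in response["LoadBalancerDescriptions"]
               if lb["DNSName"] == external_ip)
print(zone_id)
```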

$ aws route53 list-hosted-zones-by-name \
            --dns-name "<domain_name>" \ 1
            --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \ 2
            --output text

/hostedzone/Z3URY6TWQ91KVV

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>", 3


1   For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

2   For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

3   For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

4   For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

1   For <public_hosted_zone_id>, specify the public hosted zone for your domain.

2   For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

3   For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

4   For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

1.8.21. Completing an AWS installation on user-provisioned infrastructure

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>", 3
>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'
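The change batches for steps 5 and 6 differ only in the hosted zone they target; the record set itself is identical. A small sketch that assembles that payload (all values passed in below are hypothetical placeholders):

```python
import json

def alias_record_change_batch(cluster_domain, lb_hosted_zone_id, external_ip):
    # \052 is the octal escape for '*' that Route 53 uses to express the
    # wildcard *.apps record name.
    return {
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "\\052.apps." + cluster_domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": lb_hosted_zone_id,
                    # The trailing period on the DNS name is required.
                    "DNSName": external_ip + ".",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = alias_record_change_batch("mycluster.example.com",
                                  "Z3AADJGX6KTTL2",
                                  "ab328.us-east-2.elb.amazonaws.com")
payload = json.dumps(batch)
```

The serialized `payload` is what the `--change-batch` argument carries in both commands.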



After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI

Procedure

1. From the directory that contains the installation program, complete the cluster installation:

1   For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

2 Register your cluster on the Cluster registration page

1.8.22. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s

OpenShift Container Platform 46 Installing on AWS

346

You have access to the installation host

You completed a cluster installation and all cluster Operators are available

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.8.23. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.8.24. Next steps

Validate an installation

Customize your cluster

If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.

If necessary, you can opt out of remote health reporting.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None



1.9. UNINSTALLING A CLUSTER ON AWS

You can remove a cluster that you deployed to Amazon Web Services (AWS)

1.9.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud

Prerequisites

Have a copy of the installation program that you used to deploy the cluster

Have the files that the installation program generated when you created your cluster

Procedure

1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

1   For <installation_directory>, specify the path to the directory that you stored the installation files in.

2   To view different details, specify warn, debug, or error instead of info.

NOTE

You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.

$ ./openshift-install destroy cluster --dir=<installation_directory> --log-level=info 1 2


                                                                                                                                                        • 1843 Other infrastructure components
                                                                                                                                                        • 1844 Required AWS permissions
                                                                                                                                                          • 185 Generating an SSH private key and adding it to the agent
                                                                                                                                                          • 186 Creating the installation files for AWS
                                                                                                                                                            • 1861 Optional Creating a separate var partition
                                                                                                                                                            • 1862 Creating the installation configuration file
                                                                                                                                                            • 1863 Configuring the cluster-wide proxy during installation
                                                                                                                                                            • 1864 Creating the Kubernetes manifest and Ignition config files
                                                                                                                                                              • 187 Extracting the infrastructure name
                                                                                                                                                              • 188 Creating a VPC in AWS
                                                                                                                                                                • 1881 CloudFormation template for the VPC
                                                                                                                                                                  • 189 Creating networking and load balancing components in AWS
                                                                                                                                                                    • 1891 CloudFormation template for the network and load balancers
                                                                                                                                                                      • 1810 Creating security group and roles in AWS
                                                                                                                                                                        • 18101 CloudFormation template for security objects
                                                                                                                                                                          • 1811 RHCOS AMIs for the AWS infrastructure
                                                                                                                                                                          • 1812 Creating the bootstrap node in AWS
                                                                                                                                                                            • 18121 CloudFormation template for the bootstrap machine
                                                                                                                                                                              • 1813 Creating the control plane machines in AWS
                                                                                                                                                                                • 18131 CloudFormation template for control plane machines
                                                                                                                                                                                  • 1814 Creating the worker nodes in AWS
                                                                                                                                                                                    • 18141 CloudFormation template for worker machines
                                                                                                                                                                                      • 1815 Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
                                                                                                                                                                                      • 1816 Logging in to the cluster by using the CLI
                                                                                                                                                                                      • 1817 Approving the certificate signing requests for your machines
                                                                                                                                                                                      • 1818 Initial Operator configuration
                                                                                                                                                                                        • 18181 Image registry storage configuration
                                                                                                                                                                                          • 1819 Deleting the bootstrap resources
                                                                                                                                                                                          • 1820 Creating the Ingress DNS Records
                                                                                                                                                                                          • 1821 Completing an AWS installation on user-provisioned infrastructure
                                                                                                                                                                                          • 1822 Logging in to the cluster by using the web console
                                                                                                                                                                                          • 1823 Additional resources
                                                                                                                                                                                          • 1824 Next steps
                                                                                                                                                                                            • 19 UNINSTALLING A CLUSTER ON AWS
                                                                                                                                                                                              • 191 Removing a cluster that uses installer-provisioned infrastructure
Page 4: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6

Table of Contents

CHAPTER 1. INSTALLING ON AWS

1.1. CONFIGURING AN AWS ACCOUNT
1.1.1. Configuring Route 53
1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53
1.1.2. AWS account limits
1.1.3. Required AWS permissions
1.1.4. Creating an IAM user
1.1.5. Supported AWS regions
1.1.6. Next steps
1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
1.2.1. Prerequisites
1.2.2. Internet and Telemetry access for OpenShift Container Platform
1.2.3. Generating an SSH private key and adding it to the agent
1.2.4. Obtaining the installation program
1.2.5. Creating the installation configuration file
1.2.5.1. Installation configuration parameters
1.2.5.2. Sample customized install-config.yaml file for AWS
1.2.6. Deploying the cluster
1.2.7. Installing the OpenShift CLI by downloading the binary
1.2.7.1. Installing the OpenShift CLI on Linux
1.2.7.2. Installing the OpenShift CLI on Windows
1.2.7.3. Installing the OpenShift CLI on macOS
1.2.8. Logging in to the cluster by using the CLI
1.2.9. Logging in to the cluster by using the web console
1.2.10. Next steps
1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
1.3.1. Prerequisites
1.3.2. Internet and Telemetry access for OpenShift Container Platform
1.3.3. Generating an SSH private key and adding it to the agent
1.3.4. Obtaining the installation program
1.3.5. Creating the installation configuration file
1.3.5.1. Installation configuration parameters
1.3.5.2. Network configuration parameters
1.3.5.3. Sample customized install-config.yaml file for AWS
1.3.6. Modifying advanced network configuration parameters
1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
1.3.8. Cluster Network Operator configuration
1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider
1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
1.3.8.3. Cluster Network Operator example configuration
1.3.9. Configuring hybrid networking with OVN-Kubernetes
1.3.10. Deploying the cluster
1.3.11. Installing the OpenShift CLI by downloading the binary
1.3.11.1. Installing the OpenShift CLI on Linux
1.3.11.2. Installing the OpenShift CLI on Windows
1.3.11.3. Installing the OpenShift CLI on macOS
1.3.12. Logging in to the cluster by using the CLI
1.3.13. Logging in to the cluster by using the web console
1.3.14. Next steps
1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
1.4.1. Prerequisites


1.4.2. About using a custom VPC
1.4.2.1. Requirements for using your VPC
1.4.2.2. VPC validation
1.4.2.3. Division of permissions
1.4.2.4. Isolation between clusters
1.4.3. Internet and Telemetry access for OpenShift Container Platform
1.4.4. Generating an SSH private key and adding it to the agent
1.4.5. Obtaining the installation program
1.4.6. Creating the installation configuration file
1.4.6.1. Installation configuration parameters
1.4.6.2. Sample customized install-config.yaml file for AWS
1.4.6.3. Configuring the cluster-wide proxy during installation
1.4.7. Deploying the cluster
1.4.8. Installing the OpenShift CLI by downloading the binary
1.4.8.1. Installing the OpenShift CLI on Linux
1.4.8.2. Installing the OpenShift CLI on Windows
1.4.8.3. Installing the OpenShift CLI on macOS
1.4.9. Logging in to the cluster by using the CLI
1.4.10. Logging in to the cluster by using the web console
1.4.11. Next steps
1.5. INSTALLING A PRIVATE CLUSTER ON AWS
1.5.1. Prerequisites
1.5.2. Private clusters
1.5.2.1. Private clusters in AWS
1.5.2.1.1. Limitations
1.5.3. About using a custom VPC
1.5.3.1. Requirements for using your VPC
1.5.3.2. VPC validation
1.5.3.3. Division of permissions
1.5.3.4. Isolation between clusters
1.5.4. Internet and Telemetry access for OpenShift Container Platform
1.5.5. Generating an SSH private key and adding it to the agent
1.5.6. Obtaining the installation program
1.5.7. Manually creating the installation configuration file
1.5.7.1. Installation configuration parameters
1.5.7.2. Sample customized install-config.yaml file for AWS
1.5.7.3. Configuring the cluster-wide proxy during installation
1.5.8. Deploying the cluster
1.5.9. Installing the OpenShift CLI by downloading the binary
1.5.9.1. Installing the OpenShift CLI on Linux
1.5.9.2. Installing the OpenShift CLI on Windows
1.5.9.3. Installing the OpenShift CLI on macOS
1.5.10. Logging in to the cluster by using the CLI
1.5.11. Logging in to the cluster by using the web console
1.5.12. Next steps
1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION
1.6.1. Prerequisites
1.6.2. AWS government regions
1.6.3. Private clusters
1.6.3.1. Private clusters in AWS
1.6.3.1.1. Limitations
1.6.4. About using a custom VPC
1.6.4.1. Requirements for using your VPC


1.6.4.2. VPC validation
1.6.4.3. Division of permissions
1.6.4.4. Isolation between clusters
1.6.5. Internet and Telemetry access for OpenShift Container Platform
1.6.6. Generating an SSH private key and adding it to the agent
1.6.7. Obtaining the installation program
1.6.8. Manually creating the installation configuration file
1.6.8.1. Installation configuration parameters
1.6.8.2. Sample customized install-config.yaml file for AWS
1.6.8.3. AWS regions without a published RHCOS AMI
1.6.8.4. Uploading a custom RHCOS AMI in AWS
1.6.8.5. Configuring the cluster-wide proxy during installation
1.6.9. Deploying the cluster
1.6.10. Installing the OpenShift CLI by downloading the binary
1.6.10.1. Installing the OpenShift CLI on Linux
1.6.10.2. Installing the OpenShift CLI on Windows
1.6.10.3. Installing the OpenShift CLI on macOS
1.6.11. Logging in to the cluster by using the CLI
1.6.12. Logging in to the cluster by using the web console
1.6.13. Next steps
1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES
1.7.1. Prerequisites
1.7.2. Internet and Telemetry access for OpenShift Container Platform
1.7.3. Required AWS infrastructure components
1.7.3.1. Cluster machines
1.7.3.2. Certificate signing requests management
1.7.3.3. Other infrastructure components
1.7.3.4. Required AWS permissions
1.7.4. Obtaining the installation program
1.7.5. Generating an SSH private key and adding it to the agent
1.7.6. Creating the installation files for AWS
1.7.6.1. Optional: Creating a separate /var partition
1.7.6.2. Creating the installation configuration file
1.7.6.3. Configuring the cluster-wide proxy during installation
1.7.6.4. Creating the Kubernetes manifest and Ignition config files
1.7.7. Extracting the infrastructure name
1.7.8. Creating a VPC in AWS
1.7.8.1. CloudFormation template for the VPC
1.7.9. Creating networking and load balancing components in AWS
1.7.9.1. CloudFormation template for the network and load balancers
1.7.10. Creating security group and roles in AWS
1.7.10.1. CloudFormation template for security objects
1.7.11. RHCOS AMIs for the AWS infrastructure
1.7.11.1. AWS regions without a published RHCOS AMI
1.7.11.2. Uploading a custom RHCOS AMI in AWS
1.7.12. Creating the bootstrap node in AWS
1.7.12.1. CloudFormation template for the bootstrap machine
1.7.13. Creating the control plane machines in AWS
1.7.13.1. CloudFormation template for control plane machines
1.7.14. Creating the worker nodes in AWS
1.7.14.1. CloudFormation template for worker machines
1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure


1.7.16. Installing the OpenShift CLI by downloading the binary
1.7.16.1. Installing the OpenShift CLI on Linux
1.7.16.2. Installing the OpenShift CLI on Windows
1.7.16.3. Installing the OpenShift CLI on macOS
1.7.17. Logging in to the cluster by using the CLI
1.7.18. Approving the certificate signing requests for your machines
1.7.19. Initial Operator configuration
1.7.19.1. Image registry storage configuration
1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
1.7.19.1.2. Configuring storage for the image registry in non-production clusters
1.7.20. Deleting the bootstrap resources
1.7.21. Creating the Ingress DNS Records
1.7.22. Completing an AWS installation on user-provisioned infrastructure
1.7.23. Logging in to the cluster by using the web console
1.7.24. Additional resources
1.7.25. Next steps
1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT
1.8.1. Prerequisites
1.8.2. About installations in restricted networks
1.8.2.1. Additional limits
1.8.3. Internet and Telemetry access for OpenShift Container Platform
1.8.4. Required AWS infrastructure components
1.8.4.1. Cluster machines
1.8.4.2. Certificate signing requests management
1.8.4.3. Other infrastructure components
1.8.4.4. Required AWS permissions
1.8.5. Generating an SSH private key and adding it to the agent
1.8.6. Creating the installation files for AWS
1.8.6.1. Optional: Creating a separate /var partition
1.8.6.2. Creating the installation configuration file
1.8.6.3. Configuring the cluster-wide proxy during installation
1.8.6.4. Creating the Kubernetes manifest and Ignition config files
1.8.7. Extracting the infrastructure name
1.8.8. Creating a VPC in AWS
1.8.8.1. CloudFormation template for the VPC
1.8.9. Creating networking and load balancing components in AWS
1.8.9.1. CloudFormation template for the network and load balancers
1.8.10. Creating security group and roles in AWS
1.8.10.1. CloudFormation template for security objects
1.8.11. RHCOS AMIs for the AWS infrastructure
1.8.12. Creating the bootstrap node in AWS
1.8.12.1. CloudFormation template for the bootstrap machine
1.8.13. Creating the control plane machines in AWS
1.8.13.1. CloudFormation template for control plane machines
1.8.14. Creating the worker nodes in AWS
1.8.14.1. CloudFormation template for worker machines
1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
1.8.16. Logging in to the cluster by using the CLI
1.8.17. Approving the certificate signing requests for your machines
1.8.18. Initial Operator configuration
1.8.18.1. Image registry storage configuration
1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
1.8.18.1.2. Configuring storage for the image registry in non-production clusters


1.8.19. Deleting the bootstrap resources
1.8.20. Creating the Ingress DNS Records
1.8.21. Completing an AWS installation on user-provisioned infrastructure
1.8.22. Logging in to the cluster by using the web console
1.8.23. Additional resources
1.8.24. Next steps
1.9. UNINSTALLING A CLUSTER ON AWS
1.9.1. Removing a cluster that uses installer-provisioned infrastructure


CHAPTER 1. INSTALLING ON AWS

1.1. CONFIGURING AN AWS ACCOUNT

Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.

1.1.1. Configuring Route 53

To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

1. Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.

NOTE

If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.

2. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.

3. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

4. Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.

5. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.

6. If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high-level procedure.

1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53

If you install in either the Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses the us-gov-west-1 region for Route 53 and tagging API clients.

The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'.
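The endpoint rule described above can be sketched as a small function. This is only an illustrative model of the documented behavior, not the Ingress Operator's actual code:

```python
DEFAULT_TAGGING_ENDPOINT = "https://tagging.us-gov-west-1.amazonaws.com"

def effective_tagging_endpoint(custom_url=None):
    """Model of the documented rule: the tagging client falls back to the
    us-gov-west-1 endpoint when no custom endpoint is configured, and also
    when the configured endpoint names us-gov-east-1, a region that has no
    tagging API endpoint."""
    if custom_url is None or "us-gov-east-1" in custom_url:
        return DEFAULT_TAGGING_ENDPOINT
    return custom_url
```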


For more information on AWS GovCloud (US) endpoints, see the Service Endpoints in the AWS documentation about GovCloud (US).

IMPORTANT

Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region.

Example Route 53 configuration

1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions.

2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region.

1.1.2. AWS account limits

The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.

The following entries summarize the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.

Each entry lists the component, the number of clusters available by default, the default AWS limit, and a description.

platform:
  aws:
    region: us-gov-west-1
    serviceEndpoints:
    - name: ec2
      url: https://ec2.us-gov-west-1.amazonaws.com
    - name: elasticloadbalancing
      url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com
    - name: route53
      url: https://route53.us-gov.amazonaws.com 1
    - name: tagging
      url: https://tagging.us-gov-west-1.amazonaws.com 2


Instance Limits
Number of clusters available by default: Varies
Default AWS limit: Varies
Description: By default, each cluster creates the following instances:

One bootstrap machine, which is removed after installation

Three master nodes

Three worker nodes

These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.

In most regions, the bootstrap and worker machines use m4.large machines and the master machines use m4.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead.
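To use a different instance type, you can set it per machine pool in install-config.yaml. The following is a minimal sketch; the field layout follows the installer's machine-pool schema, and the instance types and replica counts shown here are illustrative values, not recommendations:

```yaml
compute:
- name: worker
  platform:
    aws:
      type: m5.xlarge   # worker instance type (illustrative)
  replicas: 3
controlPlane:
  name: master
  platform:
    aws:
      type: m5.2xlarge  # master instance type (illustrative)
  replicas: 3
```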

Elastic IPs (EIPs)
Number of clusters available by default: 0 to 1
Default AWS limit: 5 EIPs per account
Description: To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT Gateway, and each NAT gateway requires a separate elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit.

IMPORTANT

To use the us-east-1 region, you must increase the EIP limit for your account.
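The EIP arithmetic above reduces to a simple check. This is an illustrative helper, not part of any OpenShift tooling:

```python
def extra_eips_needed(availability_zones, eip_limit=5):
    """The installer creates one NAT gateway per availability zone, and each
    NAT gateway needs one Elastic IP, so a cluster needs as many EIPs as the
    region has availability zones. Returns how far the default account limit
    falls short."""
    return max(0, availability_zones - eip_limit)
```

For a region with three availability zones the default limit of five suffices; a region with six zones needs the limit raised by one.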

Virtual Private Clouds (VPCs)
Number of clusters available by default: 5
Default AWS limit: 5 VPCs per region
Description: Each cluster creates its own VPC.


Elastic Load Balancing (ELB/NLB)
Number of clusters available by default: 3
Default AWS limit: 20 per region
Description: By default, each cluster creates internal and external network load balancers for the master API server and a single classic elastic load balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers.

NAT Gateways
Number of clusters available by default: 5
Default AWS limit: 5 per availability zone
Description: The cluster deploys one NAT gateway in each availability zone.

Elastic Network Interfaces (ENIs)
Number of clusters available by default: At least 12
Default AWS limit: 350 per region
Description: The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that region uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region.

Additional ENIs are created for additional machines and elastic load balancers that are created by cluster usage and deployed workloads.
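The ENI estimate above is just the 21 base interfaces plus one per availability zone; a sketch of that arithmetic (illustrative, not OpenShift tooling):

```python
def estimated_enis(availability_zones):
    """Baseline ENI count for a default installation: 21 base ENIs plus one
    per availability zone, per the description above. Workloads and extra
    machines add more on top of this."""
    return 21 + availability_zones
```

For the six availability zones of us-east-1, this gives the 27 ENIs cited in the text.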

VPC Gateway
Number of clusters available by default: 20
Default AWS limit: 20 per account
Description: Each cluster creates a single VPC Gateway for S3 access.

S3 buckets
Number of clusters available by default: 99
Default AWS limit: 100 buckets per account
Description: Because the installation process creates a temporary bucket, and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account.

Security Groups
Number of clusters available by default: 250
Default AWS limit: 2,500 per account
Description: Each cluster creates 10 distinct security groups.


1.1.3. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.

Example 1.1. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources


ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances
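In practice, actions like those above are granted by attaching an IAM policy document to the installer's IAM user. The following is a truncated, illustrative sketch showing the document shape, not the complete required policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AllocateAddress",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "tag:TagResources"
      ],
      "Resource": "*"
    }
  ]
}
```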

Example 1.2. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.3. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.4. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.5. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.6. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.7. S3 permissions that cluster Operators require

s3:DeleteObject

s3:GetObject


s3:GetObjectAcl

s3:GetObjectTagging

s3:GetObjectVersion

s3:PutObject

s3:PutObjectAcl

s3:PutObjectTagging

Example 1.8. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups

ec2:DeleteNetworkInterface

ec2:DeleteVolume

elasticloadbalancing:DeleteTargetGroup

elasticloadbalancing:DescribeTargetGroups

iam:DeleteAccessKey

iam:DeleteUser

iam:ListAttachedRolePolicies

iam:ListInstanceProfiles

iam:ListRolePolicies

iam:ListUserPolicies

s3:DeleteObject

s3:ListBucketVersions

tag:GetResources

Example 1.9. Required permissions to delete network resources

ec2:DeleteDhcpOptions

ec2:DeleteInternetGateway

ec2:DeleteNatGateway

ec2:DeleteRoute

ec2:DeleteRouteTable

ec2:DeleteSubnet


ec2:DeleteVpc

ec2:DeleteVpcEndpoints

ec2:DetachInternetGateway

ec2:DisassociateRouteTable

ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.10. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey

iam:CreateUser

iam:DeleteAccessKey

iam:DeleteUser

iam:DeleteUserPolicy

iam:GetUserPolicy

iam:ListAccessKeys

iam:PutUserPolicy

iam:TagUser

s3:PutBucketPublicAccessBlock

s3:GetBucketPublicAccessBlock

s3:PutLifecycleConfiguration

s3:HeadBucket

s3:ListBucketMultipartUploads

s3:AbortMultipartUpload

Example 1.11. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas
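If you prefer to scope a dedicated installer user instead of attaching AdministratorAccess, the actions listed in these examples can be collected into an ordinary IAM policy document. The following sketch is illustrative only: the file name and the small action subset are assumptions, not part of the official procedure. It writes a policy file and confirms that it is valid JSON before you would attach it, for example with aws iam put-user-policy.

```shell
# Illustrative sketch only: collect a few of the actions listed above
# into an IAM policy document. File name and action subset are assumed.
cat > installer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DescribeLoadBalancers",
        "iam:PassRole",
        "route53:ChangeResourceRecordSets",
        "s3:CreateBucket",
        "servicequotas:ListAWSDefaultServiceQuotas"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Confirm that the document parses as JSON before attaching it to a user.
python3 -m json.tool installer-policy.json > /dev/null && echo "policy OK"
```

A malformed action name in such a document fails silently at evaluation time, so validating the JSON up front is cheap insurance.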


1.1.4. Creating an IAM user

Each Amazon Web Services (AWS) account contains a root user account that is based on the email address that you used to create the account. This is a highly privileged account, and it is recommended to use it for only initial account and billing configuration, creating an initial set of users, and securing the account.

Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:

Procedure

1. Specify the IAM user name and select Programmatic access.

2. Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

NOTE

While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.

3. Optional: Add metadata to the user by attaching tags.

4. Confirm that the user name that you specified is granted the AdministratorAccess policy.

5. Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.

IMPORTANT

You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
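The installation program reads the standard AWS credential sources, so one way to supply the recorded key pair on your local machine is through environment variables. This is a hedged sketch using AWS's documented placeholder values, not real keys:

```shell
# Sketch: supply the recorded long-lived credentials through the
# standard AWS environment variables (placeholder values from AWS docs).
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# A quick check that both variables are set before running the installer.
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  [ -n "$(printenv "$v")" ] && echo "$v is set"
done
```

A shared credentials file (~/.aws/credentials) works equally well; environment variables are simply the most self-contained option for a one-off installation shell.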

Additional resources

See Manually creating IAM for AWS for steps to set the Cloud Credential Operator (CCO) to manual mode prior to installation. Use this mode in environments where the cloud identity and access management (IAM) APIs are not reachable, or if you prefer not to store an administrator-level credential secret in the cluster kube-system project.

1.1.5. Supported AWS regions

You can deploy an OpenShift Container Platform cluster to the following public regions:


af-south-1 (Cape Town)

ap-east-1 (Hong Kong)

ap-northeast-1 (Tokyo)

ap-northeast-2 (Seoul)

ap-south-1 (Mumbai)

ap-southeast-1 (Singapore)

ap-southeast-2 (Sydney)

ca-central-1 (Central)

eu-central-1 (Frankfurt)

eu-north-1 (Stockholm)

eu-south-1 (Milan)

eu-west-1 (Ireland)

eu-west-2 (London)

eu-west-3 (Paris)

me-south-1 (Bahrain)

sa-east-1 (São Paulo)

us-east-1 (N. Virginia)

us-east-2 (Ohio)

us-west-1 (N. California)

us-west-2 (Oregon)

The following AWS GovCloud regions are supported:

us-gov-west-1

us-gov-east-1

1.1.6. Next steps

Install an OpenShift Container Platform cluster:

Quickly install a cluster with default options on installer-provisioned infrastructure

Install a cluster with cloud customizations on installer-provisioned infrastructure

Install a cluster with network customizations on installer-provisioned infrastructure

Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates


1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.2.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.2.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.2.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
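The full local key flow described in the procedure that follows (generate a password-less key, then hand the public half to the installer) can be rehearsed safely with a throwaway key. A sketch; the ./installer_key path is an arbitrary choice for illustration:

```shell
# Sketch: create a password-less ed25519 key pair in the current
# directory and print the public key that you would later provide to
# the installation program. The ./installer_key path is illustrative.
ssh-keygen -t ed25519 -N '' -f ./installer_key -q
cat ./installer_key.pub
```

The printed ssh-ed25519 line is the value you would supply at the installer's SSH key prompt or in the sshKey field of install-config.yaml.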

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.2.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.


4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
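Because the pull secret is a JSON document, a botched copy-and-paste is easy to catch before you run the installer. This is a hedged sketch with dummy contents; the real secret comes only from the Pull Secret page:

```shell
# Sketch: validate that the downloaded pull secret parses as JSON.
# The contents below are dummy placeholders, not a real secret.
cat > pull-secret.txt <<'EOF'
{"auths":{"quay.io":{"auth":"b3Blb=","email":"you@example.com"}}}
EOF

python3 -m json.tool pull-secret.txt > /dev/null && echo "pull secret is valid JSON"
```

If the check fails, re-download the secret rather than hand-editing it; a truncated auth token fails only later, at image pull time.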

1.2.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

$ tar xvf openshift-install-linux.tar.gz

$ openshift-install create install-config --dir=<installation_directory>


iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.2.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
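Because misspelled field names are silently ignored, a simple pre-flight check of the top-level keys you expect can save a confusing failure later. This check is an editorial suggestion, not part of the official procedure; the file here is a tiny stand-in created for the demonstration:

```shell
# Sketch: verify that the expected top-level keys are present in an
# install-config.yaml file (a small stand-in file is created here).
cat > demo-install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
EOF

for key in apiVersion baseDomain metadata; do
  grep -q "^${key}:" demo-install-config.yaml && echo "found ${key}"
done
```

A key that prints nothing is either missing or misspelled, which is exactly the class of error openshift-install will not report.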

Table 1.1. Required parameters

apiVersion
The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
Values: String

baseDomain
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Values: Object

metadata.name
The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
Values: Object

pullSecret
Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
Values:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}
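As the baseDomain description above notes, the cluster's full DNS name is the metadata.name and baseDomain values joined with a period. A one-line illustration with sample values:

```shell
# Worked example: the cluster's full DNS name is <metadata.name>.<baseDomain>
# (sample values shown).
metadata_name="test-cluster"
base_domain="example.com"
echo "${metadata_name}.${base_domain}"   # → test-cluster.example.com
```

All cluster DNS records, such as the API and wildcard application routes, are created as subdomains of this name.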

Table 1.2. Optional parameters

additionalTrustBundle
A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
Values: String

compute
The configuration for the machines that comprise the compute nodes.
Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

compute.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

compute.name
Required if you use compute. The name of the machine pool.
Values: worker

compute.platform
Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
The number of compute machines, which are also known as worker machines, to provision.
Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
The configuration for the machines that comprise the control plane.
Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

controlPlane.hyperthreading
Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

controlPlane.name
Required if you use controlPlane. The name of the machine pool.
Values: master

controlPlane.platform
Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
The number of control plane machines to provision.
Values: The only supported value is 3, which is the default value.

credentialsMode
The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
Values: Mint, Passthrough, Manual, or an empty string ("").

fips
Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Values: false or true

imageContentSources
Sources and repositories for the release-image content.
Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
Values: String

imageContentSources.mirrors
Specify one or more repositories that may also contain the same images.
Values: Array of strings

networking
The configuration for the pod network provider in the cluster.
Values: Object

networking.clusterNetwork
The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
Values: Array of objects

networking.clusterNetwork.cidr
Required if you use networking.clusterNetwork. The IP block address pool.
Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
Values: Integer

networking.machineNetwork
The IP address pools for machines.
Values: Array of objects

networking.machineNetwork.cidr
Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation, as described for networking.clusterNetwork.cidr.

networking.networkType
The type of network to install. The default is OpenShiftSDN.
Values: String

networking.serviceNetwork
The IP address pools for services. The default is 172.30.0.0/16.
Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation, as described for networking.clusterNetwork.cidr.

publish
How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
The SSH key or keys to authenticate access to your cluster machines.
NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Values: One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
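The hostPrefix arithmetic in the table above is easy to verify: a prefix of 24 leaves 32 - 24 = 8 host bits, so each node is allocated 2^8 addresses:

```shell
# Worked example for networking.clusterNetwork.hostPrefix: a prefix of
# 24 leaves 8 host bits, so each node receives 2^8 = 256 addresses.
host_prefix=24
echo $(( 1 << (32 - host_prefix) ))   # → 256
```

The same arithmetic shows why the default /23 host prefix doubles the per-node pool to 512 addresses.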

Table 1.3. Optional AWS parameters

compute.platform.aws.amiID
The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.
Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops
The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.
Values: Integer, for example 4000.

compute.platform.aws.rootVolume.size
The size in GiB of the root volume.
Values: Integer, for example 500.

compute.platform.aws.rootVolume.type
The instance type of the root volume.
Values: Valid AWS EBS instance type, such as io1.

compute.platform.aws.type
The EC2 instance type for the compute machines.
Values: Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones
The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.
Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
The AWS region that the installation program creates compute resources in.
Values: Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID
The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.
Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type
The EC2 instance type for the control plane machines.
Values: Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
The availability zones where the installation program creates machines for the control plane machine pool.
Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
The AWS region that the installation program creates control plane resources in.
Values: Valid AWS region, such as us-east-1.

platform.aws.amiID
The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.
Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name
The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.
Values: Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url
The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.
Values: Valid AWS service endpoint URL.

platform.aws.userTags
A map of keys and values that the installation program adds as tags to all resources that it creates.
Values: Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets
If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.
Values: Valid subnet IDs.

1.2.5.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13


1 10 11 14: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16: You can optionally provide the sshKey value that you use to access the machines in your cluster.


    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16


NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.2.6. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

1: For <installation_directory>, specify the location of your customized install-config.yaml file.

2: To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-

CHAPTER 1. INSTALLING ON AWS


NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.2.7. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.2.7.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.


3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive.

5. Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command.
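As a concrete sketch of step 5, the following commands create a per-user bin directory and put it on your PATH. The directory name $HOME/bin is an assumption, not a requirement of the product; any directory on your PATH works, and the oc binary itself comes from the downloaded archive.

```shell
# Sketch: make a per-user bin directory and add it to PATH.
# In a real install, you would also run: mv oc "$HOME/bin/"
mkdir -p "$HOME/bin"
export PATH="$HOME/bin:$PATH"   # add this line to ~/.bashrc to persist it

# Confirm the directory is now listed on PATH
echo "$PATH" | tr ':' '\n' | grep -x "$HOME/bin"
```

After moving oc into that directory, oc resolves without a full path.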

1.2.7.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command.

1.2.7.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command.

1.2.8. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.2.9. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

$ oc whoami

system:admin


Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.2.10. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.

You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


1.3.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.3.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources


See About remote health monitoring for more information about the Telemetry service.

1.3.3. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

$ eval $(ssh-agent -s)

Agent pid 31874

$ ssh-add <path>/<file_name>

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
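The whole procedure above can be exercised end to end with a throwaway key. This sketch uses a temporary directory so it does not touch your real ~/.ssh; the file name id_ed25519 is just an example, and it assumes the standard OpenSSH client tools are installed.

```shell
# Sketch: generate a password-less ed25519 key in a temp dir and load it into ssh-agent
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$keydir/id_ed25519" -q

# Start an agent for this shell and add the new key
eval "$(ssh-agent -s)" > /dev/null
ssh-add "$keydir/id_ed25519"

# List loaded identities; the new key appears with an ED25519 fingerprint
ssh-add -l
```

For a real cluster you would point ssh-keygen at a permanent path and reuse that key at install time.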


Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.3.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.3.5. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

$ tar xvf openshift-install-linux.tar.gz


Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

$ openshift-install create install-config --dir=<installation_directory>


3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.3.5.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.4. Required parameters

Parameter Description Values

apiVersion The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String

baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.


metadata Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.

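The <metadata.name>.<baseDomain> composition described for baseDomain and metadata.name above can be illustrated directly. The values test-cluster and example.com are sample values only, and the api. prefix shown for the API endpoint follows the cluster DNS layout described in this document:

```shell
# Sketch: how the cluster's full DNS name is composed from two parameters
name=test-cluster        # metadata.name
base=example.com         # baseDomain
echo "Cluster DNS name: ${name}.${base}"
echo "API endpoint:     api.${name}.${base}"
```

All cluster DNS records are subdomains of that composed name.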

Table 1.5. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"you@example.com"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"you@example.com"
    }
  }
}


compute The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following Machine-pool table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name Required if you use compute. The name of the machine pool.

worker

compute.platform Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.

controlPlane The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.


controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.


credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects

Array of objects


networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines.

Array of objects

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.


publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey: <key1> <key2> <key3>


Table 1.6. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.


platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets

If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

Valid subnet IDs.


1.3.5.2. Network configuration parameters

You can modify your cluster network configuration parameters in the install-config.yaml configuration file. The following table describes the parameters.

NOTE

You cannot modify these parameters in the install-config.yaml file after installation.

Table 1.7. Required network parameters

Parameter Description Value

networking.networkType

The default Container Network Interface (CNI) network provider plug-in to deploy.

Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN.


networking.clusterNetwork[].cidr

A block of IP addresses from which pod IP addresses are allocated. The OpenShiftSDN network plug-in supports multiple cluster networks. The address blocks for multiple cluster networks must not overlap. Select address pools large enough to fit your anticipated workload.

An IP address allocation in CIDR format. The default value is 10.128.0.0/14.

networking.clusterNetwork[].hostPrefix

The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, allowing for 510 (2^(32 - 23) - 2) pod IP addresses.

A subnet prefix. The default value is 23.

networking.serviceNetwork[]

A block of IP addresses for services. OpenShiftSDN allows only one serviceNetwork block. The address block must not overlap with any other network block.

An IP address allocation in CIDR format. The default value is 172.30.0.0/16.

networking.machineNetwork[].cidr

A block of IP addresses assigned to nodes created by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. Multiple CIDR ranges may be specified.

An IP address allocation in CIDR format. The default value is 10.0.0.0/16.

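The address arithmetic behind hostPrefix and cidr can be checked with plain shell arithmetic. The values used here, a hostPrefix of 23 and the /14 clusterNetwork, are the defaults from the table above; the node-subnet count is an upper bound on how many nodes the pod network can address:

```shell
# Pod IPs available per node with hostPrefix=23: 2^(32-23), minus network and broadcast addresses
pods_per_node=$(( (1 << (32 - 23)) - 2 ))
echo "pod IPs per node: $pods_per_node"   # 510

# Number of /23 node subnets that fit in the default /14 clusterNetwork: 2^(23-14)
node_subnets=$(( 1 << (23 - 14) ))
echo "max node subnets: $node_subnets"    # 512
```

Shrinking hostPrefix (for example to 24) gives fewer pod IPs per node but room for more node subnets in the same clusterNetwork block.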

1.3.5.3. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 12
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17

1 10 12 15: Required. The installation program prompts you for this value.

2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 11: If you do not provide these parameters and values, the installation program provides the default value.

4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17: You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
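Callout 4 above distinguishes the data shapes of the two machine-pool sections. As a quick illustration, using plain Python data that mirrors an abbreviated subset of the YAML (this is not the installer's own validation code), the distinction can be checked directly:

```python
# Illustrative sketch only: controlPlane is a single mapping, while
# compute is a sequence (list) of mappings, as described in callout 4.
control_plane = {"hyperthreading": "Enabled", "name": "master", "replicas": 3}
compute = [{"hyperthreading": "Enabled", "name": "worker", "replicas": 3}]

def pool_shapes_ok(control_plane, compute):
    # controlPlane: one mapping; compute: a list of mappings
    return isinstance(control_plane, dict) and isinstance(compute, list) \
        and all(isinstance(pool, dict) for pool in compute)

print(pool_shapes_ok(control_plane, compute))  # True
```

Swapping the shapes, for example passing a list where the single mapping is expected, makes the check fail, which is the same structural error the installer reports for a malformed install-config.yaml.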

1.3.6. Modifying advanced network configuration parameters

You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings, and by specifying a different mode for the openshiftSDNConfig parameter.

IMPORTANT

Modifying the OpenShift Container Platform manifest files directly is not supported.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

1. Change to the directory that contains the installation program and create the manifests:

$ openshift-install create manifests --dir=<installation_directory> 1

1: For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1: For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

$ ls <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

3. Open the cluster-network-03-config.yml file in an editor and enter a CR that describes the Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec: 1
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789

1: The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the CR.

The CNO provides default values for the parameters in the CR, so you must specify only the parameters that you want to change.

4. Save the cluster-network-03-config.yml file and quit the text editor.

5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.

1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster

You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.

Prerequisites

Create the install-config.yaml file and complete any modifications to it.

Procedure

Create an Ingress Controller backed by an AWS NLB on a new cluster:

1. Change to the directory that contains the installation program and create the manifests:

$ openshift-install create manifests --dir=<installation_directory> 1

1: For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1

1: For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

Example output

cluster-ingress-default-ingresscontroller.yaml

3. Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a CR that describes the Operator configuration you want:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService

4. Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.

5. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.

1.3.8. Cluster Network Operator configuration

The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a CR object that is named cluster. The CR specifies the parameters for the Network API in the operator.openshift.io API group.

You can specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameter values for the defaultNetwork parameter in the CNO CR. The following CR displays the default configuration for the CNO and explains both the parameters you can configure and the valid parameter values:

Cluster Network Operator CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork: 1
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork: 2
  - 172.30.0.0/16
  defaultNetwork: 3
  kubeProxyConfig: 4
    iptablesSyncPeriod: 30s 5
    proxyArguments:
      iptables-min-sync-period: 6
      - 0s

1 2: Specified in the install-config.yaml file.

3: Configures the default Container Network Interface (CNI) network provider for the cluster network.

4: The parameters for this object specify the kube-proxy configuration. If you do not specify the parameter values, the Cluster Network Operator applies the displayed default parameter values.

5: The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation.

NOTE

Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary.

6: The minimum duration before refreshing iptables rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package.
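The clusterNetwork values in the CR above also determine cluster sizing: with hostPrefix: 23, each node is allocated a /23 block out of the /14 cluster network. A small sketch of that arithmetic (illustration only; the CNO performs the actual allocation internally):

```python
import ipaddress

# clusterNetwork cidr and hostPrefix from the CR above
cluster_cidr = ipaddress.ip_network("10.128.0.0/14")
host_prefix = 23

# Each node receives a /23, i.e. 2^(32-23) = 512 addresses for pods
addresses_per_node = 2 ** (32 - host_prefix)

# A /14 network splits into 2^(23-14) = 512 distinct /23 node subnets
max_node_subnets = 2 ** (host_prefix - cluster_cidr.prefixlen)

print(addresses_per_node, max_node_subnets)  # 512 512
```

So the default values support up to 512 nodes with roughly 510 usable pod addresses each; widening the clusterNetwork CIDR or the hostPrefix shifts that trade-off.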

1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider

The following YAML object describes the configuration parameters for the OpenShift SDN default Container Network Interface (CNI) network provider:

defaultNetwork:
  type: OpenShiftSDN 1
  openshiftSDNConfig: 2
    mode: NetworkPolicy 3
    mtu: 1450 4
    vxlanPort: 4789 5

1: Specified in the install-config.yaml file.

2: Specify only if you want to override part of the OpenShift SDN configuration.

3: Configures the network isolation mode for OpenShift SDN. The allowed values are Multitenant, Subnet, or NetworkPolicy. The default value is NetworkPolicy.

4: The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450.

5: The port to use for all VXLAN packets. The default value is 4789. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.

On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999.
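The two numeric rules in callouts 4 and 5 above can be checked mechanically. The helper names below are illustrative, not an OpenShift API; the sketch only encodes the arithmetic stated in the text:

```python
def vxlan_overlay_mtu(node_mtus):
    # VXLAN overhead rule: the overlay MTU must be 50 less than the
    # lowest node MTU in the cluster
    return min(node_mtus) - 50

def aws_vxlan_port_ok(port):
    # 4789 is the default; on AWS, an alternate port must fall in 9000-9999
    return port == 4789 or 9000 <= port <= 9999

print(vxlan_overlay_mtu([9001, 1500]))  # 1450, matching the example above
print(aws_vxlan_port_ok(9500), aws_vxlan_port_ok(8000))  # True False
```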

1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider

The following YAML object describes the configuration parameters for the OVN-Kubernetes default CNI network provider:

defaultNetwork:
  type: OVNKubernetes 1
  ovnKubernetesConfig: 2
    mtu: 1400 3
    genevePort: 6081 4

1: Specified in the install-config.yaml file.

2: Specify only if you want to override part of the OVN-Kubernetes configuration.

3: The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.

If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.

If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400.

4: The UDP port for the Geneve overlay network.

1.3.8.3. Cluster Network Operator example configuration

A complete CR object for the CNO is displayed in the following example:

Cluster Network Operator example CR

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
      mtu: 1450
      vxlanPort: 4789
  kubeProxyConfig:
    iptablesSyncPeriod: 30s
    proxyArguments:
      iptables-min-sync-period:
      - 0s

1.3.9. Configuring hybrid networking with OVN-Kubernetes

You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.

IMPORTANT

You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.

Prerequisites

You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.

Procedure

1. Create the manifests from the directory that contains the installation program:

$ openshift-install create manifests --dir=<installation_directory> 1

1: For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

2. Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1

1: For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.

After creating the file, several network configuration files are in the manifests directory, as shown:

$ ls -1 <installation_directory>/manifests/cluster-network-*

Example output

cluster-network-01-crd.yml
cluster-network-02-config.yml
cluster-network-03-config.yml

3. Open the cluster-network-03-config.yml file and configure OVN-Kubernetes with hybrid networking. For example:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec: 1
  clusterNetwork: 2
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  externalIP:
    policy: {}
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes 3
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 4
        - cidr: 10.132.0.0/14
          hostPrefix: 23
status: {}

1: The parameters for the spec parameter are only an example. Specify your configuration for the Cluster Network Operator in the custom resource.

2: Specify the CIDR configuration used when adding nodes.

3: Specify OVNKubernetes as the Container Network Interface (CNI) cluster network provider.

4: Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.

4. Optional: Back up the <installation_directory>/manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.

NOTE

For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.

1.3.10. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.


Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1: For <installation_directory>, specify the location of your customized install-config.yaml file.

2: To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: kubeadmin, and password: 4vYBz-Ee6gm-ymBZj-Wt5AL
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.3.11. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.3.11.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

$ tar xvzf <file>

5. Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.3.11.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the CLI, it is available using the oc command:

C:\> oc <command>

1.3.11.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the CLI, it is available using the oc command:

$ oc <command>

1.3.12. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1: For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.3.13. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep console-openshift

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.3.14. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC

In OpenShift Container Platform version 4.6, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.4.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.


IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.4.2. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.4.2.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.


The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
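The first characteristic above, that the VPC's CIDR block must contain the machine network range, can be sketched with Python's stdlib ipaddress module. The CIDRs below are hypothetical examples, not values the installer requires:

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")      # existing VPC block (example)
machine_cidr = ipaddress.ip_network("10.0.0.0/18")  # machine network range (example)

# The VPC CIDR block must contain the IP address pool for cluster machines
print(machine_cidr.subnet_of(vpc_cidr))  # True
```

A machine CIDR outside the VPC block, such as 192.168.0.0/18 here, fails the same containment check, which is the condition the installer validates before provisioning machines.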

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
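The endpoint names above follow a fixed <service>.<region>.amazonaws.com pattern, so they can be generated for any region. The helper below is a hypothetical convenience, not an AWS API:

```python
def vpc_endpoint_names(region):
    # Services that need VPC endpoints in a disconnected environment
    return [f"{service}.{region}.amazonaws.com"
            for service in ("ec2", "elasticloadbalancing", "s3")]

print(vpc_endpoint_names("us-west-2")[0])  # ec2.us-west-2.amazonaws.com
```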

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
- 80: Inbound HTTP traffic
- 443: Inbound HTTPS traffic
- 22: Inbound SSH traffic
- 1024 - 65535: Inbound ephemeral traffic
- 0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.4.2.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:


All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
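Several of the checks above are per-availability-zone counting rules. A sketch of the "no more than one public and one private subnet per zone" check, over hypothetical subnet records (the installer derives the real records from AWS):

```python
from collections import Counter

# Hypothetical subnet records for a two-zone public cluster
subnets = [
    {"az": "us-west-2a", "public": True},
    {"az": "us-west-2a", "public": False},
    {"az": "us-west-2b", "public": True},
    {"az": "us-west-2b", "public": False},
]

# Each (zone, visibility) pair may occur at most once
counts = Counter((s["az"], s["public"]) for s in subnets)
print(all(n <= 1 for n in counts.values()))  # True
```

Adding a second public subnet in the same zone makes the check fail, mirroring the validation error the installation program reports.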

1.4.2.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.4.2.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.4.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.4.4. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.
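As an optional sanity check that is not part of the official procedure, you can generate a throwaway key pair in a temporary directory and confirm that ssh-keygen reports the expected ED25519 key type; the paths below are illustrative:

```shell
# Optional check (not in the official procedure): create a throwaway
# ed25519 key pair in a temporary directory and list its fingerprint,
# which should report the ED25519 key type.
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$tmpdir/id_ed25519" -q
ssh-keygen -l -f "$tmpdir/id_ed25519.pub"
rm -rf "$tmpdir"
```

The `-l` option prints the key length, fingerprint, comment, and key type without exposing the private key.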

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.4.5. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.4.6. Creating the installation configuration file

You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following command:

$ openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
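One simple way to satisfy the empty-directory requirement is to create a fresh directory per cluster before running the installer; the directory name below is only an example, not mandated by the installer:

```shell
# Create a fresh, empty installation directory for this cluster.
# The directory name is an example; any empty directory works.
install_dir=./ocp-cluster-01
mkdir -p "$install_dir"
[ -z "$(ls -A "$install_dir")" ] && echo "directory is empty"
```

Keeping one directory per cluster also keeps the generated assets, which you later need to delete the cluster, from mixing between installations.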

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select AWS as the platform to target.

iii. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.4.6.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.8. Required parameters

apiVersion
  The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
  Values: String

baseDomain
  The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
  Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
  Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
  Values: Object

metadata.name
  The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.
  Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
  The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
  Values: Object

pullSecret
  Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
  Values:
  {
     "auths":{
        "cloud.openshift.com":{
           "auth":"b3Blb=",
           "email":"you@example.com"
        },
        "quay.io":{
           "auth":"b3Blb=",
           "email":"you@example.com"
        }
     }
  }

Table 1.9. Optional parameters

additionalTrustBundle
  A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
  Values: String

compute
  The configuration for the machines that comprise the compute nodes.
  Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
  Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

compute.hyperthreading
  Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
  IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
  Values: Enabled or Disabled

compute.name
  Required if you use compute. The name of the machine pool.
  Values: worker

compute.platform
  Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
  The number of compute machines, which are also known as worker machines, to provision.
  Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
  The configuration for the machines that comprise the control plane.
  Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
  Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

controlPlane.hyperthreading
  Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
  IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
  Values: Enabled or Disabled

controlPlane.name
  Required if you use controlPlane. The name of the machine pool.
  Values: master

controlPlane.platform
  Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
  The number of control plane machines to provision.
  Values: The only supported value is 3, which is the default value.

credentialsMode
  The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.
  NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
  Values: Mint, Passthrough, Manual, or an empty string ("").

fips
  Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
  Values: false or true

imageContentSources
  Sources and repositories for the release-image content.
  Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
  Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
  Values: String

imageContentSources.mirrors
  Specify one or more repositories that may also contain the same images.
  Values: Array of strings

networking
  The configuration for the pod network provider in the cluster.
  Values: Object

networking.clusterNetwork
  The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
  Values: Array of objects

networking.clusterNetwork.cidr
  Required if you use networking.clusterNetwork. The IP block address pool.
  Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
  Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
  Values: Integer

networking.machineNetwork
  The IP address pools for machines.
  Values: Array of objects

networking.machineNetwork.cidr
  Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
  Values: IP network, in CIDR notation as described above. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
  The type of network to install. The default is OpenShiftSDN.
  Values: String

networking.serviceNetwork
  The IP address pools for services. The default is 172.30.0.0/16.
  Values: Array of IP networks, in CIDR notation as described above.

publish
  How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
  Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
  The SSH key or keys to authenticate access to your cluster machines.
  NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
  Values: One or more keys. For example:
  sshKey:
    <key1>
    <key2>
    <key3>
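The hostPrefix sizing described above is plain CIDR arithmetic: each node receives 2^(32 - hostPrefix) pod addresses. A quick shell check:

```shell
# Addresses allocated per node for a given hostPrefix: 2^(32 - hostPrefix).
for hostprefix in 23 24; do
  echo "hostPrefix ${hostprefix}: $(( 2 ** (32 - hostprefix) )) addresses per node"
done
# The default hostPrefix of 23 yields 512 addresses per node;
# a hostPrefix of 24 yields the 2^8=256 addresses noted in the table.
```

This is why a smaller clusterNetwork CIDR or a larger hostPrefix limits how many nodes the pod network can accommodate.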

Table 1.10. Optional AWS parameters

compute.platform.aws.amiID
  The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.
  Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops
  The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.
  Values: Integer, for example 4000.

compute.platform.aws.rootVolume.size
  The size in GiB of the root volume.
  Values: Integer, for example 500.

compute.platform.aws.rootVolume.type
  The type of the root volume.
  Values: Valid AWS EBS volume type, such as io1.

compute.platform.aws.type
  The EC2 instance type for the compute machines.
  Values: Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones
  The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.
  Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
  The AWS region that the installation program creates compute resources in.
  Values: Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID
  The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.
  Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type
  The EC2 instance type for the control plane machines.
  Values: Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
  The availability zones where the installation program creates machines for the control plane machine pool.
  Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
  The AWS region that the installation program creates control plane resources in.
  Values: Valid AWS region, such as us-east-1.

platform.aws.amiID
  The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.
  Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name
  The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.
  Values: Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url
  The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.
  Values: Valid AWS service endpoint URL.

platform.aws.userTags
  A map of keys and values that the installation program adds as tags to all resources that it creates.
  Values: Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets
  If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.
  Values: Valid subnet IDs.

1.4.6.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17

1 10 11 15 Required. The installation program prompts you for this value.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

4 If you do not provide these parameters and values, the installation program provides the default value.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

1.4.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites


You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
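The subdomain-matching rule for noProxy entries, where .y.com matches x.y.com but not y.com, can be sketched with shell pattern matching. This is only an approximation of the matching semantics for illustration, not the cluster's actual implementation:

```shell
# Approximate the noProxy subdomain rule: an entry such as ".y.com"
# matches hosts under y.com but not y.com itself, because only
# hostnames that end in ".y.com" qualify.
matches_noproxy() {
  host=$1; entry=$2
  case $host in
    *"$entry") return 0 ;;   # x.y.com ends with .y.com -> bypass the proxy
    *)         return 1 ;;   # y.com does not end with .y.com -> proxied
  esac
}
matches_noproxy x.y.com .y.com && echo "x.y.com: bypassed"
matches_noproxy y.com  .y.com || echo "y.com: proxied"
```

To exclude the bare domain as well, you would list both y.com and .y.com in noProxy.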

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.4.7. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.4.8. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1481 Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type

3 In the Command-line interface section select Linux from the drop-down menu and clickDownload command-line tools

4 Unpack the archive

5 Place the oc binary in a directory that is on your PATHTo check your PATH execute the following command

After you install the CLI it is available using the oc command

1482 Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type

3 In the Command-line interface section select Windows from the drop-down menu and clickDownload command-line tools

4 Unzip the archive with a ZIP program

5 Move the oc binary to a directory that is on your PATHTo check your PATH open the command prompt and execute the following command

$ tar xvzf ltfilegt

$ echo $PATH

$ oc ltcommandgt

Cgt path

OpenShift Container Platform 46 Installing on AWS

90

1

After you install the CLI it is available using the oc command

1483 Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure

Procedure

1 Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site

2 Select your infrastructure provider and if applicable your installation type

3 In the Command-line interface section select MacOS from the drop-down menu and clickDownload command-line tools

4 Unpack and unzip the archive

5 Move the oc binary to a directory on your PATHTo check your PATH open a terminal and execute the following command

After you install the CLI it is available using the oc command

149 Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to thecorrect cluster and API server The file is specific to a cluster and is created during OpenShift ContainerPlatform installation

Prerequisites

You deployed an OpenShift Container Platform cluster

You installed the oc CLI

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

C:\> oc <command>

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

CHAPTER 1. INSTALLING ON AWS


Example output

1.4.10. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host

You completed a cluster installation and all cluster Operators are available

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

$ oc whoami

system:admin

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep 'console-openshift'

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


1.4.11. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

1.5. INSTALLING A PRIVATE CLUSTER ON AWS

In OpenShift Container Platform version 4.6, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

1.5.1. Prerequisites

Review details about the OpenShift Container Platform installation and update processes

Configure an AWS account to host the cluster

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.5.2. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision, the hosts on the network that you provision, and the internet to obtain installation media. You can use any machine that meets these access requirements and follows your


company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.5.2.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster, and all cluster machines are placed in the private subnets that you specify.

1.5.2.1.1. Limitations

The ability to add public functionality to a private cluster is limited:

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
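As an illustration of the tag format, the following sketch builds the tag key from a hypothetical infrastructure ID and echoes the aws CLI invocation for a few hypothetical subnet IDs; remove the leading echo to apply the tags for real:

```shell
# Sketch with hypothetical values: the tag that AWS expects on each public
# subnet before it will create public load balancers for the cluster.
INFRA_ID="mycluster-x7k2p"                   # hypothetical cluster infra ID
TAG_KEY="kubernetes.io/cluster/${INFRA_ID}"

# Hypothetical subnet IDs, one public subnet per availability zone.
for subnet in subnet-0aaa subnet-0bbb subnet-0ccc; do
  # Echo the aws CLI command rather than running it, so it can be reviewed.
  echo aws ec2 create-tags --resources "$subnet" --tags "Key=${TAG_KEY},Value=shared"
done
```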

1.5.3. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.

1.5.3.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

OpenShift Container Platform 46 Installing on AWS

94

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
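The CIDR containment requirement can be checked mechanically. The following is a minimal sketch with hypothetical CIDR values, not a supported tool; it compares the masked network addresses as integers:

```shell
# Sketch with assumed values: check that the machine CIDR sits inside the
# VPC CIDR by comparing masked network addresses as integers.
vpc_cidr="10.0.0.0/14"        # hypothetical VPC CIDR block
machine_cidr="10.0.0.0/16"    # hypothetical Networking.MachineCIDR

ip_to_int() {
  local IFS='.'
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

net="${vpc_cidr%/*}";      vpc_len="${vpc_cidr#*/}"
sub="${machine_cidr%/*}";  sub_len="${machine_cidr#*/}"
mask=$(( (0xFFFFFFFF << (32 - vpc_len)) & 0xFFFFFFFF ))

# Contained if the machine prefix is at least as long as the VPC's and both
# share the same network bits under the VPC's mask.
if [ "$sub_len" -ge "$vpc_len" ] &&
   [ $(( $(ip_to_int "$sub") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]; then
  echo "machine CIDR fits inside the VPC CIDR"
else
  echo "machine CIDR is NOT inside the VPC CIDR"
fi
```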

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
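The three names above follow one pattern; a small sketch, assuming a hypothetical region, derives them all:

```shell
# Derive the endpoint host names for a hypothetical region.
REGION="us-east-1"
for svc in ec2 elasticloadbalancing s3; do
  echo "${svc}.${REGION}.amazonaws.com"
done
```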

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component

AWS type Description

VPC

AWS::EC2::VPC

AWS::EC2::VPCEndpoint

You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.


Public subnets

AWS::EC2::Subnet

AWS::EC2::SubnetNetworkAclAssociation

Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway

AWS::EC2::InternetGateway

AWS::EC2::VPCGatewayAttachment

AWS::EC2::RouteTable

AWS::EC2::Route

AWS::EC2::SubnetRouteTableAssociation

AWS::EC2::NatGateway

AWS::EC2::EIP

You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

Network access control

AWS::EC2::NetworkAcl

AWS::EC2::NetworkAclEntry

You must allow the VPC to access the following ports:

Port 80: Inbound HTTP traffic

Port 443: Inbound HTTPS traffic

Port 22: Inbound SSH traffic

Ports 1024 - 65535: Inbound ephemeral traffic

Ports 0 - 65535: Outbound ephemeral traffic

Private subnets

AWS::EC2::Subnet

AWS::EC2::RouteTable

AWS::EC2::SubnetRouteTableAssociation

Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.
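For orientation only, a CloudFormation fragment in the spirit of the components listed above might declare a public subnet with its NAT gateway as follows; the logical resource names and the CIDR are hypothetical and are not taken from the installer's actual templates:

```yaml
# Hypothetical fragment; resource names and CIDR are illustrative only.
Resources:
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC              # assumes a VPC resource named "VPC"
      CidrBlock: 10.0.0.0/20
      MapPublicIpOnLaunch: true
  NatEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEIP.AllocationId
      SubnetId: !Ref PublicSubnet
```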



1.5.3.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist

You provide private subnets

The subnet CIDRs belong to the machine CIDR that you specified

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.5.3.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.5.3.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:

You can install multiple OpenShift Container Platform clusters in the same VPC

ICMP ingress is allowed from the entire network

TCP 22 ingress (SSH) is allowed to the entire network

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network

Control plane TCP 22623 ingress (MCS) is allowed to the entire network

1.5.4. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the internet to install your cluster. The


Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service

1.5.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure



1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.5.6. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.


$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval "$(ssh-agent -s)"

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
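As a quick sanity check (a sketch, with an inline stand-in for the real secret), you can confirm that the downloaded file is valid JSON before embedding it in a configuration file:

```shell
# Write a stand-in pull secret; in practice this is the downloaded .txt file.
cat > /tmp/pull-secret.txt <<'EOF'
{"auths":{"quay.io":{"auth":"b3Blb=","email":"you@example.com"}}}
EOF

# json.tool exits non-zero on malformed JSON, so this catches copy/paste damage.
python3 -m json.tool /tmp/pull-secret.txt > /dev/null && echo "pull secret is valid JSON"
```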

1.5.7. Manually creating the installation configuration file

For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

$ tar xvf openshift-install-linux.tar.gz

$ mkdir ltinstallation_directorygt


2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

1.5.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.11. Required parameters

Parameter Description Values

apiVersion

The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

String


baseDomain

The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

A fully-qualified domain or subdomain name, such as example.com.

metadata

Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

Object

metadata.name

The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.

String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform

The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.

Object

pullSecret

Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.


{
   "auths":{
     "cloud.openshift.com":{
       "auth":"b3Blb=",
       "email":"you@example.com"
     },
     "quay.io":{
       "auth":"b3Blb=",
       "email":"you@example.com"
     }
   }
}


Table 1.12. Optional parameters

Parameter Description Values

additionalTrustBundle

A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.

String

compute

The configuration for the machines that comprise the compute nodes.

Array of machine-pool objects. For details, see the following Machine-pool table.

compute.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

compute.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

compute.name

Required if you use compute. The name of the machine pool.

worker

compute.platform

Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas

The number of compute machines, which are also known as worker machines, to provision.

A positive integer greater than or equal to 2. The default value is 3.


controlPlane

The configuration for the machines that comprise the control plane.

Array of MachinePool objects. For details, see the following Machine-pool table.

controlPlane.architecture

Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

String

controlPlane.hyperthreading

Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

Enabled or Disabled

controlPlane.name

Required if you use controlPlane. The name of the machine pool.

master

controlPlane.platform

Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas

The number of control plane machines to provision.

The only supported value is 3, which is the default value.



credentialsMode

The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

NOTE

Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

Mint, Passthrough, Manual, or an empty string ("").

fips

Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

false or true

imageContentSources

Sources and repositories for the release-image content.

Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source

Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

String

imageContentSources.mirrors

Specify one or more repositories that may also contain the same images.

Array of strings

networking

The configuration for the pod network provider in the cluster.

Object

networking.clusterNetwork

The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.

Array of objects



networking.clusterNetwork.cidr

Required if you use networking.clusterNetwork. The IP block address pool.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix

Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.

Integer

networking.machineNetwork

The IP address pools for machines.

networking.machineNetwork.cidr

Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.

IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType

The type of network to install. The default is OpenShiftSDN.

String

networking.serviceNetwork

The IP address pools for services. The default is 172.30.0.0/16.

Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.



publish

How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.

Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey

The SSH key or keys to authenticate access to your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

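The clusterNetwork arithmetic can be worked through in a short sketch, using the default 10.128.0.0/14 block with hostPrefix 23 (the values come from the parameter table; the variable names are ours):

```shell
cluster_prefix=14   # clusterNetwork.cidr 10.128.0.0/14
host_prefix=23      # clusterNetwork.hostPrefix

# Each node receives a /23 block: 2^(32-23) = 512 pod addresses.
per_node=$(( 1 << (32 - host_prefix) ))
# The /14 pool holds 2^(23-14) = 512 such /23 blocks, one per node.
max_nodes=$(( 1 << (host_prefix - cluster_prefix) ))

echo "addresses per node: ${per_node}"
echo "maximum nodes:      ${max_nodes}"
```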

Table 1.13. Optional AWS parameters

Parameter Description Values

compute.platform.aws.amiID

The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops

The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

Integer, for example 4000.

compute.platform.aws.rootVolume.size

The size in GiB of the root volume.

Integer, for example 500.

compute.platform.aws.rootVolume.type

The instance type of the root volume.

Valid AWS EBS instance type, such as io1.

compute.platform.aws.type

The EC2 instance type for the compute machines.

Valid AWS instance type, such as c5.9xlarge.


compute.platform.aws.zones

The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region

The AWS region that the installation program creates compute resources in.

Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID

The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type

The EC2 instance type for the control plane machines.

Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones

The availability zones where the installation program creates machines for the control plane machine pool.

A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region

The AWS region that the installation program creates control plane resources in.

Valid AWS region, such as us-east-1.

platform.aws.amiID

The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.

Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name

The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

Valid AWS service endpoint name.



platform.aws.serviceEndpoints.url

The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

Valid AWS service endpoint URL.

platform.aws.userTags

A map of keys and values that the installation program adds as tags to all resources that it creates.

Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platformawssubnets

If you provide the VPCinstead of allowing theinstallation program to createthe VPC for you specify thesubnet for the cluster to useThe subnet must be part ofthe same machineNetwork[]cidrranges that you specify For astandard cluster specify apublic and a private subnet foreach availability zone For aprivate cluster specify aprivate subnet for eachavailability zone

Valid subnet IDs
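For illustration, the subnets parameter might be set as follows in install-config.yaml for a private cluster that spans three availability zones. This is a sketch only; the subnet IDs are placeholders, not real values.

```yaml
# Sketch only: the subnet IDs below are placeholders, not real values.
platform:
  aws:
    subnets:
    - subnet-0aaaaaaaaaaaaaaaa  # private subnet in availability zone a
    - subnet-0bbbbbbbbbbbbbbbb  # private subnet in availability zone b
    - subnet-0cccccccccccccccc  # private subnet in availability zone c
```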


1.5.7.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 11
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 12
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 13
    serviceEndpoints: 14
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
publish: Internal 18

1 10 11 15 Required. The installation program prompts you for this value.

2 Optional. Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

17 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

18 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.
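As a sketch of the simultaneous multithreading guidance above, a compute pool with hyperthreading disabled might be written as follows. The instance type and replica count are illustrative assumptions, not required values.

```yaml
# Sketch only: values are illustrative.
compute:
- hyperthreading: Disabled
  name: worker
  platform:
    aws:
      type: m5.2xlarge  # larger instance type to offset the reduced per-machine performance
  replicas: 3
```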

1.5.7.3. Configuring the cluster-wide proxy during installation



Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
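As a hedged illustration of that note, the populated field on the cluster Proxy object might look roughly like the following. The values are assumptions derived from the sample networking configuration in this chapter, not actual output from a cluster.

```yaml
# Illustrative sketch of a populated status.noProxy field; not actual output.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
status:
  noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,169.254.169.254,172.30.0.0/16,example.com
```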

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.5.8. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.


When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

1.5.9. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.5.9.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.


2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command.

1.5.9.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command.

1.5.9.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>



3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command.

1.5.10. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.5.11. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin


You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.5.12. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting

1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) into a government region. To configure the government region, modify parameters in the install-config.yaml file before you install the cluster.

1.6.1. Prerequisites

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


Review details about the OpenShift Container Platform installation and update processes.

Configure an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

If you use a firewall, you must configure it to allow the sites that your cluster requires access to.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.6.2. AWS government regions

OpenShift Container Platform supports deploying a cluster to AWS GovCloud (US) regions. AWS GovCloud is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.

These regions do not have published Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) to select, so you must upload a custom AMI that belongs to that region.

The following AWS GovCloud partitions are supported:

us-gov-west-1

us-gov-east-1

The AWS GovCloud region and custom AMI must be manually configured in the install-config.yaml file, since RHCOS AMIs are not provided by Red Hat for those regions.
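A minimal sketch of the install-config.yaml settings that this implies follows. The AMI ID is a placeholder for the custom RHCOS AMI that you uploaded to the region.

```yaml
# Sketch only: the amiID value is a placeholder for your uploaded custom RHCOS AMI.
platform:
  aws:
    region: us-gov-west-1
    amiID: ami-0aaaaaaaaaaaaaaaa
publish: Internal  # clusters in AWS government regions must be private
```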

1.6.3. Private clusters

You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet.

NOTE

Public zones are not supported in Route 53 in AWS GovCloud. Therefore, clusters must be private if they are deployed to an AWS government region.

By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.


To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

1.6.3.1. Private clusters in AWS

To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

The cluster still requires access to the Internet to access the AWS APIs.

The following items are not required or created when you install a private cluster:

Public subnets

Public load balancers, which support public ingress

A public Route 53 zone that matches the baseDomain for the cluster

The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.

1.6.3.1.1. Limitations

The ability to add public functionality to a private cluster is limited.

You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the Internet on 6443 (Kubernetes API port).

If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
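For example, applying that tag with the AWS CLI might look like the following sketch. The subnet ID and infrastructure ID are placeholders; verify the exact tag key for your cluster before running a real command.

```shell
# Sketch only: subnet ID and <cluster-infra-id> are placeholders.
$ aws ec2 create-tags \
    --resources subnet-0aaaaaaaaaaaaaaaa \
    --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared
```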

1.6.4. About using a custom VPC

In OpenShift Container Platform 4.6, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself.


1.6.4.1. Requirements for using your VPC

The installation program no longer creates the following components:

Internet gateways

NAT gateways

Subnets

Route tables

VPCs

VPC DHCP options

VPC endpoints

If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

Your VPC must meet the following characteristics:

The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

The VPC must not use the kubernetes.io/cluster/.*: owned tag.

You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.


Component / AWS type / Description

VPC
AWS::EC2::VPC
AWS::EC2::VPCEndpoint
You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Public subnets
AWS::EC2::Subnet
AWS::EC2::SubnetNetworkAclAssociation
Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Internet gateway
AWS::EC2::InternetGateway
AWS::EC2::VPCGatewayAttachment
AWS::EC2::RouteTable
AWS::EC2::Route
AWS::EC2::SubnetRouteTableAssociation
AWS::EC2::NatGateway
AWS::EC2::EIP
You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Network access control
AWS::EC2::NetworkAcl
AWS::EC2::NetworkAclEntry
You must allow the VPC to access the following ports:

Port / Reason
80: Inbound HTTP traffic
443: Inbound HTTPS traffic
22: Inbound SSH traffic
1024 - 65535: Inbound ephemeral traffic
0 - 65535: Outbound ephemeral traffic


Private subnets
AWS::EC2::Subnet
AWS::EC2::RouteTable
AWS::EC2::SubnetRouteTableAssociation
Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

1.6.4.2. VPC validation

To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

All the subnets that you specify exist.

You provide private subnets.

The subnet CIDRs belong to the machine CIDR that you specified.

You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

1.6.4.3. Division of permissions

Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs, subnets, or ingress rules.

The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, Internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

1.6.4.4. Isolation between clusters

If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:


You can install multiple OpenShift Container Platform clusters in the same VPC.

ICMP ingress is allowed from the entire network.

TCP 22 ingress (SSH) is allowed to the entire network.

Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

Control plane TCP 22623 ingress (MCS) is allowed to the entire network.

1.6.5. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.6.6. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

1.6.7. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.6.8. Manually creating the installation configuration file

When installing OpenShift Container Platform on Amazon Web Services (AWS) into a region requiring a custom Red Hat Enterprise Linux CoreOS (RHCOS) AMI, you must manually generate your installation configuration file.

Prerequisites

Obtain the OpenShift Container Platform installation program and the access token for your cluster.

Procedure

1. Create an installation directory to store your required installation assets in:

$ tar xvf openshift-install-linux.tar.gz

$ mkdir <installation_directory>

IMPORTANT

You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

2. Customize the following install-config.yaml file template and save it in the <installation_directory>.

NOTE

You must name this configuration file install-config.yaml.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
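The backup in step 3 can be as simple as copying the file before the installer consumes it. A minimal sketch; the paths are illustrative stand-ins for a real installation directory:

```shell
# Sketch: back up install-config.yaml before the installer consumes it.
# All paths here are illustrative; substitute your own directories.
install_dir=$(mktemp -d)
backup_dir=$(mktemp -d)
echo 'apiVersion: v1' > "${install_dir}/install-config.yaml"   # stand-in config
cp "${install_dir}/install-config.yaml" "${backup_dir}/install-config.yaml.bak"
test -f "${backup_dir}/install-config.yaml.bak" && echo "backup created"
```

Keeping the copy outside the installation directory matters because `openshift-install create cluster` deletes the original from that directory.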

1.6.8.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

IMPORTANT

The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

Table 1.14. Required parameters

apiVersion
  The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
  Values: String

baseDomain
  The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
  Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
  Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
  Values: Object

metadata.name
  The name of the cluster. DNS records for the cluster are all subdomains of <metadata.name>.<baseDomain>.
  Values: String of lowercase letters, hyphens (-), and periods (.), such as dev.

platform
  The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
  Values: Object

pullSecret
  Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
  Values:

  {
     "auths":{
        "cloud.openshift.com":{
           "auth":"b3Blb=",
           "email":"you@example.com"
        },
        "quay.io":{
           "auth":"b3Blb=",
           "email":"you@example.com"
        }
     }
  }
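The metadata.name rule above (lowercase letters, hyphens, periods) is easy to check up front. A sketch of that rule only; the regex is my illustration of the stated character set, not an official validator, and allowing digits is an assumption based on common cluster names:

```shell
# Check a candidate metadata.name against the character set in Table 1.14.
# The regex is an illustrative approximation; digits allowed by assumption.
name="test-cluster"
if printf '%s\n' "${name}" | grep -Eq '^[a-z0-9.-]+$'; then
  echo "ok: ${name}"
else
  echo "invalid: ${name}"
fi
# prints ok: test-cluster
```

A name such as "Dev_Cluster" would fail this check, which mirrors the table's lowercase-only rule.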


Table 1.15. Optional parameters

additionalTrustBundle
  A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
  Values: String

compute
  The configuration for the machines that comprise the compute nodes.
  Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
  Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

compute.hyperthreading
  Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

  IMPORTANT

  If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

  Values: Enabled or Disabled

compute.name
  Required if you use compute. The name of the machine pool.
  Values: worker

compute.platform
  Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
  The number of compute machines, which are also known as worker machines, to provision.
  Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
  The configuration for the machines that comprise the control plane.
  Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
  Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

controlPlane.hyperthreading
  Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.

  IMPORTANT

  If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

  Values: Enabled or Disabled

controlPlane.name
  Required if you use controlPlane. The name of the machine pool.
  Values: master

controlPlane.platform
  Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
  The number of control plane machines to provision.
  Values: The only supported value is 3, which is the default value.

credentialsMode
  The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

  NOTE

  Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

  Values: Mint, Passthrough, Manual, or an empty string ("")

fips
  Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
  Values: false or true

imageContentSources
  Sources and repositories for the release-image content.
  Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
  Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
  Values: String

imageContentSources.mirrors
  Specify one or more repositories that may also contain the same images.
  Values: Array of strings

networking
  The configuration for the pod network provider in the cluster.
  Values: Object

networking.clusterNetwork
  The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
  Values: Array of objects

networking.clusterNetwork.cidr
  Required if you use networking.clusterNetwork. The IP block address pool.
  Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
  Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
  Values: Integer

networking.machineNetwork
  The IP address pools for machines.
  Values: Array of objects

networking.machineNetwork.cidr
  Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
  Values: IP network, in CIDR notation as described for networking.clusterNetwork.cidr.

networking.networkType
  The type of network to install. The default is OpenShiftSDN.
  Values: String

networking.serviceNetwork
  The IP address pools for services. The default is 172.30.0.0/16.
  Values: Array of IP networks, in CIDR notation as described for networking.clusterNetwork.cidr.

publish
  How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
  Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
  The SSH key or keys to authenticate access to your cluster machines.

  NOTE

  For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

  Values: One or more keys. For example:

  sshKey:
    <key1>
    <key2>
    <key3>
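The hostPrefix arithmetic in the table above follows directly from CIDR notation: a node receives 2^(32 - hostPrefix) addresses. A short check of both the table's example and the default:

```shell
# Pod addresses available per node: 2^(32 - hostPrefix).
# The table's example: hostPrefix 24 -> 2^8 = 256 addresses per node.
python3 -c 'print(2 ** (32 - 24))'   # prints 256
# The default hostPrefix of 23 yields twice as many.
python3 -c 'print(2 ** (32 - 23))'   # prints 512
```

This is why shrinking hostPrefix by one doubles the per-node pool while halving how many nodes the clusterNetwork CIDR can serve.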

Table 1.16. Optional AWS parameters

compute.platform.aws.amiID
  The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI.
  Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

compute.platform.aws.rootVolume.iops
  The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.
  Values: Integer, for example 4000.

compute.platform.aws.rootVolume.size
  The size in GiB of the root volume.
  Values: Integer, for example 500.

compute.platform.aws.rootVolume.type
  The type of the root volume.
  Values: Valid AWS EBS volume type, such as io1.

compute.platform.aws.type
  The EC2 instance type for the compute machines.
  Values: Valid AWS instance type, such as c5.9xlarge.

compute.platform.aws.zones
  The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.
  Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

compute.aws.region
  The AWS region that the installation program creates compute resources in.
  Values: Any valid AWS region, such as us-east-1.

controlPlane.platform.aws.amiID
  The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI.
  Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

controlPlane.platform.aws.type
  The EC2 instance type for the control plane machines.
  Values: Valid AWS instance type, such as c5.9xlarge.

controlPlane.platform.aws.zones
  The availability zones where the installation program creates machines for the control plane machine pool.
  Values: A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

controlPlane.aws.region
  The AWS region that the installation program creates control plane resources in.
  Values: Valid AWS region, such as us-east-1.

platform.aws.amiID
  The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI.
  Values: Any published or custom RHCOS AMI that belongs to the set AWS region.

platform.aws.serviceEndpoints.name
  The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.
  Values: Valid AWS service endpoint name.

platform.aws.serviceEndpoints.url
  The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.
  Values: Valid AWS service endpoint URL.

platform.aws.userTags
  A map of keys and values that the installation program adds as tags to all resources that it creates.
  Values: Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

platform.aws.subnets
  If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.
  Values: Valid subnet IDs.

1.6.8.2. Sample customized install-config.yaml file for AWS

You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.

IMPORTANT

This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-gov-west-1a
      - us-gov-west-1b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      type: m5.xlarge
  replicas: 3
compute: 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 9
      type: c5.4xlarge
      zones:
      - us-gov-west-1c
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-gov-west-1
    userTags:
      adminContact: jdoe
      costCenter: 7536
    subnets: 11
    - subnet-1
    - subnet-2
    - subnet-3
    amiID: ami-96c6f8f7 12
    serviceEndpoints: 13
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
publish: Internal 17

1 10 14 Required.

2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

3 7 If you do not provide these parameters and values, the installation program provides the default value.

4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.

5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.

11 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.

12 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.

13 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.

15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

16 You can optionally provide the sshKey value that you use to access the machines in your cluster.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

17 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External.
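Callout 4's distinction is worth internalizing: controlPlane is a single mapping, while compute is a sequence of mappings. A sketch using Python, with a hand-built, trimmed-down stand-in for the parsed install-config.yaml (not the installer's own data model):

```shell
python3 - <<'EOF'
# Trimmed stand-in for a parsed install-config.yaml: controlPlane is one
# mapping, compute is a list of mappings (hence the leading "-" in YAML).
config = {
    "controlPlane": {"name": "master", "replicas": 3},
    "compute": [{"name": "worker", "replicas": 3}],
}
assert isinstance(config["controlPlane"], dict)
assert isinstance(config["compute"], list)
print("controlPlane is a mapping; compute is a sequence of mappings")
EOF
```

Writing `controlPlane:` with a leading hyphen, or `compute:` without one, produces a type mismatch when the installer parses the file.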

1.6.8.3. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI, and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

1.6.8.4. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account.

You created an Amazon S3 bucket with the required IAM service role.

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

Procedure

1. Export your AWS profile as an environment variable:

$ export AWS_PROFILE=<aws_profile> 1

1 The AWS profile name that holds your AWS credentials, like govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

$ export AWS_DEFAULT_REGION=<aws_region> 1

1 The AWS region, like us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

$ export RHCOS_VERSION=<version> 1

1 The RHCOS VMDK version, like 4.6.0.
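These exported variables are interpolated into the S3 key, snapshot description, and AMI name in the steps that follow, so it pays to confirm the composed names before running any AWS commands. A sketch with illustrative values (substitute your own profile, region, and version):

```shell
# Illustrative values only; substitute your own.
export AWS_PROFILE=govcloud
export AWS_DEFAULT_REGION=us-gov-east-1
export RHCOS_VERSION=4.6.0
# The later steps build the S3 key and AMI name from RHCOS_VERSION:
echo "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
# prints rhcos-4.6.0-x86_64-aws.x86_64.vmdk
```

If the echoed key does not match the object you actually uploaded to S3, the later import-snapshot call will fail with a not-found error.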


4. Export the Amazon S3 bucket name as an environment variable:

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

5. Create the containers.json file and define your RHCOS VMDK file:

$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF

6. Import the RHCOS disk as an Amazon EBS snapshot:

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2

1 The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.
2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

7. Check the status of the image import:

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

Example output

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 8190566400,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
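The heredoc in step 5 expands the exported variables at file-creation time, so it is worth confirming the generated containers.json is valid JSON with the expected values. A self-contained sketch using stand-in values; a real run uses the variables exported in steps 1 through 4:

```shell
# Stand-in values; a real run uses the variables from steps 1-4.
export RHCOS_VERSION=4.6.0
export VMIMPORT_BUCKET_NAME=my-vmimport-bucket
cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF
# Confirm the file parses and the bucket was interpolated correctly.
python3 -c 'import json; print(json.load(open("containers.json"))["UserBucket"]["S3Bucket"])'
# prints my-vmimport-bucket
```

Because the heredoc delimiter EOF is unquoted, `${RHCOS_VERSION}` and `${VMIMPORT_BUCKET_NAME}` are expanded by the shell; quoting it as 'EOF' would write the literal variable names into the file and break the import.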

Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \ 1
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4

1 The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.
2 The Description from the imported snapshot.
3 The name of the RHCOS AMI.
4 The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.6.8.5. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.
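The subdomain rule for noProxy entries (a leading dot matches subdomains only) can be sketched in a few lines. This mimics the behavior as documented above; it is not the proxy implementation itself, and the helper function is hypothetical:

```shell
python3 - <<'EOF'
# Sketch of the documented noProxy matching rule: ".y.com" matches
# "x.y.com" but not "y.com"; a bare domain matches itself.
# Illustrative helper only, not the actual proxy code.
def bypassed(host, entry):
    if entry.startswith("."):
        return host.endswith(entry)
    return host == entry or host.endswith("." + entry)

assert bypassed("x.y.com", ".y.com")
assert not bypassed("y.com", ".y.com")
assert bypassed("example.com", "example.com")
print("noProxy matching behaves as documented")
EOF
```

The practical consequence: to exempt both a domain and its subdomains, list the bare domain; to exempt only subdomains, use the dotted form.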

1.6.9. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster deployment:

$ openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

NOTE

The elevated permissions provided by the AdministratorAccess policy are required only during installation.

1.6.10. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.6.10.1. Installing the OpenShift CLI on Linux
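Because the kubeadmin password is also written to the install log, it can be recovered later without rerunning the installer. A sketch using a stand-in log file; the real <installation_directory>/.openshift_install.log is written by openshift-install and contains the same INFO line:

```shell
# Stand-in for <installation_directory>/.openshift_install.log.
mkdir -p install_dir
echo 'INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"' > install_dir/.openshift_install.log
# Recover the password from the log after the terminal output is gone.
grep -o 'password: "[^"]*"' install_dir/.openshift_install.log
# prints password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
```

Treat this log as sensitive: it holds working cluster credentials until you rotate or remove the kubeadmin user.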


You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive.

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command.

1.6.10.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command.

1.6.10.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

$ tar xvzf <file>

$ echo $PATH

$ oc <command>

C:\> path

C:\> oc <command>


Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command.

1611 Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to thecorrect cluster and API server The file is specific to a cluster and is created during OpenShift ContainerPlatform installation

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.6.12. Logging in to the cluster by using the web console

$ echo $PATH

$ oc <command>

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

$ oc whoami

system:admin

OpenShift Container Platform 4.6 Installing on AWS


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

1.6.13. Next steps

Validating an installation

Customize your cluster

If necessary, you can opt out of remote health reporting.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None


1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.7.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall, you configured it to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.7.2. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The


Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has Internet access and you do not disable Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.7.3. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node


Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.7.3.1. Cluster machines

You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

Table 1.17. Instance types for machines

Instance type   Bootstrap   Control plane   Compute

i3.large        x
m4.large                                    x
m4.xlarge                   x               x
m4.2xlarge                  x               x
m4.4xlarge                  x               x
m4.8xlarge                  x               x
m4.10xlarge                 x               x
m4.16xlarge                 x               x
m5.large                                    x
m5.xlarge                   x               x
m5.2xlarge                  x               x
m5.4xlarge                  x               x
m5.8xlarge                  x               x
m5.10xlarge                 x               x
m5.16xlarge                 x               x
c4.large                                    x
c4.xlarge                                   x
c4.2xlarge                  x               x
c4.4xlarge                  x               x
c4.8xlarge                  x               x
r4.large                                    x
r4.xlarge                   x               x
r4.2xlarge                  x               x
r4.4xlarge                  x               x
r4.8xlarge                  x               x
r4.16xlarge                 x               x

You might be able to use other instance types that meet the specifications of these instance types.

1.7.3.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials, because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
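A common approval pattern is to list the CSRs that have no status yet and approve each one by name with oc adm certificate approve. The sketch below stubs oc with two hypothetical CSR names so that it is self-contained; against a real cluster, delete the stub and use the real oc binary.

```shell
# Stub of oc returning two hypothetical pending CSR names; remove this
# function to run the pipeline below against a real cluster.
oc() {
  case "$1" in
    get) printf 'csr-8b2mt\ncsr-9xk4z\n' ;;
    adm) echo "certificatesigningrequest.certificates.k8s.io/$4 approved" ;;
  esac
}

# List pending CSRs (those with no .status yet) and approve each one by name.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | while read -r csr; do
      oc adm certificate approve "$csr"
    done
```

The go-template filter selects only CSRs that have not yet been approved or denied; a verification step like this is typically repeated once for the kubelet client CSRs and once for the serving CSRs.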

1.7.3.3. Other infrastructure components

A VPC

DNS entries


Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups

IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
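The three endpoint names above follow one pattern per region; a small sketch that derives them (us-east-1 is only a placeholder value):

```shell
# Derive the three required VPC endpoint names for a given region.
# us-east-1 is a placeholder; substitute the region your cluster uses.
region=us-east-1
for svc in ec2 elasticloadbalancing s3; do
  echo "${svc}.${region}.amazonaws.com"
done
```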

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:
- 80: Inbound HTTP traffic
- 443: Inbound HTTPS traffic
- 22: Inbound SSH traffic
- 1024 - 65535: Inbound ephemeral traffic
- 0 - 65535: Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
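The two API records described above can be written out per cluster; a small sketch with placeholder values (mycluster and example.com are assumptions, not real names):

```shell
# Derive the two required API DNS names from a cluster name and base domain.
# The values below are placeholders; substitute your own.
cluster_name=mycluster
domain=example.com
echo "api.${cluster_name}.${domain}      -> external load balancer (port 6443)"
echo "api-int.${cluster_name}.${domain}  -> internal load balancer (ports 6443 and 22623)"
```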

Component: DNS
AWS type: AWS::Route53::HostedZone
Description: The hosted zone for your internal DNS.

Component: etcd record sets
AWS type: AWS::Route53::RecordSet
Description: The registration records for etcd for your control plane machines.

Component: Public load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your public subnets.

Component: External API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the external API server.

Component: External listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the external load balancer.

Component: External target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the external load balancer.

Component: Private load balancer
AWS type: AWS::ElasticLoadBalancingV2::LoadBalancer
Description: The load balancer for your private subnets.

Component: Internal API server record
AWS type: AWS::Route53::RecordSetGroup
Description: Alias records for the internal API server.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 22623 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Component: Internal listener
AWS type: AWS::ElasticLoadBalancingV2::Listener
Description: A listener on port 6443 for the internal load balancer.

Component: Internal target group
AWS type: AWS::ElasticLoadBalancingV2::TargetGroup
Description: The target group for the internal load balancer.

Security groups

The control plane and worker machines require access to the following ports:

Group: MasterSecurityGroup
Type: AWS::EC2::SecurityGroup
IP protocol and port range: icmp 0; tcp 22; tcp 6443; tcp 22623

Group: WorkerSecurityGroup
Type: AWS::EC2::SecurityGroup
IP protocol and port range: icmp 0; tcp 22

Group: BootstrapSecurityGroup
Type: AWS::EC2::SecurityGroup
IP protocol and port range: tcp 22; tcp 19531

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

MasterIngressEtcd: etcd (tcp, ports 2379 - 2380)

MasterIngressVxlan: Vxlan packets (udp, port 4789)

MasterIngressWorkerVxlan: Vxlan packets (udp, port 4789)

MasterIngressInternal: Internal cluster communication and Kubernetes proxy metrics (tcp, ports 9000 - 9999)

MasterIngressWorkerInternal: Internal cluster communication (tcp, ports 9000 - 9999)

MasterIngressKube: Kubernetes kubelet, scheduler, and controller manager (tcp, ports 10250 - 10259)

MasterIngressWorkerKube: Kubernetes kubelet, scheduler, and controller manager (tcp, ports 10250 - 10259)

MasterIngressIngressServices: Kubernetes Ingress services (tcp, ports 30000 - 32767)

MasterIngressWorkerIngressServices: Kubernetes Ingress services (tcp, ports 30000 - 32767)

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

WorkerIngressVxlan: Vxlan packets (udp, port 4789)

WorkerIngressWorkerVxlan: Vxlan packets (udp, port 4789)

WorkerIngressInternal: Internal cluster communication (tcp, ports 9000 - 9999)

WorkerIngressWorkerInternal: Internal cluster communication (tcp, ports 9000 - 9999)

WorkerIngressKube: Kubernetes kubelet, scheduler, and controller manager (tcp, port 10250)

WorkerIngressWorkerKube: Kubernetes kubelet, scheduler, and controller manager (tcp, port 10250)

WorkerIngressIngressServices: Kubernetes Ingress services (tcp, ports 30000 - 32767)

WorkerIngressWorkerIngressServices: Kubernetes Ingress services (tcp, ports 30000 - 32767)

Roles and instance profiles

You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role: Master
Allow ec2:*
Allow elasticloadbalancing:*
Allow iam:PassRole
Allow s3:GetObject

Role: Worker
Allow ec2:Describe*

Role: Bootstrap
Allow ec2:Describe*
Allow ec2:AttachVolume
Allow ec2:DetachVolume
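Each of these roles is attached to its EC2 instances through an instance profile, which means the role must trust the EC2 service. As a point of reference, a minimal sketch of the trust policy such a role typically carries (this exact document is not part of the templates shown here, so treat it as an assumption):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```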

1.7.3.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions.

Example 1.12. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.13. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.14. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener


Example 1.15. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.16. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.17. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.18. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.19. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.20. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.

Example 1.21. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.22. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas


1.7.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.

1.7.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.


$ tar xvf openshift-install-linux.tar.gz


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

Example output

3. Add your SSH private key to the ssh-agent:

Example output

Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.7.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

$ eval $(ssh-agent -s)

Agent pid 31874

$ ssh-add <path>/<file_name> 1

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)


customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.7.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage

$ mkdir $HOME/clusterconfig

$ openshift-install create manifests --dir $HOME/clusterconfig
? SSH Public Key
$ ls $HOME/clusterconfig/openshift/
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml


device on the worker systems, and set the storage size as appropriate. This attaches storage to a separate /var directory.

1 The storage device name of the disk that you want to partition.

2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      disks:
      - device: /dev/<device_name> 1
        partitions:
        - sizeMiB: <partition_size>
          startMiB: <partition_start_offset> 2
          label: var
      filesystems:
      - path: /var
        device: /dev/disk/by-partlabel/var
        format: xfs
    systemd:
      units:
      - name: var.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          Where=/var
          What=/dev/disk/by-partlabel/var
          [Install]
          WantedBy=local-fs.target

$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign


1762 Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deployyour cluster

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisionedinfrastructure and the pull secret for your cluster

You checked that you are deploying your cluster to a region with an accompanying Red HatEnterprise Linux CoreOS (RHCOS) AMI published by Red Hat If you are deploying to a regionthat requires a custom AMI such as an AWS GovCloud region you must create the install-configyaml file manually

Procedure

1 Create the install-configyaml file

a Change to the directory that contains the installation program and run the followingcommand

For ltinstallation_directorygt specify the directory name to store the files that theinstallation program creates

IMPORTANT

Specify an empty directory Some installation assets like bootstrap X509certificates have short expiration intervals so you must not reuse aninstallation directory If you want to reuse individual files from another clusterinstallation you can copy them into your directory However the file namesfor the installation assets might change between releases Use caution whencopying installation files from an earlier OpenShift Container Platformversion

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select aws as the platform to target.

iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.


$ openshift-install create install-config --dir=<installation_directory> 1

CHAPTER 1 INSTALLING ON AWS


NOTE

The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

iv. Select the AWS region to deploy the cluster to.

v. Select the base domain for the Route 53 service that you configured for your cluster.

vi. Enter a descriptive name for your cluster.

vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Optional: Back up the install-config.yaml file.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

Additional resources

See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.
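The ~/.aws/credentials file mentioned above uses INI format, so you can check which profiles are available before you run the installation program. A minimal sketch using only the Python standard library; the profile names and key values here are placeholders, not values from this document:

```python
import configparser
import io

# Placeholder contents modeled on a typical ~/.aws/credentials file.
sample = """
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = examplesecret

[myprofile]
aws_access_key_id = AKIAEXAMPLE2
aws_secret_access_key = examplesecret2
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(sample))

# The section names are the profiles that tools reading this file can load.
profiles = config.sections()
print(profiles)  # ['default', 'myprofile']
```

In practice you would pass the real path, for example config.read(os.path.expanduser("~/.aws/credentials")).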

1.7.6.3. Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

Prerequisites

You have an existing install-config.yaml file.

You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

NOTE

The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
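The status.noProxy merge described in this note can be sketched as simple list concatenation. This is only an illustration of the documented behavior, not the operator's actual code, and all CIDR values are placeholders:

```python
# Illustrative sketch of how status.noProxy is assembled on AWS.
machine_network = ["10.0.0.0/16"]     # networking.machineNetwork[].cidr
cluster_network = ["10.128.0.0/14"]   # networking.clusterNetwork[].cidr
service_network = ["172.30.0.0/16"]   # networking.serviceNetwork[]
spec_no_proxy = ["example.com"]       # user-provided spec.noProxy entries
metadata_endpoint = ["169.254.169.254"]  # added automatically on AWS/GCP/Azure/RHOSP

status_no_proxy = sorted(set(
    spec_no_proxy + machine_network + cluster_network
    + service_network + metadata_endpoint
))
print(",".join(status_no_proxy))
```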


Procedure

1. Edit your install-config.yaml file and add the proxy settings. For example:

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.

2 A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.

3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.

4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

NOTE

The installation program does not support the proxy readinessEndpoints field.

2. Save the file and reference it when installing OpenShift Container Platform.

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.


apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----


NOTE

Only the Proxy object named cluster is supported, and no additional proxies can be created.

1.7.6.4. Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

Example output

1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

$ openshift-install create manifests --dir=<installation_directory> 1

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml


Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

1 2 Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

1 For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

$ openshift-install create ignition-configs --dir=<installation_directory> 1


1.7.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites

You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

You generated the Ignition config files for your cluster.

You installed the jq package.

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

1 The output of this command is your cluster name and a random string.
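If jq is not installed, the same value can be read with any JSON parser. For example, a short Python equivalent of the jq command above; the metadata content shown is a placeholder matching the example output, not a real file:

```python
import json

# Placeholder for the content of <installation_directory>/metadata.json.
metadata = json.loads('{"clusterName": "openshift", "infraID": "openshift-vw9j6"}')

# Equivalent of: jq -r .infraID <installation_directory>/metadata.json
print(metadata["infraID"])  # openshift-vw9j6
```

Against a real installation directory you would use json.load(open("<installation_directory>/metadata.json")) instead.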

1.7.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

$ jq -r .infraID <installation_directory>/metadata.json 1

openshift-vw9j6 1


You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

1 The CIDR block for the VPC.

2 Specify a CIDR block in the format x.x.x.x/16-24.

3 The number of availability zones to deploy the VPC in.

4 Specify an integer between 1 and 3.

5 The size of each subnet in each availability zone.

6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
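The SubnetBits value counts host bits per subnet, so the resulting prefix length is 32 minus that value, which is how 5 maps to /27 and 13 maps to /19. A quick check of that arithmetic with Python's ipaddress module (the CIDR values are the template defaults from this section):

```python
import ipaddress

def subnet_prefix(subnet_bits: int) -> int:
    # CloudFormation's Cidr intrinsic takes a host-bit count; prefix = 32 - bits.
    return 32 - subnet_bits

print(subnet_prefix(5))   # 27
print(subnet_prefix(13))  # 19

# With the default VpcCidr of 10.0.0.0/16 and SubnetBits of 12, each subnet is a /20.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=subnet_prefix(12)))
print(subnets[0])  # 10.0.0.0/20
```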

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line.

[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": 1 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": 12 6
  }
]

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3


1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

VpcId
The ID of your VPC.

PublicSubnetIds
The IDs of the new public subnets.

PrivateSubnetIds
The IDs of the new private subnets.
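Because aws cloudformation describe-stacks returns JSON, these output values can also be pulled out programmatically instead of being copied by hand. A sketch over a trimmed, placeholder response (the IDs are not from a real stack):

```python
import json

# Trimmed, placeholder describe-stacks response.
response = json.loads("""
{"Stacks": [{"StackStatus": "CREATE_COMPLETE",
  "Outputs": [
    {"OutputKey": "VpcId", "OutputValue": "vpc-0123456789abcdef0"},
    {"OutputKey": "PublicSubnetIds", "OutputValue": "subnet-aaa,subnet-bbb"},
    {"OutputKey": "PrivateSubnetIds", "OutputValue": "subnet-ccc,subnet-ddd"}]}]}
""")

stack = response["Stacks"][0]
assert stack["StackStatus"] == "CREATE_COMPLETE"

# Turn the Outputs list into a simple key/value mapping.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["VpcId"])  # vpc-0123456789abcdef0
```

The subnet outputs are comma-separated lists, so outputs["PublicSubnetIds"].split(",") yields the individual subnet IDs.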

1.7.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.23. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"


  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId: !GetAtt
      - EIP
      - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId: !GetAtt
      - EIP2
      - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId: !GetAtt
      - EIP3
      - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc


  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ''
      - - com.amazonaws.
        - !Ref 'AWS::Region'
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [
        ",",
        [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
      ]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [
        ",",
        [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
      ]


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

1 For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
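With --output json instead of the text format shown above, the zone ID can be extracted by splitting the Id field, which has the form /hostedzone/<ID>. A sketch over a trimmed, placeholder response built from the example values in this section:

```python
import json

# Trimmed, placeholder list-hosted-zones-by-name response.
response = json.loads("""
{"HostedZones": [
  {"Id": "/hostedzone/Z21IXYZABCZ2A4", "Name": "mycluster.example.com."}]}
""")

zone = response["HostedZones"][0]
# The Id field embeds the zone ID after the /hostedzone/ prefix.
zone_id = zone["Id"].split("/")[-1]
print(zone_id)  # Z21IXYZABCZ2A4
```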

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1

mycluster.example.com.  False  100
HOSTEDZONES  65F8F38E-2268-B835-E15C-AB55336FCBFA  /hostedzone/Z21IXYZABCZ2A4  mycluster.example.com.  10


1 A short, representative cluster name to use for host names, etc.

2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

5 The Route 53 public zone ID to register the targets with.

6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

7 The Route 53 zone to register the targets with.

8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

9 The public subnets that you created for your VPC.

[
  {
    "ParameterKey": "ClusterName", 1
    "ParameterValue": "mycluster" 2
  },
  {
    "ParameterKey": "InfrastructureName", 3
    "ParameterValue": "mycluster-<random_string>" 4
  },
  {
    "ParameterKey": "HostedZoneId", 5
    "ParameterValue": "<random_string>" 6
  },
  {
    "ParameterKey": "HostedZoneName", 7
    "ParameterValue": "example.com" 8
  },
  {
    "ParameterKey": "PublicSubnets", 9
    "ParameterValue": "subnet-<random_string>" 10
  },
  {
    "ParameterKey": "PrivateSubnets", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "VpcId", 13
    "ParameterValue": "vpc-<random_string>" 14
  }
]


10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

11 The private subnets that you created for your VPC.

12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

13 The VPC that you created for the cluster.

14 Specify the VpcId value from the output of the CloudFormation template for the VPC.

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line.

1 <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183


After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

PrivateHostedZoneId
Hosted zone ID for the private DNS.

ExternalApiLoadBalancerName
Full name of the external API load balancer.

InternalApiLoadBalancerName
Full name of the internal API load balancer.

ApiServerDnsName
Full host name of the API server.

RegisterNlbIpTargetsLambda
Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn
ARN of external API target group.

InternalApiTargetGroupArn
ARN of internal API target group.

InternalServiceTargetGroupArn
ARN of internal service target group.

1.7.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.24. CloudFormation template for the network and load balancers

$ aws cloudformation describe-stacks --stack-name ltnamegt

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [
            ".",
            ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [
            ".",
            ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
          ]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: ExternalApiTargetGroup
      LoadBalancerArn:
        Ref: ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalApiTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: InternalServiceTargetGroup
      LoadBalancerArn:
        Ref: IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId:
        Ref: VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets",
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - RegisterTargetLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags"
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags"
              ]
            Resource: "*"

CHAPTER 1 INSTALLING ON AWS

189

  RegisterSubnetTags:
    Type: AWS::Lambda::Function
    Properties:
      Handler: "index.handler"
      Role:
        Fn::GetAtt:
        - "RegisterSubnetTagsLambdaIamRole"
        - "Arn"
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]);
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]);
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup

IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

You can view details about your hosted zones by navigating to the AWS Route 53 console.

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.7.10. Creating security groups and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.



Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 The CIDR block for the VPC.

4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.

5 The private subnets that you created for your VPC.

6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

7 The VPC that you created for the cluster.

8 Specify the VpcId value from the output of the CloudFormation template for the VPC.
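Because the parameter file is plain JSON, you can also generate it from a script instead of editing it by hand. A minimal sketch in Python — the infrastructure name, subnet IDs, and VPC ID below are placeholder values, not output from a real stack:

```python
import json

# Hypothetical values; substitute the outputs from your own VPC stack.
values = {
    "InfrastructureName": "mycluster-8qnf2",
    "VpcCidr": "10.0.0.0/16",
    "PrivateSubnets": "subnet-0a1b2c3d,subnet-4e5f6a7b,subnet-8c9d0e1f",
    "VpcId": "vpc-0123456789abcdef0",
}

# CloudFormation expects a list of ParameterKey/ParameterValue pairs.
parameters = [
    {"ParameterKey": k, "ParameterValue": v} for k, v in values.items()
]

with open("security-parameters.json", "w") as f:
    json.dump(parameters, f, indent=2)
```

The resulting file can be passed to `aws cloudformation create-stack` with `--parameters file://security-parameters.json`.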


[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },
  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]


2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line.

1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>


MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile
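The describe-stacks response is JSON, so extracting these values can be scripted rather than copied by hand. A sketch that parses a response shaped like the AWS CLI output — the stack outputs shown here are hypothetical:

```python
import json

# Example payload shaped like `aws cloudformation describe-stacks` output;
# the security group IDs and profile names are placeholders.
describe_stacks = json.loads("""
{
  "Stacks": [
    {
      "StackName": "cluster-sec",
      "StackStatus": "CREATE_COMPLETE",
      "Outputs": [
        {"OutputKey": "MasterSecurityGroupId", "OutputValue": "sg-0aaa"},
        {"OutputKey": "WorkerSecurityGroupId", "OutputValue": "sg-0bbb"},
        {"OutputKey": "MasterInstanceProfile", "OutputValue": "cluster-sec-MasterInstanceProfile"},
        {"OutputKey": "WorkerInstanceProfile", "OutputValue": "cluster-sec-WorkerInstanceProfile"}
      ]
    }
  ]
}
""")

stack = describe_stacks["Stacks"][0]
# Collapse the output list into a dict keyed by OutputKey for easy lookup.
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
if stack["StackStatus"] == "CREATE_COMPLETE":
    print(outputs["MasterSecurityGroupId"])
```

The same pattern works for every stack in this procedure, since they all expose their results through the Outputs list.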

1.7.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.25. CloudFormation template for security objects

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.18. RHCOS AMIs

AWS zone	AWS AMI

af-south-1	ami-09921c9c1c36e695c

ap-east-1	ami-01ee8446e9af6b197

ap-northeast-1	ami-04e5b5722a55846ea

ap-northeast-2	ami-0fdc25c8a0273a742

ap-south-1	ami-09e3deb397cc526a8

ap-southeast-1	ami-0630e03f75e02eec4

ap-southeast-2	ami-069450613262ba03c

ca-central-1	ami-012518cdbd3057dfd

eu-central-1	ami-0bd7175ff5b1aef0c

eu-north-1	ami-06c9ec42d0a839ad2

eu-south-1	ami-0614d7440a0363d71

eu-west-1	ami-01b89df58b5d4d5fa


eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1


1.7.11.1. AWS regions without a published RHCOS AMI

You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government region.

If you are deploying to a non-government region that does not have a published RHCOS AMI, and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.

A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.
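A custom AMI is configured through the install-config.yaml file. A minimal sketch of what that stanza can look like in OpenShift Container Platform 4.6 — the region and AMI ID below are placeholder values:

```yaml
# Fragment of install-config.yaml; region and amiID are placeholders.
platform:
  aws:
    region: us-gov-west-1
    amiID: ami-0123456789abcdef0
```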

1.7.11.2. Uploading a custom RHCOS AMI in AWS

If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.

Prerequisites

You configured an AWS account.

You created an Amazon S3 bucket with the required IAM service role.

You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.


You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
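The "highest version that is less than or equal to" rule from the prerequisites can be expressed as a small comparison over dotted version strings. A sketch — the version numbers below are illustrative, not a real list of published VMDKs:

```python
def pick_rhcos_version(available, target):
    """Return the highest available version that does not exceed target."""
    def key(v):
        # Compare versions numerically, component by component.
        return tuple(int(part) for part in v.split("."))
    candidates = [v for v in available if key(v) <= key(target)]
    return max(candidates, key=key) if candidates else None

# Illustrative version list; substitute the VMDK versions you actually mirrored.
print(pick_rhcos_version(["4.5.6", "4.6.1", "4.6.8", "4.7.0"], "4.6.8"))
```

Plain string comparison would mis-order versions like 4.6.10 and 4.6.9, which is why the sketch compares numeric tuples.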

Procedure

1. Export your AWS profile as an environment variable:

1 The AWS profile name that holds your AWS credentials, like govcloud.

2. Export the region to associate with your custom AMI as an environment variable:

1 The AWS region, like us-gov-east-1.

3. Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:

1 The RHCOS VMDK version, like 4.6.0.

4. Export the Amazon S3 bucket name as an environment variable:

5. Create the containers.json file and define your RHCOS VMDK file:

6. Import the RHCOS disk as an Amazon EBS snapshot:

1 The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.

2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.

$ export AWS_PROFILE=<aws_profile> 1

$ export AWS_DEFAULT_REGION=<aws_region> 1

$ export RHCOS_VERSION=<version> 1

$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>

$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF

$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2


7. Check the status of the image import:

Example output

Copy the SnapshotId to register the image.

8. Create a custom RHCOS AMI from the RHCOS snapshot:

1 The RHCOS VMDK architecture type, like x86_64, s390x, or ppc64le.

2 The Description from the imported snapshot.

3 The name of the RHCOS AMI.

4 The SnapshotID from the imported snapshot.

To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

1.7.12. Creating the bootstrap node in AWS

$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
                "DiskImageSize": 8190566400.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.6.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
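Once the task reports completed, the SnapshotId can be pulled out of the response programmatically instead of copied by eye. A sketch that parses a response shaped like the example output, using the same IDs:

```python
import json

# Response shaped like `aws ec2 describe-import-snapshot-tasks` output;
# trimmed to the fields this sketch actually reads.
response = json.loads("""
{
  "ImportSnapshotTasks": [
    {
      "Description": "rhcos-4.6.0-x86_64-aws.x86_64",
      "ImportTaskId": "import-snap-fh6i8uil",
      "SnapshotTaskDetail": {
        "Status": "completed",
        "SnapshotId": "snap-06331325870076318"
      }
    }
  ]
}
""")

detail = response["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
# Only expose the snapshot ID once the import has finished.
snapshot_id = detail["SnapshotId"] if detail["Status"] == "completed" else None
print(snapshot_id)
```

The snapshot ID feeds directly into the `--block-device-mappings` argument of the register-image command.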

$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \ 1
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4


You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.


a. Create the bucket:

1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

b. Upload the bootstrap.ign Ignition config file to the bucket:

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

Example output

2. Create a JSON file that contains the parameter values that the template requires:

$ aws s3 mb s3://<cluster-name>-infra 1

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

$ aws s3 ls s3://<cluster-name>-infra/

2019-04-03 16:15:16     314878 bootstrap.ign

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },



1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4 Specify a valid AWS::EC2::Image::Id value.

5 CIDR block to allow SSH access to the bootstrap node.

6 Specify a CIDR block in the format x.x.x.x/16-24.

7 The public subnet that is associated with your VPC to launch the bootstrap node into.

8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9 The master security group ID (for registering temporary rules).

10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11 The VPC that the created resources will belong to.

  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]


12 Specify the VpcId value from the output of the CloudFormation template for the VPC.

13 Location to fetch bootstrap Ignition config file from.

14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15 Whether or not to register a network load balancer (NLB).

16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17 The ARN for NLB IP target registration lambda group.

18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19 The ARN for external API load balancer target group.

20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21 The ARN for internal API load balancer target group.

22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23 The ARN for internal service load balancer target group.

24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
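The arn:aws-us-gov substitution described in the callouts above amounts to choosing an ARN partition based on the region you deploy to. A sketch of that rule — the partition prefixes follow standard AWS conventions, while the account number and resource names below are placeholders:

```python
def arn_partition(region):
    """Return the ARN partition prefix for a given AWS region."""
    if region.startswith("us-gov-"):
        return "aws-us-gov"   # AWS GovCloud regions
    if region.startswith("cn-"):
        return "aws-cn"       # AWS China regions
    return "aws"              # standard partition

# Placeholder account number and resource names, for illustration only.
region = "us-gov-west-1"
lambda_arn = (
    f"arn:{arn_partition(region)}:lambda:{region}:123456789012:"
    "function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>"
)
print(lambda_arn)
```

Deriving the partition this way avoids hand-editing every ARN in the parameter file when switching between commercial and GovCloud deployments.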

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4

1. <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4. You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
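If you are scripting the installation, the same call can be made through boto3. The sketch below only builds the keyword arguments for CloudFormation's create_stack call; the stack name and parameter values are hypothetical:

```python
def build_create_stack_args(stack_name, template_body, parameters, named_iam=False):
    """Build keyword arguments for boto3's CloudFormation create_stack call.

    `parameters` is a plain dict; CloudFormation expects a list of
    {"ParameterKey": ..., "ParameterValue": ...} objects.
    """
    args = {
        "StackName": stack_name,
        "TemplateBody": template_body,
        "Parameters": [
            {"ParameterKey": k, "ParameterValue": v} for k, v in parameters.items()
        ],
    }
    if named_iam:
        # The bootstrap template creates IAM roles and instance profiles,
        # so the CAPABILITY_NAMED_IAM capability must be acknowledged.
        args["Capabilities"] = ["CAPABILITY_NAMED_IAM"]
    return args

# Hypothetical values for illustration only.
args = build_create_stack_args(
    "cluster-bootstrap",
    "...template YAML...",
    {"AutoRegisterELB": "yes"},
    named_iam=True,
)
print(sorted(args))
```

You would pass these arguments to boto3.client("cloudformation").create_stack(**args); omitting the capability when the template creates named IAM resources makes the call fail, which is why the CLI command above declares it explicitly.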

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

BootstrapInstanceId
The bootstrap Instance ID.

BootstrapPublicIp
The bootstrap node public IP address.

BootstrapPrivateIp
The bootstrap node private IP address.

1.7.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.26. CloudFormation template for the bootstrap machine

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - {
          S3Loc: !Ref BootstrapIgnitionLocation
        }

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID.
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.7.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20
  },
  {
    "ParameterKey": "CertificateAuthorities", 21
    "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22
  },
  {
    "ParameterKey": "MasterInstanceProfileName", 23
    "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24
  },
  {
    "ParameterKey": "MasterInstanceType", 25
    "ParameterValue": "m4.xlarge" 26
  },
  {
    "ParameterKey": "AutoRegisterELB", 27
    "ParameterValue": "yes" 28
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 31
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 33
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 35
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36
  }
]

1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.

4. Specify an AWS::EC2::Image::Id value.

5. Whether or not to perform DNS etcd registration.

6. Specify yes or no. If you specify yes, you must provide hosted zone information.

7. The Route 53 private zone ID to register the etcd targets with.

8. Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.

9. The Route 53 zone to register the targets with.

10. Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

11, 13, 15. A subnet, preferably private, to launch the control plane machines on.

12, 14, 16. Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

17. The master security group ID to associate with master nodes.

18. Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

19. The location to fetch the control plane Ignition config file from.

20. Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.

21. The base64 encoded certificate authority string to use.

22. Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

23. The IAM profile to associate with master nodes.

24. Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

25. The type of AWS instance to use for the control plane machines.

26. Allowed values:

m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
m5.xlarge
m5.2xlarge
m5.4xlarge
m5.8xlarge
m5.10xlarge
m5.16xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, specify an m5 type, such as m5.xlarge, instead.

27. Whether or not to register a network load balancer (NLB).

28. Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

29. The ARN for the NLB IP target registration lambda group.

30. Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

31. The ARN for the external API load balancer target group.

32. Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

33. The ARN for the internal API load balancer target group.

34. Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

35. The ARN for the internal service load balancer target group.

36. Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
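The CertificateAuthorities value described above is embedded in the installer-generated master.ign file as a data URL. Rather than copying the long string by hand, you can pull it out of the Ignition document. A minimal sketch, assuming the Ignition v3.1 layout that the installer produces (the document below is synthetic, for illustration only):

```python
def certificate_authority_source(ign: dict) -> str:
    # The installer-generated master.ign embeds the cluster CA as a data URL
    # under ignition.security.tls.certificateAuthorities; that string is what
    # the CertificateAuthorities parameter expects.
    cas = ign["ignition"]["security"]["tls"]["certificateAuthorities"]
    return cas[0]["source"]

# Synthetic example document, for illustration only.
master_ign = {
    "ignition": {
        "security": {
            "tls": {
                "certificateAuthorities": [
                    {"source": "data:text/plain;charset=utf-8;base64,ABCxYz=="}
                ]
            }
        },
        "version": "3.1.0",
    }
}
print(certificate_authority_source(master_ign))
```

In practice you would load the real file with json.load(open("master.ign")) from your installation directory before calling the helper.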

2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3

1. <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.

2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>
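The Outputs section of the describe-stacks response is where later templates get their input values. If you are scripting this, flattening the outputs into a dict saves repeated lookups. A sketch against a canned response shaped like the describe-stacks output (the IP addresses are placeholders):

```python
def stack_outputs(describe_stacks_response: dict) -> dict:
    # Flatten the Outputs of the first stack in a DescribeStacks
    # response into {OutputKey: OutputValue}.
    stack = describe_stacks_response["Stacks"][0]
    return {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

# Canned response shaped like `aws cloudformation describe-stacks` output;
# values are placeholders, not from a real cluster.
response = {
    "Stacks": [{
        "StackStatus": "CREATE_COMPLETE",
        "Outputs": [
            {"OutputKey": "PrivateIPs",
             "OutputValue": "10.0.1.10,10.0.2.10,10.0.3.10"},
        ],
    }]
}
print(stack_outputs(response)["PrivateIPs"])
```

The same helper works for the bootstrap and worker stacks, whose outputs feed the parameter files of the templates that follow.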


1.7.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.27. CloudFormation template for control plane machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]]]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value: !Join [",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]]

Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]

1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.

4. Specify an AWS::EC2::Image::Id value.

5. A subnet, preferably private, to launch the worker nodes on.

6. Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.

7. The worker security group ID to associate with worker nodes.

8. Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

9. The location to fetch the worker Ignition config file from.

10. Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.

11. Base64 encoded certificate authority string to use.

12. Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.

13. The IAM profile to associate with worker nodes.

14. Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.

15. The type of AWS instance to use for the worker nodes.

16. Allowed values:

m4.large
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
m5.large
m5.xlarge
m5.2xlarge
m5.4xlarge
m5.8xlarge
m5.10xlarge
m5.16xlarge
c4.large
c4.xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.large
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.
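Because every worker stack reuses the same template and parameter files and differs only by stack name, the per-worker launch commands can be generated in a loop. A minimal sketch; the file names worker.yaml and worker-params.json are hypothetical placeholders for the files you save in the steps below:

```python
def worker_stack_names(count: int, prefix: str = "cluster-worker") -> list:
    # Each worker node needs its own CloudFormation stack; only the
    # stack name changes between invocations of create-stack.
    return [f"{prefix}-{i}" for i in range(1, count + 1)]

for name in worker_stack_names(3):
    # Hypothetical file names; reuse the same template and parameter files.
    print(f"aws cloudformation create-stack --stack-name {name} "
          f"--template-body file://worker.yaml "
          f"--parameters file://worker-params.json")
```

A cluster requires at least two worker machines, so at minimum you would generate and run two such commands.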

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the networking objects and load balancers that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

IMPORTANT

You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3

1. <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.

2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

NOTE

The CloudFormation template creates a stack that represents one worker node.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.

1.7.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.28. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities,
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp


Additional resources

You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.

1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

You created the worker nodes.

Procedure

1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

To view different installation details, specify warn, debug, or error instead of info.

Example output

$ openshift-install wait-for bootstrap-complete --dir=<installation_directory> \
    --log-level=info

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s

CHAPTER 1 INSTALLING ON AWS

235

If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

You can view details about the running instances that are created by using the AWS EC2 console.

1.7.16. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

IMPORTANT

If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.6. Download and install the new version of oc.

1.7.16.1. Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.

4. Unpack the archive:

5. Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

After you install the CLI, it is available using the oc command:

$ tar xvzf <file>

$ echo $PATH


1.7.16.2. Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.

4. Unzip the archive with a ZIP program.

5. Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

After you install the CLI, it is available using the oc command:

1.7.16.3. Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.

2. Select your infrastructure provider, and, if applicable, your installation type.

3. In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.

4. Unpack and unzip the archive.

5. Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

After you install the CLI, it is available using the oc command:

1.7.17. Logging in to the cluster by using the CLI

$ oc <command>

C:\> path

C:\> oc <command>

$ echo $PATH

$ oc <command>


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

Example output

1.7.18. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

Example output

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

$ oc whoami

system:admin

$ oc get nodes

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

Example output

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
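A minimal sketch of one such helper, assuming the four-column NAME/AGE/REQUESTOR/CONDITION layout that oc get csr prints in this section; the function only selects names, and the commented pipeline shows how it could feed oc adm certificate approve in a watch loop.

```shell
# Sketch: pick out the names of CSRs that are still Pending from
# "oc get csr --no-headers" style lines. Assumes the four-column layout
# (NAME AGE REQUESTOR CONDITION) shown in the examples in this section.
pending_csrs() {
    awk '$4 == "Pending" { print $1 }'
}

# Against a live cluster (run manually; requires oc and cluster-admin),
# one possible loop body would be:
#   oc get csr --no-headers | pending_csrs | xargs --no-run-if-empty oc adm certificate approve
```

A production method would additionally verify the requestor identity before approving, as the note requires.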

To approve them individually, run the following command for each valid CSR:

$ oc get csr

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

$ oc adm certificate approve <csr_name>


<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

Example output

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

Example output

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

$ oc get csr

NAME        AGE     REQUESTOR                                               CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending

$ oc adm certificate approve <csr_name>

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

$ oc get nodes

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0


NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.7.19. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1. Watch the cluster components come online:

Example output

master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

$ watch -n5 oc get clusteroperators

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.0     True        False         False      3h56m
cloud-credential                           4.6.0     True        False         False      29h
cluster-autoscaler                         4.6.0     True        False         False      29h
config-operator                            4.6.0     True        False         False      6h39m
console                                    4.6.0     True        False         False      3h59m
csi-snapshot-controller                    4.6.0     True        False         False      4h12m
dns                                        4.6.0     True        False         False      4h15m
etcd                                       4.6.0     True        False         False      29h
image-registry                             4.6.0     True        False         False      3h59m
ingress                                    4.6.0     True        False         False      4h30m
insights                                   4.6.0     True        False         False      29h
kube-apiserver                             4.6.0     True        False         False      29h
kube-controller-manager                    4.6.0     True        False         False      29h
kube-scheduler                             4.6.0     True        False         False      29h
kube-storage-version-migrator              4.6.0     True        False         False      4h2m
machine-api                                4.6.0     True        False         False      29h
machine-approver                           4.6.0     True        False         False      6h34m
machine-config                             4.6.0     True        False         False      3h56m
marketplace                                4.6.0     True        False         False      4h2m
monitoring                                 4.6.0     True        False         False      6h31m
network                                    4.6.0     True        False         False      29h
node-tuning                                4.6.0     True        False         False      4h30m


2. Configure the Operators that are not available.

1.7.19.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.

1.7.19.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure.

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.

2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:
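Step 1 above can be expressed as an S3 lifecycle rule. The JSON below is a sketch of such a rule (the rule ID is an arbitrary assumption), and the AWS CLI call is printed rather than executed; substitute your real bucket name before running it.

```shell
# Sketch of step 1: a lifecycle rule that aborts incomplete multipart
# uploads after one day. The rule ID is an arbitrary example.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }
  ]
}
EOF

# Printed, not executed; run it manually with your bucket name:
echo aws s3api put-bucket-lifecycle-configuration \
    --bucket '<bucket-name>' --lifecycle-configuration file://lifecycle.json
```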

openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m

$ oc edit configs.imageregistry.operator.openshift.io/cluster


Example configuration

WARNING

To secure your registry images in AWS, block public access to the S3 bucket.
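One way to do that is the S3 public access block, sketched below; the command is printed rather than executed, and <bucket-name> stays a placeholder for your registry bucket.

```shell
# Sketch: block all public access on the registry bucket. Printed, not
# executed; replace <bucket-name> before running the command manually.
cmd="aws s3api put-public-access-block --bucket <bucket-name> \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
echo "$cmd"
```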

1.7.19.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters.

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again.

1.7.20. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster.

storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found


Procedure

1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

Delete the stack by using the AWS CLI:

<name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console.

1.7.21. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc).

You installed the jq package.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1. Determine the routes to create.

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output

$ aws cloudformation delete-stack --stack-name <name>

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>


2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

Example output

3. Locate the hosted zone ID for the load balancer:

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID.

4. Obtain the public hosted zone ID for your cluster's domain:

For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.
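The jq selection used in step 3 can be exercised locally against canned describe-load-balancers output, which makes the filter easy to verify before touching AWS; the DNS name and zone ID below are made-up sample values.

```shell
# Sketch: the step 3 jq filter run over canned JSON (sample values only),
# so the selection logic can be checked without AWS credentials.
lb_zone_id() {
    # $1 = load balancer DNS name; describe-load-balancers JSON on stdin.
    jq -r --arg dns "$1" \
        '.LoadBalancerDescriptions[] | select(.DNSName == $dns).CanonicalHostedZoneNameID'
}

sample='{"LoadBalancerDescriptions":[{"DNSName":"lb.example.com","CanonicalHostedZoneNameID":"ZEXAMPLE123"}]}'
printf '%s' "$sample" | lb_zone_id lb.example.com
```

Against a live account, the same function works with aws elb describe-load-balancers piped into it.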

5. Add the alias records to your private zone:

$ oc -n openshift-ingress get service router-default

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                         PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab328.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name \
            --dns-name "<domain_name>" \
            --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
            --output text

/hostedzone/Z3URY6TWQ91KVV

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",


For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

For <public_hosted_zone_id>, specify the public hosted zone for your domain.

For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
>   }'

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>",
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>",
>           "DNSName": "<external_ip>",
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
>   }'


For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

1.7.22. Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI.

Procedure

From the directory that contains the installation program, complete the cluster installation:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

1.7.23. Logging in to the cluster by using the web console

$ openshift-install --dir=<installation_directory> wait-for install-complete

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.7.24. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.7.25. Next steps

Validating an installation

Customize your cluster

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None


If necessary, you can opt out of remote health reporting.

1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT

In OpenShift Container Platform version 4.6, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content.

IMPORTANT

While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires Internet access to use the AWS APIs.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure, or use the information that they contain to create AWS objects according to your company's policies.

IMPORTANT

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

1.8.1. Prerequisites

You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform.

IMPORTANT

Because the installation media is on the mirror host, you can use that computer to complete all installation steps.

You reviewed details about the OpenShift Container Platform installation and update processes.

You configured an AWS account to host the cluster.

IMPORTANT

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.


You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.

If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.

NOTE

Be sure to also review this site list if you are configuring a proxy.

If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.

1.8.2. About installations in restricted networks

In OpenShift Container Platform 4.6, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' IAM service, require Internet access, so you might still require Internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions.

IMPORTANT

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

1.8.2.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

The ClusterVersion status includes an Unable to retrieve available updates error.

By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

1.8.3. Internet and Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).


Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct either maintainedautomatically by Telemetry or manually using OCM use subscription watch to track your OpenShiftContainer Platform subscriptions at the account or multi-cluster level

You must have Internet access to

Access the Red Hat OpenShift Cluster Manager page to download the installation program andperform subscription management If the cluster has Internet access and you do not disableTelemetry that service automatically entitles your cluster

Access Quayio to obtain the packages that are required to install your cluster

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

1.8.4. Required AWS infrastructure components

To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.

For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.

By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:

An AWS Virtual Private Cloud (VPC)

Networking and load balancing components

Security groups and roles

An OpenShift Container Platform bootstrap node

OpenShift Container Platform control plane nodes

An OpenShift Container Platform compute node

Alternatively, you can manually create the components, or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.

1.8.4.1. Cluster machines

CHAPTER 1. INSTALLING ON AWS


You need AWS::EC2::Instance objects for the following machines:

A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.

Three control plane machines. The control plane machines are not governed by a machine set.

Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.

You can use the following instance types for the cluster machines with the provided CloudFormation templates.

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.

Table 1.19. Instance types for machines

Instance type   Bootstrap   Control plane   Compute

i3.large        x
m4.large                                    x
m4.xlarge                   x               x
m4.2xlarge                  x               x
m4.4xlarge                  x               x
m4.8xlarge                  x               x
m4.10xlarge                 x               x
m4.16xlarge                 x               x
m5.large                                    x
m5.xlarge                   x               x
m5.2xlarge                  x               x
m5.4xlarge                  x               x
m5.8xlarge                  x               x
m5.10xlarge                 x               x
m5.16xlarge                 x               x
c4.large                                    x
c4.xlarge                                   x
c4.2xlarge                  x               x
c4.4xlarge                  x               x
c4.8xlarge                  x               x
r4.large                                    x
r4.xlarge                   x               x
r4.2xlarge                  x               x
r4.4xlarge                  x               x
r4.8xlarge                  x               x
r4.16xlarge                 x               x

You might be able to use other instance types that meet the specifications of these instance types.
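The table reduces to per-role resource minimums. A rough sketch, where the vCPU and memory figures are taken from the documented machine minimums (4 vCPU and 16 GiB for control plane, 2 vCPU and 8 GiB for compute) and the instance shapes are a small illustrative subset:

```python
# Per-role minimums behind the table above: (vCPU, memory in GiB).
MINIMUMS = {"control_plane": (4, 16), "compute": (2, 8)}

# A few illustrative instance shapes; extend as needed.
INSTANCE_SPECS = {
    "m4.large": (2, 8),
    "m4.xlarge": (4, 16),
    "m5.4xlarge": (16, 64),
}

def meets(role, instance_type):
    """True if the instance type satisfies the role's minimum vCPU and memory."""
    vcpu, mem = INSTANCE_SPECS[instance_type]
    min_vcpu, min_mem = MINIMUMS[role]
    return vcpu >= min_vcpu and mem >= min_mem
```

This mirrors why m4.large appears only in the Compute column: two vCPUs and 8 GiB are enough for a worker but not for a control plane machine.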

1.8.4.2. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
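One way to approach that manual verification step is to treat it as a filter over pending CSRs: approve a serving CSR only when the requesting node name is one you recognize. The sketch below models simplified CSR records whose field names loosely follow `oc get csr -o json`; the sample data and helper are hypothetical:

```python
# Hypothetical, simplified CSR records; only the fields used below are modeled.
pending_csrs = [
    {"name": "csr-abc12", "signerName": "kubernetes.io/kubelet-serving",
     "username": "system:node:ip-10-0-1-23.ec2.internal", "approved": False},
    {"name": "csr-def34", "signerName": "kubernetes.io/kube-apiserver-client-kubelet",
     "username": "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper",
     "approved": False},
]

def serving_csrs_to_review(csrs, known_nodes):
    """Select unapproved kubelet serving CSRs whose requesting node is one you
    recognize -- the manual validity check this section asks for."""
    return [c["name"] for c in csrs
            if not c["approved"]
            and c["signerName"] == "kubernetes.io/kubelet-serving"
            and c["username"].replace("system:node:", "", 1) in known_nodes]
```

You would then approve each selected request, for example with oc adm certificate approve.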

1.8.4.3. Other infrastructure components

A VPC

DNS entries

Load balancers (classic or network) and listeners

A public and a private Route 53 zone

Security groups


IAM roles

S3 buckets

If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:

ec2.<region>.amazonaws.com

elasticloadbalancing.<region>.amazonaws.com

s3.<region>.amazonaws.com
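The naming pattern above can be expressed as a one-liner; a minimal sketch that only builds the strings and makes no AWS calls:

```python
# Build the three VPC endpoint service names listed above for a given region.
def required_endpoint_names(region):
    return [f"{service}.{region}.amazonaws.com"
            for service in ("ec2", "elasticloadbalancing", "s3")]
```

For example, required_endpoint_names("eu-west-1") yields the EC2, ELB, and S3 endpoint names for that region.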

Required VPC components

You must provide a suitable VPC and subnets that allow communication to your machines.

Component: VPC
AWS type: AWS::EC2::VPC, AWS::EC2::VPCEndpoint
Description: You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

Component: Public subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation
Description: Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

Component: Internet gateway
AWS type: AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP
Description: You must have a public Internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the Internet and are not required for some restricted network or proxy scenarios.

Component: Network access control
AWS type: AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry
Description: You must allow the VPC to access the following ports:

  Port           Reason
  80             Inbound HTTP traffic
  443            Inbound HTTPS traffic
  22             Inbound SSH traffic
  1024 - 65535   Inbound ephemeral traffic
  0 - 65535      Outbound ephemeral traffic

Component: Private subnets
AWS type: AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation
Description: Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

Required DNS and load balancing components

Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.

The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the master nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
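The two required API records can be derived mechanically from the cluster name and base domain; a small illustrative sketch of that naming rule:

```python
# Derive the two API record names this section requires and where each must
# point; purely illustrative string handling, no Route 53 calls.
def api_records(cluster_name, domain):
    return {
        f"api.{cluster_name}.{domain}": "external load balancer",
        f"api-int.{cluster_name}.{domain}": "internal load balancer",
    }

records = api_records("mycluster", "example.com")
```

The example cluster name and domain are placeholders; substitute your own values.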

Component                    AWS type                                    Description

DNS                          AWS::Route53::HostedZone                    The hosted zone for your internal DNS.
etcd record sets             AWS::Route53::RecordSet                     The registration records for etcd for your control plane machines.
Public load balancer         AWS::ElasticLoadBalancingV2::LoadBalancer   The load balancer for your public subnets.
External API server record   AWS::Route53::RecordSetGroup                Alias records for the external API server.
External listener            AWS::ElasticLoadBalancingV2::Listener       A listener on port 6443 for the external load balancer.
External target group        AWS::ElasticLoadBalancingV2::TargetGroup    The target group for the external load balancer.
Private load balancer        AWS::ElasticLoadBalancingV2::LoadBalancer   The load balancer for your private subnets.
Internal API server record   AWS::Route53::RecordSetGroup                Alias records for the internal API server.
Internal listener            AWS::ElasticLoadBalancingV2::Listener       A listener on port 22623 for the internal load balancer.
Internal target group        AWS::ElasticLoadBalancingV2::TargetGroup    The target group for the internal load balancer.
Internal listener            AWS::ElasticLoadBalancingV2::Listener       A listener on port 6443 for the internal load balancer.
Internal target group        AWS::ElasticLoadBalancingV2::TargetGroup    The target group for the internal load balancer.


Security groups

The control plane and worker machines require access to the following ports:

Group                    Type                      IP protocol   Port range

MasterSecurityGroup      AWS::EC2::SecurityGroup   icmp          0
                                                   tcp           22
                                                   tcp           6443
                                                   tcp           22623
WorkerSecurityGroup      AWS::EC2::SecurityGroup   icmp          0
                                                   tcp           22
BootstrapSecurityGroup   AWS::EC2::SecurityGroup   tcp           22
                                                   tcp           19531
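The rules above can be encoded as data for quick lookups when auditing a stack; a sketch, with the data layout and helper name invented for illustration:

```python
# The inbound rules from the table above, encoded as (protocol, port) pairs.
SG_RULES = {
    "MasterSecurityGroup": {("icmp", 0), ("tcp", 22), ("tcp", 6443), ("tcp", 22623)},
    "WorkerSecurityGroup": {("icmp", 0), ("tcp", 22)},
    "BootstrapSecurityGroup": {("tcp", 22), ("tcp", 19531)},
}

def allows(group, proto, port):
    """Answer: does the named security group open this protocol and port?"""
    return (proto, port) in SG_RULES[group]
```

For instance, only the master group opens the Kubernetes API port 6443.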

Control plane Ingress

The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                        Description                                                    IP protocol   Port range

MasterIngressEtcd                    etcd                                                           tcp           2379 - 2380
MasterIngressVxlan                   Vxlan packets                                                  udp           4789
MasterIngressWorkerVxlan             Vxlan packets                                                  udp           4789
MasterIngressInternal                Internal cluster communication and Kubernetes proxy metrics    tcp           9000 - 9999
MasterIngressWorkerInternal          Internal cluster communication                                 tcp           9000 - 9999
MasterIngressKube                    Kubernetes kubelet, scheduler, and controller manager          tcp           10250 - 10259
MasterIngressWorkerKube              Kubernetes kubelet, scheduler, and controller manager          tcp           10250 - 10259
MasterIngressIngressServices         Kubernetes Ingress services                                    tcp           30000 - 32767
MasterIngressWorkerIngressServices   Kubernetes Ingress services                                    tcp           30000 - 32767

Worker Ingress

The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.

Ingress group                        Description                                             IP protocol   Port range

WorkerIngressVxlan                   Vxlan packets                                           udp           4789
WorkerIngressWorkerVxlan             Vxlan packets                                           udp           4789
WorkerIngressInternal                Internal cluster communication                          tcp           9000 - 9999
WorkerIngressWorkerInternal          Internal cluster communication                          tcp           9000 - 9999
WorkerIngressKube                    Kubernetes kubelet, scheduler, and controller manager   tcp           10250
WorkerIngressWorkerKube              Kubernetes kubelet, scheduler, and controller manager   tcp           10250
WorkerIngressIngressServices         Kubernetes Ingress services                             tcp           30000 - 32767
WorkerIngressWorkerIngressServices   Kubernetes Ingress services                             tcp           30000 - 32767

Roles and instance profiles


You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.

Role        Effect   Action                   Resource

Master      Allow    ec2:*                    *
            Allow    elasticloadbalancing:*   *
            Allow    iam:PassRole             *
            Allow    s3:GetObject             *
Worker      Allow    ec2:Describe*            *
Bootstrap   Allow    ec2:Describe*            *
            Allow    ec2:AttachVolume         *
            Allow    ec2:DetachVolume         *
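The Master row of the table corresponds to an IAM policy document along these lines; this is a sketch of the JSON shape, not necessarily the exact document the templates emit:

```python
import json

# Assemble the broad master-role permissions from the table above into an
# IAM policy document.
master_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": action, "Resource": "*"}
        for action in ("ec2:*", "elasticloadbalancing:*", "iam:PassRole", "s3:GetObject")
    ],
}

policy_json = json.dumps(master_policy, indent=2)
```

You could attach such a document to the master role with iam:PutRolePolicy, which is among the installation permissions listed below.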

1.8.4.4. Required AWS permissions

When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:

Example 1.29. Required EC2 permissions for installation

tag:TagResources
tag:UntagResources
ec2:AllocateAddress
ec2:AssociateAddress
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CopyImage
ec2:CreateNetworkInterface
ec2:AttachNetworkInterface
ec2:CreateSecurityGroup
ec2:CreateTags
ec2:CreateVolume
ec2:DeleteSecurityGroup
ec2:DeleteSnapshot
ec2:DeregisterImage
ec2:DescribeAccountAttributes
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeDhcpOptions
ec2:DescribeImages
ec2:DescribeInstanceAttribute
ec2:DescribeInstanceCreditSpecifications
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeNetworkAcls
ec2:DescribeNetworkInterfaces
ec2:DescribePrefixLists
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeTags
ec2:DescribeVolumes
ec2:DescribeVpcAttribute
ec2:DescribeVpcClassicLink
ec2:DescribeVpcClassicLinkDnsSupport
ec2:DescribeVpcEndpoints
ec2:DescribeVpcs
ec2:GetEbsDefaultKmsKeyId
ec2:ModifyInstanceAttribute
ec2:ModifyNetworkInterfaceAttribute
ec2:ReleaseAddress
ec2:RevokeSecurityGroupEgress
ec2:RevokeSecurityGroupIngress
ec2:RunInstances
ec2:TerminateInstances

Example 1.30. Required permissions for creating network resources during installation

ec2:AssociateDhcpOptions
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:CreateDhcpOptions
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSubnet
ec2:CreateVpc
ec2:CreateVpcEndpoint
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute

NOTE

If you use an existing VPC, your account does not require these permissions for creating network resources.

Example 1.31. Required Elastic Load Balancing permissions for installation

elasticloadbalancing:AddTags
elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
elasticloadbalancing:AttachLoadBalancerToSubnets
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:CreateListener
elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:CreateLoadBalancerListeners
elasticloadbalancing:CreateTargetGroup
elasticloadbalancing:DeleteLoadBalancer
elasticloadbalancing:DeregisterInstancesFromLoadBalancer
elasticloadbalancing:DeregisterTargets
elasticloadbalancing:DescribeInstanceHealth
elasticloadbalancing:DescribeListeners
elasticloadbalancing:DescribeLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:DescribeTags
elasticloadbalancing:DescribeTargetGroupAttributes
elasticloadbalancing:DescribeTargetHealth
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:ModifyTargetGroup
elasticloadbalancing:ModifyTargetGroupAttributes
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:RegisterTargets
elasticloadbalancing:SetLoadBalancerPoliciesOfListener

Example 1.32. Required IAM permissions for installation

iam:AddRoleToInstanceProfile
iam:CreateInstanceProfile
iam:CreateRole
iam:DeleteInstanceProfile
iam:DeleteRole
iam:DeleteRolePolicy
iam:GetInstanceProfile
iam:GetRole
iam:GetRolePolicy
iam:GetUser
iam:ListInstanceProfilesForRole
iam:ListRoles
iam:ListUsers
iam:PassRole
iam:PutRolePolicy
iam:RemoveRoleFromInstanceProfile
iam:SimulatePrincipalPolicy
iam:TagRole

Example 1.33. Required Route 53 permissions for installation

route53:ChangeResourceRecordSets
route53:ChangeTagsForResource
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListHostedZonesByName
route53:ListResourceRecordSets
route53:ListTagsForResource
route53:UpdateHostedZoneComment

Example 1.34. Required S3 permissions for installation

s3:CreateBucket
s3:DeleteBucket
s3:GetAccelerateConfiguration
s3:GetBucketAcl
s3:GetBucketCors
s3:GetBucketLocation
s3:GetBucketLogging
s3:GetBucketObjectLockConfiguration
s3:GetBucketReplication
s3:GetBucketRequestPayment
s3:GetBucketTagging
s3:GetBucketVersioning
s3:GetBucketWebsite
s3:GetEncryptionConfiguration
s3:GetLifecycleConfiguration
s3:GetReplicationConfiguration
s3:ListBucket
s3:PutBucketAcl
s3:PutBucketTagging
s3:PutEncryptionConfiguration

Example 1.35. S3 permissions that cluster Operators require

s3:DeleteObject
s3:GetObject
s3:GetObjectAcl
s3:GetObjectTagging
s3:GetObjectVersion
s3:PutObject
s3:PutObjectAcl
s3:PutObjectTagging

Example 1.36. Required permissions to delete base cluster resources

autoscaling:DescribeAutoScalingGroups
ec2:DeleteNetworkInterface
ec2:DeleteVolume
elasticloadbalancing:DeleteTargetGroup
elasticloadbalancing:DescribeTargetGroups
iam:DeleteAccessKey
iam:DeleteUser
iam:ListAttachedRolePolicies
iam:ListInstanceProfiles
iam:ListRolePolicies
iam:ListUserPolicies
s3:DeleteObject
s3:ListBucketVersions
tag:GetResources

Example 1.37. Required permissions to delete network resources

ec2:DeleteDhcpOptions
ec2:DeleteInternetGateway
ec2:DeleteNatGateway
ec2:DeleteRoute
ec2:DeleteRouteTable
ec2:DeleteSubnet
ec2:DeleteVpc
ec2:DeleteVpcEndpoints
ec2:DetachInternetGateway
ec2:DisassociateRouteTable
ec2:ReplaceRouteTableAssociation

NOTE

If you use an existing VPC, your account does not require these permissions to delete network resources.


Example 1.38. Additional IAM and S3 permissions that are required to create manifests

iam:CreateAccessKey
iam:CreateUser
iam:DeleteAccessKey
iam:DeleteUser
iam:DeleteUserPolicy
iam:GetUserPolicy
iam:ListAccessKeys
iam:PutUserPolicy
iam:TagUser
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutLifecycleConfiguration
s3:HeadBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload

Example 1.39. Optional permission for quota checks for installation

servicequotas:ListAWSDefaultServiceQuotas

1.8.5. Generating an SSH private key and adding it to the agent

If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment you require disaster recovery and debugging


You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:

   $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

   1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

   Running this command generates an SSH key that does not require a password in the location that you specified.

2. Start the ssh-agent process as a background task:

   $ eval "$(ssh-agent -s)"

   Example output

   Agent pid 31874

3. Add your SSH private key to the ssh-agent:

   $ ssh-add <path>/<file_name> 1

   Example output

   Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

   1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines.

1.8.6. Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

1.8.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

IMPORTANT

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure

1. Create a directory to hold the OpenShift Container Platform installation files:

   $ mkdir $HOME/clusterconfig

2. Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:

   $ openshift-install create manifests --dir $HOME/clusterconfig
   ? SSH Public Key ...
   $ ls $HOME/clusterconfig/openshift/
   99_kubeadmin-password-secret.yaml
   99_openshift-cluster-api_master-machines-0.yaml
   99_openshift-cluster-api_master-machines-1.yaml
   99_openshift-cluster-api_master-machines-2.yaml
   ...

3. Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml, change the disk device name to the name of the storage device on the worker systems, and set the storage size, as appropriate. This attaches storage to a separate /var directory:

   apiVersion: machineconfiguration.openshift.io/v1
   kind: MachineConfig
   metadata:
     labels:
       machineconfiguration.openshift.io/role: worker
     name: 98-var-partition
   spec:
     config:
       ignition:
         version: 3.1.0
       storage:
         disks:
         - device: /dev/<device_name> 1
           partitions:
           - sizeMiB: <partition_size>
             startMiB: <partition_start_offset> 2
             label: var
         filesystems:
         - path: /var
           device: /dev/disk/by-partlabel/var
           format: xfs
       systemd:
         units:
         - name: var.mount
           enabled: true
           contents: |
             [Unit]
             Before=local-fs.target
             [Mount]
             Where=/var
             What=/dev/disk/by-partlabel/var
             [Install]
             WantedBy=local-fs.target

   1 The storage device name of the disk that you want to partition.

   2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.

4. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:

   $ openshift-install create ignition-configs --dir $HOME/clusterconfig
   $ ls $HOME/clusterconfig/
   auth  bootstrap.ign  master.ign  metadata.json  worker.ign

Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
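The 25000 MiB recommendation in the callout above can be captured as a simple pre-flight check on the values you substitute into 98-var-partition.yaml; a hypothetical helper, not part of the documented procedure:

```python
# Sanity-check the partition offset substituted into 98-var-partition.yaml.
# The 25000 MiB figure is the recommended minimum quoted in this section.
RECOMMENDED_MIN_START_MIB = 25000

def partition_offset_ok(start_mib):
    """False means the resulting root file system may end up too small and a
    future RHCOS reinstall could overwrite the start of the data partition."""
    return start_mib >= RECOMMENDED_MIN_START_MIB
```

Run a check like this before generating the Ignition configs, since the offset is baked into the machine config at that point.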



1.8.6.2. Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites

You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.

You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.

Procedure

1. Create the install-config.yaml file:

a. Change to the directory that contains the installation program and run the following command:

   $ openshift-install create install-config --dir=<installation_directory> 1

   1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

   IMPORTANT

   Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

b. At the prompts, provide the configuration details for your cloud:

   i. Optional: Select an SSH key to use to access your cluster machines.

      NOTE

      For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

   ii. Select aws as the platform to target.

   iii. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

      NOTE

      The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

   iv. Select the AWS region to deploy the cluster to.

   v. Select the base domain for the Route 53 service that you configured for your cluster.

   vi. Enter a descriptive name for your cluster.

   vii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site.

2. Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.

   a. Update the pullSecret value to contain the authentication information for your registry:

      pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'

      For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.

   b. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry:

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
        -----END CERTIFICATE-----

   c. Add the image content resources:

      imageContentSources:
      - mirrors:
        - <local_registry>/<local_repository_name>/release
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - <local_registry>/<local_repository_name>/release
        source: registry.svc.ci.openshift.org/ocp/release

      Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network.


d Optional Set the publishing strategy to Internal

By setting this option you create an internal Ingress Controller and a private load balancer

3 Optional Back up the install-configyaml file

IMPORTANT

The install-configyaml file is consumed during the installation process If youwant to reuse the file you must back it up now

Additional resources

See Configuration and credential file settings in the AWS documentation for more informationabout AWS profile and credential configuration

1863 Configuring the cluster-wide proxy during installation

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPSproxy available You can configure a new OpenShift Container Platform cluster to use a proxy byconfiguring the proxy settings in the install-configyaml file

Prerequisites

You have an existing install-configyaml file

You reviewed the sites that your cluster requires access to and determined whether any ofthem need to bypass the proxy By default all cluster egress traffic is proxied including calls tohosting cloud provider APIs You added sites to the Proxy objectrsquos specnoProxy field tobypass the proxy if necessary

NOTE

The Proxy object statusnoProxy field is populated with the values of the networkingmachineNetwork[]cidr networkingclusterNetwork[]cidr and networkingserviceNetwork[] fields from your installation configuration

For installations on Amazon Web Services (AWS) Google Cloud Platform (GCP)Microsoft Azure and Red Hat OpenStack Platform (RHOSP) the Proxy object statusnoProxy field is also populated with the instance metadata endpoint(169254169254)

Procedure

1 Edit your install-configyaml file and add the proxy settings For example

publish Internal

apiVersion v1baseDomain mydomaincomproxy httpProxy httpltusernamegtltpswdgtltipgtltportgt 1 httpsProxy httpltusernamegtltpswdgtltipgtltportgt 2 noProxy examplecom 3

OpenShift Container Platform 46 Installing on AWS

272

1

2

3

4

A proxy URL to use for creating HTTP connections outside the cluster The URL schememust be http If you use an MITM transparent proxy network that does not requireadditional proxy configuration but requires additional CAs you must not specify an httpProxy value

A proxy URL to use for creating HTTPS connections outside the cluster If this field is notspecified then httpProxy is used for both HTTP and HTTPS connections If you use anMITM transparent proxy network that does not require additional proxy configuration butrequires additional CAs you must not specify an httpsProxy value

A comma-separated list of destination domain names domains IP addresses or othernetwork CIDRs to exclude proxying Preface a domain with to match subdomains only Forexample ycom matches xycom but not ycom Use to bypass proxy for alldestinations

If provided the installation program generates a config map that is named user-ca-bundlein the openshift-config namespace that contains one or more additional CA certificatesthat are required for proxying HTTPS connections The Cluster Network Operator thencreates a trusted-ca-bundle config map that merges these contents with the Red HatEnterprise Linux CoreOS (RHCOS) trust bundle and this config map is referenced in the Proxy objectrsquos trustedCA field The additionalTrustBundle field is required unless theproxyrsquos identity certificate is signed by an authority from the RHCOS trust bundle If youuse an MITM transparent proxy network that does not require additional proxyconfiguration but requires additional CAs you must provide the MITM CA certificate

NOTE

The installation program does not support the proxy readinessEndpoints field

2 Save the file and reference it when installing OpenShift Container Platform

The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settingsin the provided install-configyaml file If no proxy settings are provided a cluster Proxy object is stillcreated but it will have a nil spec

NOTE

Only the Proxy object named cluster is supported and no additional proxies can becreated

1864 Creating the Kubernetes manifest and Ignition config files

Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster.

additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----

CHAPTER 1 INSTALLING ON AWS


IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:

Example output

For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines:

By removing these files, you prevent the cluster from automatically generating control plane machines.

3. Remove the Kubernetes manifest files that define the worker machines:

Because you create and manage the worker machines yourself, you do not need to initialize these machines.

4. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

$ openshift-install create manifests --dir=<installation_directory> 1

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

OpenShift Container Platform 4.6 Installing on AWS


b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.
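For reference, a minimal sketch of what the modified cluster-scheduler-02-config.yml manifest typically looks like after this change; fields other than mastersSchedulable are illustrative and may differ in your generated file:

```yaml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}
```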

5. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

Remove this section completely

If you do so, you must add ingress DNS records manually in a later step.

6. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

1.8.7. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites


apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

$ openshift-install create ignition-configs --dir=<installation_directory> 1


You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

You generated the Ignition config files for your cluster

You installed the jq package

Procedure

To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

The output of this command is your cluster name and a random string
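If jq is not available on your host, the same value can be read with a short Python sketch; metadata.json and its infraID key are as described above, and the path is whatever your own <installation_directory> resolves to:

```python
import json

def infra_id(metadata_path: str) -> str:
    """Read the infrastructure name from the installer's metadata.json file."""
    with open(metadata_path) as f:
        return json.load(f)["infraID"]

# A metadata.json containing {"infraID": "openshift-vw9j6"}
# yields "openshift-vw9j6".
```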

1.8.8. Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

$ jq -r .infraID <installation_directory>/metadata.json 1

openshift-vw9j6 1


The CIDR block for the VPC.

Specify a CIDR block in the format x.x.x.x/16-24.

The number of availability zones to deploy the VPC in.

Specify an integer between 1 and 3.

The size of each subnet in each availability zone.

Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
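The relationship between SubnetBits and the resulting prefix length can be checked with a quick calculation. This is illustrative only; CloudFormation's Fn::Cidr performs the real subnet carving. A subnet with N host bits has prefix length 32 - N:

```python
def subnet_prefix(subnet_bits: int) -> int:
    """Prefix length of a subnet carved with the given number of host bits."""
    if not 5 <= subnet_bits <= 13:
        raise ValueError("SubnetBits must be between 5 and 13")
    return 32 - subnet_bits

# 5 host bits -> /27 (32 addresses); 13 host bits -> /19 (8192 addresses)
```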

2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": "1" 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": "12" 6
  }
]

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3


4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

VpcId

The ID of your VPC.

PublicSubnetIds

The IDs of the new public subnets.

PrivateSubnetIds

The IDs of the new private subnets.
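The Outputs section of the describe-stacks response can be turned into a dictionary for reuse with the later templates. A sketch, assuming the JSON shape returned by aws cloudformation describe-stacks; the sample values are placeholders:

```python
import json

def stack_outputs(describe_stacks_json: str) -> dict:
    """Map OutputKey -> OutputValue for the first stack in the response."""
    stacks = json.loads(describe_stacks_json)["Stacks"]
    return {o["OutputKey"]: o["OutputValue"] for o in stacks[0].get("Outputs", [])}

# Example response fragment (placeholder values):
sample = '''{"Stacks": [{"StackStatus": "CREATE_COMPLETE",
  "Outputs": [{"OutputKey": "VpcId", "OutputValue": "vpc-0abc"},
              {"OutputKey": "PublicSubnetIds", "OutputValue": "subnet-1"},
              {"OutputKey": "PrivateSubnetIds", "OutputValue": "subnet-2"}]}]}'''
```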

1.8.8.1. CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

Example 1.40. CloudFormation template for the VPC

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs

Parameters:
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  AvailabilityZoneCount:
    ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
    MinValue: 1
    MaxValue: 3
    Default: 1
    Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
    Type: Number
  SubnetBits:
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
    MinValue: 5
    MaxValue: 13
    Default: 12
    Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
    Type: Number

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcCidr
      - SubnetBits
    - Label:
        default: "Availability Zones"
      Parameters:
      - AvailabilityZoneCount
    ParameterLabels:
      AvailabilityZoneCount:
        default: "Availability Zone Count"
      VpcCidr:
        default: "VPC CIDR"
      SubnetBits:
        default: "Bits Per Subnet"

Conditions:
  DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
  DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]

Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      EnableDnsSupport: "true"
      EnableDnsHostnames: "true"
      CidrBlock: !Ref VpcCidr
  PublicSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PublicSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
  GatewayToInternet:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: "AWS::EC2::Route"
    DependsOn: GatewayToInternet
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable
  PublicSubnetRouteTableAssociation3:
    Condition: DoAz3
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PublicSubnet3
      RouteTableId: !Ref PublicRouteTable
  PrivateSubnet:
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 0
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref PrivateSubnet
      RouteTableId: !Ref PrivateRouteTable
  NAT:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Properties:
      AllocationId: !GetAtt
      - EIP
      - AllocationId
      SubnetId: !Ref PublicSubnet
  EIP:
    Type: "AWS::EC2::EIP"
    Properties:
      Domain: vpc
  Route:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 1
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable2:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz2
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation2:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz2
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable2
  NAT2:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz2
    Properties:
      AllocationId: !GetAtt
      - EIP2
      - AllocationId
      SubnetId: !Ref PublicSubnet2
  EIP2:
    Type: "AWS::EC2::EIP"
    Condition: DoAz2
    Properties:
      Domain: vpc
  Route2:
    Type: "AWS::EC2::Route"
    Condition: DoAz2
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT2
  PrivateSubnet3:
    Type: "AWS::EC2::Subnet"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
      AvailabilityZone: !Select
      - 2
      - !GetAZs
        Ref: "AWS::Region"
  PrivateRouteTable3:
    Type: "AWS::EC2::RouteTable"
    Condition: DoAz3
    Properties:
      VpcId: !Ref VPC
  PrivateSubnetRouteTableAssociation3:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Condition: DoAz3
    Properties:
      SubnetId: !Ref PrivateSubnet3
      RouteTableId: !Ref PrivateRouteTable3
  NAT3:
    DependsOn:
    - GatewayToInternet
    Type: "AWS::EC2::NatGateway"
    Condition: DoAz3
    Properties:
      AllocationId: !GetAtt
      - EIP3
      - AllocationId
      SubnetId: !Ref PublicSubnet3
  EIP3:
    Type: "AWS::EC2::EIP"
    Condition: DoAz3
    Properties:
      Domain: vpc
  Route3:
    Type: "AWS::EC2::Route"
    Condition: DoAz3
    Properties:
      RouteTableId: !Ref PrivateRouteTable3
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NAT3
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: '*'
          Action:
          - '*'
          Resource:
          - '*'
      RouteTableIds:
      - !Ref PublicRouteTable
      - !Ref PrivateRouteTable
      - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
      - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
      ServiceName: !Join
      - ""
      - - com.amazonaws.
        - !Ref "AWS::Region"
        - .s3
      VpcId: !Ref VPC

Outputs:
  VpcId:
    Description: ID of the new VPC.
    Value: !Ref VPC
  PublicSubnetIds:
    Description: Subnet IDs of the public subnets.
    Value:
      !Join [",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]]
  PrivateSubnetIds:
    Description: Subnet IDs of the private subnets.
    Value:
      !Join [",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]]

1.8.9. Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).


NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

Procedure

1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
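The hosted zone ID can also be pulled out of the CLI response programmatically. A sketch, assuming the Id field has the form /hostedzone/<ID>, as in the output above:

```python
def zone_id(hosted_zone_id_field: str) -> str:
    """Strip the '/hostedzone/' prefix from a Route 53 Id field."""
    return hosted_zone_id_field.rsplit("/", 1)[-1]

# "/hostedzone/Z21IXYZABCZ2A4" -> "Z21IXYZABCZ2A4"
```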

2. Create a JSON file that contains the parameter values that the template requires:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1

mycluster.example.com.  False  100
HOSTEDZONES  65F8F38E-2268-B835-E15C-AB55336FCBFA  /hostedzone/Z21IXYZABCZ2A4  mycluster.example.com.  10

[
  {
    "ParameterKey": "ClusterName", 1
    "ParameterValue": "mycluster" 2
  },
  {
    "ParameterKey": "InfrastructureName", 3
    "ParameterValue": "mycluster-<random_string>" 4
  },
  {
    "ParameterKey": "HostedZoneId", 5
    "ParameterValue": "<random_string>" 6
  },
  {
    "ParameterKey": "HostedZoneName", 7


A short, representative cluster name to use for host names, etc.

Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.

The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

The Route 53 public zone ID to register the targets with.

Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.

The Route 53 zone to register the targets with.

Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.

The public subnets that you created for your VPC.

Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

The private subnets that you created for your VPC.

Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

The VPC that you created for the cluster.

Specify the VpcId value from the output of the CloudFormation template for the VPC.

3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

    "ParameterValue": "example.com" 8
  },
  {
    "ParameterKey": "PublicSubnets", 9
    "ParameterValue": "subnet-<random_string>" 10
  },
  {
    "ParameterKey": "PrivateSubnets", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "VpcId", 13
    "ParameterValue": "vpc-<random_string>" 14
  }
]


IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

IMPORTANT

You must enter the command on a single line

<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.

<template> is the relative path to and name of the CloudFormation template YAML file that you saved.

<parameters> is the relative path to and name of the CloudFormation parameters JSON file.

You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

5. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

PrivateHostedZoneId

Hosted zone ID for the private DNS

ExternalApiLoadBalancerName

Full name of the external API load balancer

$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json \ 3
     --capabilities CAPABILITY_NAMED_IAM 4

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183

$ aws cloudformation describe-stacks --stack-name <name>


InternalApiLoadBalancerName

Full name of the internal API load balancer

ApiServerDnsName

Full host name of the API server

RegisterNlbIpTargetsLambda

Lambda ARN useful to help register/deregister IP targets for these load balancers.

ExternalApiTargetGroupArn

ARN of external API target group

InternalApiTargetGroupArn

ARN of internal API target group

InternalServiceTargetGroupArn

ARN of internal service target group

1.8.9.1. CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

Example 1.41. CloudFormation template for the network and load balancers

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)

Parameters:
  ClusterName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, representative cluster name to use for host names and other identifying names.
    Type: String
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  HostedZoneId:
    Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
    Type: String
  HostedZoneName:
    Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
    Type: String
    Default: "example.com"
  PublicSubnets:
    Description: The internet-facing subnets.
    Type: List<AWS::EC2::Subnet::Id>
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - ClusterName
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - PublicSubnets
      - PrivateSubnets
    - Label:
        default: "DNS"
      Parameters:
      - HostedZoneName
      - HostedZoneId
    ParameterLabels:
      ClusterName:
        default: "Cluster Name"
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      PublicSubnets:
        default: "Public Subnets"
      PrivateSubnets:
        default: "Private Subnets"
      HostedZoneName:
        default: "Public Hosted Zone Name"
      HostedZoneId:
        default: "Public Hosted Zone ID"

Resources:
  ExtApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
      IpAddressType: ipv4
      Subnets: !Ref PublicSubnets
      Type: network

  IntApiElb:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: !Join ["-", [!Ref InfrastructureName, "int"]]
      Scheme: internal
      IpAddressType: ipv4
      Subnets: !Ref PrivateSubnets
      Type: network

  IntDns:
    Type: "AWS::Route53::HostedZone"
    Properties:
      HostedZoneConfig:
        Comment: "Managed by CloudFormation"
      Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
      HostedZoneTags:
      - Key: Name
        Value: !Join ["-", [!Ref InfrastructureName, "int"]]
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "owned"
      VPCs:
      - VPCId: !Ref VpcId
        VPCRegion: !Ref "AWS::Region"

  ExternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref HostedZoneId
      RecordSets:
      - Name:
          !Join [".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]]]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt ExtApiElb.DNSName

  InternalApiServerRecord:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      Comment: Alias record for the API server
      HostedZoneId: !Ref IntDns
      RecordSets:
      - Name:
          !Join [".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]]]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName
      - Name:
          !Join [".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]]]
        Type: A
        AliasTarget:
          HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
          DNSName: !GetAtt IntApiElb.DNSName

  ExternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref ExternalApiTargetGroup
      LoadBalancerArn: !Ref ExtApiElb
      Port: 6443
      Protocol: TCP

  ExternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalApiTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 6443
      Protocol: TCP

  InternalApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/readyz"
      HealthCheckPort: 6443
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 6443
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  InternalServiceInternalListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref InternalServiceTargetGroup
      LoadBalancerArn: !Ref IntApiElb
      Port: 22623
      Protocol: TCP

  InternalServiceTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 10
      HealthCheckPath: "/healthz"
      HealthCheckPort: 22623
      HealthCheckProtocol: HTTPS
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 2
      Port: 22623
      Protocol: TCP
      TargetType: ip
      VpcId: !Ref VpcId
      TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: 60

  RegisterTargetLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets"
              ]
            Resource: !Ref InternalApiTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets"
              ]
            Resource: !Ref InternalServiceTargetGroup
          - Effect: "Allow"
            Action:
              [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets"
              ]
            Resource: !Ref ExternalApiTargetGroup

  RegisterNlbIpTargets:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - RegisterTargetLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            elb = boto3.client('elbv2')
            if event['RequestType'] == 'Delete':
              elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            elif event['RequestType'] == 'Create':
              elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
      Runtime: "python3.7"
      Timeout: 120

  RegisterSubnetTagsLambdaIamRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "lambda.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
              [
                "ec2:DeleteTags",
                "ec2:CreateTags"
              ]
            Resource: "arn:aws:ec2:*:*:subnet/*"
          - Effect: "Allow"
            Action:
              [
                "ec2:DescribeSubnets",
                "ec2:DescribeTags"
              ]
            Resource: "*"

  RegisterSubnetTags:
    Type: "AWS::Lambda::Function"
    Properties:
      Handler: "index.handler"
      Role: !GetAtt
      - RegisterSubnetTagsLambdaIamRole
      - Arn
      Code:
        ZipFile: |
          import json
          import boto3
          import cfnresponse
          def handler(event, context):
            ec2_client = boto3.client('ec2')
            if event['RequestType'] == 'Delete':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}])
            elif event['RequestType'] == 'Create':
              for subnet_id in event['ResourceProperties']['Subnets']:
                ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}])
            responseData = {}
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
      Runtime: "python3.7"
      Timeout: 120

  RegisterPublicSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PublicSubnets

  RegisterPrivateSubnetTags:
    Type: Custom::SubnetRegister
    Properties:
      ServiceToken: !GetAtt RegisterSubnetTags.Arn
      InfrastructureName: !Ref InfrastructureName
      Subnets: !Ref PrivateSubnets

Outputs:
  PrivateHostedZoneId:
    Description: Hosted zone ID for the private DNS, which is required for private records.
    Value: !Ref IntDns
  ExternalApiLoadBalancerName:
    Description: Full name of the external API load balancer.
    Value: !GetAtt ExtApiElb.LoadBalancerFullName
  InternalApiLoadBalancerName:
    Description: Full name of the internal API load balancer.
    Value: !GetAtt IntApiElb.LoadBalancerFullName
  ApiServerDnsName:
    Description: Full hostname of the API server, which is required for the Ignition config files.
    Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
  RegisterNlbIpTargetsLambda:
    Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
    Value: !GetAtt RegisterNlbIpTargets.Arn
  ExternalApiTargetGroupArn:
    Description: ARN of the external API target group.
    Value: !Ref ExternalApiTargetGroup
  InternalApiTargetGroupArn:
    Description: ARN of the internal API target group.
    Value: !Ref InternalApiTargetGroup
  InternalServiceTargetGroupArn:
    Description: ARN of the internal service target group.
    Value: !Ref InternalServiceTargetGroup


IMPORTANT

If you are deploying your cluster to an AWS government region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Additional resources

See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones.

1.8.10. Creating security group and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

NOTE

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },


1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 The CIDR block for the VPC.

4 Specify the CIDR block parameter that you used for the VPC that you defined, in the form x.x.x.x/16-24.

5 The private subnets that you created for your VPC.

6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.

7 The VPC that you created for the cluster.

8 Specify the VpcId value from the output of the CloudFormation template for the VPC.
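If you no longer have the infrastructure name at hand, it can be read back from the installer's metadata. This is a minimal sketch, assuming the metadata.json layout that openshift-install writes to the installation directory (the infraID field holds the <cluster-name>-<random-string> value); the sample file below is a stand-in for illustration:

```shell
# Hedged sketch: read the infrastructure name from the installer's
# metadata.json. A stand-in file is generated here; in practice you would
# point at <installation_directory>/metadata.json.
cat > /tmp/metadata.json <<'EOF'
{"clusterName":"mycluster","infraID":"mycluster-vw9j6"}
EOF
infra_name=$(python3 -c 'import json; print(json.load(open("/tmp/metadata.json"))["infraID"])')
echo "$infra_name"
```

The extracted value can then be pasted into the InfrastructureName parameter of each stack you create.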

2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

IMPORTANT

You must enter the command on a single line.

1 <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4



3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

4. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

MasterSecurityGroupId

Master Security Group ID

WorkerSecurityGroupId

Worker Security Group ID

MasterInstanceProfile

Master IAM Instance Profile

WorkerInstanceProfile

Worker IAM Instance Profile
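When scripting the later stacks, you can lift these values straight out of the describe-stacks response instead of copying them by hand. A hedged sketch follows; the stack JSON here is a trimmed, fabricated sample of the response shape, and with a live stack you would feed it the output of `aws cloudformation describe-stacks --stack-name <name>` instead:

```shell
# Sketch: pull a stack output value out of describe-stacks JSON with
# python3 (avoids a jq dependency). The sample response is made up.
cat > /tmp/sec-stack.json <<'EOF'
{"Stacks":[{"StackStatus":"CREATE_COMPLETE","Outputs":[
 {"OutputKey":"MasterSecurityGroupId","OutputValue":"sg-0aaa1111bbb2222cc"},
 {"OutputKey":"WorkerSecurityGroupId","OutputValue":"sg-0ddd3333eee4444ff"}]}]}
EOF
master_sg=$(python3 -c '
import json
stack = json.load(open("/tmp/sec-stack.json"))["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["MasterSecurityGroupId"])')
echo "$master_sg"
```

The same pattern works for any of the output keys listed above, such as WorkerSecurityGroupId or MasterInstanceProfile.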

1.8.10.1. CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

Example 1.42. CloudFormation template for security objects

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  VpcCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
    Default: 10.0.0.0/16
    Description: CIDR block for VPC.
    Type: String
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  PrivateSubnets:
    Description: The internal subnets.
    Type: List<AWS::EC2::Subnet::Id>

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - VpcCidr
      - PrivateSubnets
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      VpcCidr:
        default: "VPC CIDR"
      PrivateSubnets:
        default: "Private Subnets"

Resources:
  MasterSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Master Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        ToPort: 6443
        FromPort: 6443
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22623
        ToPort: 22623
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  WorkerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Worker Security Group
      SecurityGroupIngress:
      - IpProtocol: icmp
        FromPort: 0
        ToPort: 0
        CidrIp: !Ref VpcCidr
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref VpcCidr
      VpcId: !Ref VpcId

  MasterIngressEtcd:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: etcd
      FromPort: 2379
      ToPort: 2380
      IpProtocol: tcp

  MasterIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressWorkerVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  MasterIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressWorkerGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  MasterIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressWorkerInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  MasterIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressWorkerInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  MasterIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes kubelet, scheduler and controller manager
      FromPort: 10250
      ToPort: 10259
      IpProtocol: tcp

  MasterIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressWorkerIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  MasterIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIngressWorkerIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt MasterSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressMasterVxlan:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Vxlan packets
      FromPort: 4789
      ToPort: 4789
      IpProtocol: udp

  WorkerIngressGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressMasterGeneve:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Geneve packets
      FromPort: 6081
      ToPort: 6081
      IpProtocol: udp

  WorkerIngressInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressMasterInternal:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: tcp

  WorkerIngressInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressMasterInternalUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal cluster communication
      FromPort: 9000
      ToPort: 9999
      IpProtocol: udp

  WorkerIngressKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes secure kubelet port
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressWorkerKube:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Internal Kubernetes communication
      FromPort: 10250
      ToPort: 10250
      IpProtocol: tcp

  WorkerIngressIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressMasterIngressServices:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: tcp

  WorkerIngressIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  WorkerIngressMasterIngressServicesUDP:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt WorkerSecurityGroup.GroupId
      SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
      Description: Kubernetes ingress services
      FromPort: 30000
      ToPort: 32767
      IpProtocol: udp

  MasterIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:AttachVolume"
            - "ec2:AuthorizeSecurityGroupIngress"
            - "ec2:CreateSecurityGroup"
            - "ec2:CreateTags"
            - "ec2:CreateVolume"
            - "ec2:DeleteSecurityGroup"
            - "ec2:DeleteVolume"
            - "ec2:Describe*"
            - "ec2:DetachVolume"
            - "ec2:ModifyInstanceAttribute"
            - "ec2:ModifyVolume"
            - "ec2:RevokeSecurityGroupIngress"
            - "elasticloadbalancing:AddTags"
            - "elasticloadbalancing:AttachLoadBalancerToSubnets"
            - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
            - "elasticloadbalancing:CreateListener"
            - "elasticloadbalancing:CreateLoadBalancer"
            - "elasticloadbalancing:CreateLoadBalancerPolicy"
            - "elasticloadbalancing:CreateLoadBalancerListeners"
            - "elasticloadbalancing:CreateTargetGroup"
            - "elasticloadbalancing:ConfigureHealthCheck"
            - "elasticloadbalancing:DeleteListener"
            - "elasticloadbalancing:DeleteLoadBalancer"
            - "elasticloadbalancing:DeleteLoadBalancerListeners"
            - "elasticloadbalancing:DeleteTargetGroup"
            - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            - "elasticloadbalancing:DeregisterTargets"
            - "elasticloadbalancing:Describe*"
            - "elasticloadbalancing:DetachLoadBalancerFromSubnets"
            - "elasticloadbalancing:ModifyListener"
            - "elasticloadbalancing:ModifyLoadBalancerAttributes"
            - "elasticloadbalancing:ModifyTargetGroup"
            - "elasticloadbalancing:ModifyTargetGroupAttributes"
            - "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
            - "elasticloadbalancing:RegisterTargets"
            - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
            - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
            - "kms:DescribeKey"
            Resource: "*"

  MasterInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "MasterIamRole"

  WorkerIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"


      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action:
            - "ec2:DescribeInstances"
            - "ec2:DescribeRegions"
            Resource: "*"

  WorkerInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
      - Ref: "WorkerIamRole"

Outputs:
  MasterSecurityGroupId:
    Description: Master Security Group ID
    Value: !GetAtt MasterSecurityGroup.GroupId

  WorkerSecurityGroupId:
    Description: Worker Security Group ID
    Value: !GetAtt WorkerSecurityGroup.GroupId

  MasterInstanceProfile:
    Description: Master IAM Instance Profile
    Value: !Ref MasterInstanceProfile

  WorkerInstanceProfile:
    Description: Worker IAM Instance Profile
    Value: !Ref WorkerInstanceProfile

1.8.11. RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs valid for the various Amazon Web Services (AWS) zones you can specify for your OpenShift Container Platform nodes.

NOTE

You can also install to regions that do not have a RHCOS AMI published by importing your own AMI.

Table 1.20. RHCOS AMIs

AWS zone AWS AMI

af-south-1 ami-09921c9c1c36e695c

ap-east-1 ami-01ee8446e9af6b197


ap-northeast-1 ami-04e5b5722a55846ea

ap-northeast-2 ami-0fdc25c8a0273a742

ap-south-1 ami-09e3deb397cc526a8

ap-southeast-1 ami-0630e03f75e02eec4

ap-southeast-2 ami-069450613262ba03c

ca-central-1 ami-012518cdbd3057dfd

eu-central-1 ami-0bd7175ff5b1aef0c

eu-north-1 ami-06c9ec42d0a839ad2

eu-south-1 ami-0614d7440a0363d71

eu-west-1 ami-01b89df58b5d4d5fa

eu-west-2 ami-06f6e31ddd554f89d

eu-west-3 ami-0dc82e2517ded15a1

me-south-1 ami-07d181e3aa0f76067

sa-east-1 ami-0cd44e6dd20e6c7fa

us-east-1 ami-04a16d506e5b0e246

us-east-2 ami-0a1f868ad58ea59a7

us-west-1 ami-0a65d76e3a6f6622f

us-west-2 ami-0dd9008abadc519f1
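If you wrap these steps in a script, a simple lookup keyed on the table above avoids copy-and-paste errors when filling the RhcosAmi parameter. This is only a convenience sketch covering a subset of the zones from the table; extend the case statement with the zones you use:

```shell
# Sketch: resolve the RhcosAmi parameter value from a region name, using
# a subset of the AMI IDs listed in the RHCOS AMIs table above.
rhcos_ami() {
  case "$1" in
    us-east-1) echo "ami-04a16d506e5b0e246" ;;
    us-east-2) echo "ami-0a1f868ad58ea59a7" ;;
    us-west-2) echo "ami-0dd9008abadc519f1" ;;
    eu-west-1) echo "ami-01b89df58b5d4d5fa" ;;
    *) echo "no RHCOS AMI listed for zone: $1" >&2; return 1 ;;
  esac
}
rhcos_ami us-east-2
```

The returned value can be substituted directly into the RhcosAmi entry of the parameter JSON files in the following sections.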


1.8.12. Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.


NOTE

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account

You added your AWS keys and region to your local AWS profile by running aws configure

You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

Procedure

1. Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster's region and upload the Ignition config file to it.

IMPORTANT

The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

IMPORTANT

If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

NOTE

The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.

a. Create the bucket:

$ aws s3 mb s3://<cluster-name>-infra 1

1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

OpenShift Container Platform 46 Installing on AWS

308


b. Upload the bootstrap.ign Ignition config file to the bucket:

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

c. Verify that the file uploaded:

$ aws s3 ls s3://<cluster-name>-infra/

Example output

2019-04-03 16:15:16 314878 bootstrap.ign

2. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },
  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },



1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.

4 Specify a valid AWS::EC2::Image::Id value.

5 CIDR block to allow SSH access to the bootstrap node.

6 Specify a CIDR block in the format x.x.x.x/0-32.

7 The public subnet that is associated with your VPC to launch the bootstrap node into.

8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.

9 The master security group ID (for registering temporary rules).

10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.

11 The VPC that the created resources will belong to.

12 Specify the VpcId value from the output of the CloudFormation template for the VPC.

13 Location to fetch the bootstrap Ignition config file from.

14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.

15 Whether or not to register a network load balancer (NLB).

  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]



16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.

17 The ARN for the NLB IP target registration lambda group.

18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

19 The ARN for the external API load balancer target group.

20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

21 The ARN for the internal API load balancer target group.

22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.

23 The ARN for the internal service load balancer target group.

24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
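A malformed AllowedBootstrapSshCidr value is rejected only at stack-creation time by the template's AllowedPattern, so it can be useful to run the same check up front. The sketch below mirrors the pattern from the template (any prefix length from /0 through /32 is accepted; the function name is illustrative):

```shell
# Pre-flight sketch: validate an AllowedBootstrapSshCidr value against the
# same regular expression the template's AllowedPattern enforces.
ssh_cidr_pattern='^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(/([0-9]|1[0-9]|2[0-9]|3[0-2]))$'
check_ssh_cidr() {
  echo "$1" | grep -Eq "$ssh_cidr_pattern"
}
check_ssh_cidr "0.0.0.0/0" && echo "0.0.0.0/0 is valid"
```

Note that the VpcCidr parameter of the earlier templates is stricter, accepting only prefix lengths /16 through /24.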

3. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

IMPORTANT

You must enter the command on a single line

1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.

2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.

3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4


Example output

5. Confirm that the template components exist:

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

BootstrapInstanceId

The bootstrap Instance ID

BootstrapPublicIp

The bootstrap node public IP address

BootstrapPrivateIp

The bootstrap node private IP address

1.8.12.1. CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

Example 1.43. CloudFormation template for the bootstrap machine

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

$ aws cloudformation describe-stacks --stack-name <name>

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AllowedBootstrapSshCidr:
    AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
    ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
    Default: 0.0.0.0/0
    Description: CIDR block to allow SSH access to the bootstrap node.
    Type: String
  PublicSubnet:
    Description: The public subnet to launch the bootstrap node into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID for registering temporary rules.
    Type: AWS::EC2::SecurityGroup::Id
  VpcId:
    Description: The VPC-scoped resources will belong to this VPC.
    Type: AWS::EC2::VPC::Id
  BootstrapIgnitionLocation:
    Default: s3://my-s3-bucket/bootstrap.ign
    Description: Ignition config file location.
    Type: String
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - RhcosAmi
      - BootstrapIgnitionLocation
      - MasterSecurityGroupId
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - PublicSubnet
    - Label:
        default: "Load Balancer Automation"
      Parameters:

      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      AllowedBootstrapSshCidr:
        default: "Allowed SSH Source"
      PublicSubnet:
        default: "Public Subnet"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Bootstrap Ignition Source"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"

Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]

Resources:
  BootstrapIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: "Allow"
          Principal:
            Service:
            - "ec2.amazonaws.com"
          Action:
          - "sts:AssumeRole"
      Path: "/"
      Policies:
      - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: "ec2:Describe*"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:AttachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "ec2:DetachVolume"
            Resource: "*"
          - Effect: "Allow"
            Action: "s3:GetObject"
            Resource: "*"

  BootstrapInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: "/"
      Roles:
      - Ref: "BootstrapIamRole"

  BootstrapSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Cluster Bootstrap Security Group
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: !Ref AllowedBootstrapSshCidr
      - IpProtocol: tcp
        ToPort: 19531
        FromPort: 19531
        CidrIp: 0.0.0.0/0
      VpcId: !Ref VpcId

  BootstrapInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      IamInstanceProfile: !Ref BootstrapInstanceProfile
      InstanceType: "i3.large"
      NetworkInterfaces:
      - AssociatePublicIpAddress: "true"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "BootstrapSecurityGroup"
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "PublicSubnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
        - S3Loc: !Ref BootstrapIgnitionLocation

  RegisterBootstrapApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

  RegisterBootstrapInternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt BootstrapInstance.PrivateIp

Outputs:
  BootstrapInstanceId:
    Description: Bootstrap Instance ID
    Value: !Ref BootstrapInstance

  BootstrapPublicIp:
    Description: The bootstrap node public IP address.
    Value: !GetAtt BootstrapInstance.PublicIp

  BootstrapPrivateIp:
    Description: The bootstrap node private IP address.
    Value: !GetAtt BootstrapInstance.PrivateIp

Additional resources

See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones.

1.8.13. Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

IMPORTANT

The CloudFormation template creates a stack that represents three control plane nodes.

NOTE

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.


You generated the Ignition config files for your cluster

You created and configured a VPC and associated subnets in AWS

You created and configured DNS load balancers and listeners in AWS

You created the security groups and roles required for your cluster in AWS

You created the bootstrap machine

Procedure

1. Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AutoRegisterDNS", 5
    "ParameterValue": "yes" 6
  },
  {
    "ParameterKey": "PrivateHostedZoneId", 7
    "ParameterValue": "<random_string>" 8
  },
  {
    "ParameterKey": "PrivateHostedZoneName", 9
    "ParameterValue": "mycluster.example.com" 10
  },
  {
    "ParameterKey": "Master0Subnet", 11
    "ParameterValue": "subnet-<random_string>" 12
  },
  {
    "ParameterKey": "Master1Subnet", 13
    "ParameterValue": "subnet-<random_string>" 14
  },
  {
    "ParameterKey": "Master2Subnet", 15
    "ParameterValue": "subnet-<random_string>" 16
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 17
    "ParameterValue": "sg-<random_string>" 18
  },
  {
    "ParameterKey": "IgnitionLocation", 19



1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.

2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.

3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.

4 Specify an AWS::EC2::Image::Id value.

5 Whether or not to perform DNS etcd registration.

ParameterValue httpsapi-intltcluster_namegtltdomain_namegt22623configmaster 20 ParameterKey CertificateAuthorities 21 ParameterValue datatextplaincharset=utf-8base64ABCxYz== 22 ParameterKey MasterInstanceProfileName 23 ParameterValue ltroles_stackgt-MasterInstanceProfile-ltrandom_stringgt 24 ParameterKey MasterInstanceType 25 ParameterValue m4xlarge 26 ParameterKey AutoRegisterELB 27 ParameterValue yes 28 ParameterKey RegisterNlbIpTargetsLambdaArn 29 ParameterValue arnawslambdaltregiongtltaccount_numbergtfunctionltdns_stack_namegt-RegisterNlbIpTargets-ltrandom_stringgt 30 ParameterKey ExternalApiTargetGroupArn 31 ParameterValue arnawselasticloadbalancingltregiongtltaccount_numbergttargetgroupltdns_stack_namegt-Exter-ltrandom_stringgt 32 ParameterKey InternalApiTargetGroupArn 33 ParameterValue arnawselasticloadbalancingltregiongtltaccount_numbergttargetgroupltdns_stack_namegt-Inter-ltrandom_stringgt 34 ParameterKey InternalServiceTargetGroupArn 35 ParameterValue arnawselasticloadbalancingltregiongtltaccount_numbergttargetgroupltdns_stack_namegt-Inter-ltrandom_stringgt 36 ]

OpenShift Container Platform 46 Installing on AWS

318

6

7

8

9

10

11 13 15

12 14 16

17

18

19

20

21

22

23

24

25

26

Specify yes or no If you specify yes you must provide hosted zone information

The Route 53 private zone ID to register the etcd targets with

Specify the PrivateHostedZoneId value from the output of the CloudFormation templatefor DNS and load balancing

The Route 53 zone to register the targets with

Specify ltcluster_namegtltdomain_namegt where ltdomain_namegt is the Route 53 basedomain that you used when you generated install-configyaml file for the cluster Do notinclude the trailing period () that is displayed in the AWS console

A subnet preferably private to launch the control plane machines on

Specify a subnet from the PrivateSubnets value from the output of theCloudFormation template for DNS and load balancing

The master security group ID to associate with master nodes

Specify the MasterSecurityGroupId value from the output of the CloudFormationtemplate for the security group and roles

The location to fetch control plane Ignition config file from

Specify the generated Ignition config file location httpsapi-intltcluster_namegtltdomain_namegt22623configmaster

The base64 encoded certificate authority string to use

Specify the value from the masterign file that is in the installation directory This value isthe long string with the format datatextplaincharset=utf-8base64ABChellip xYz==

The IAM profile to associate with master nodes

Specify the MasterInstanceProfile parameter value from the output of theCloudFormation template for the security group and roles

The type of AWS instance to use for the control plane machines

Allowed values

m4xlarge

m42xlarge

m44xlarge

m48xlarge

m410xlarge

m416xlarge

m5xlarge

m52xlarge

CHAPTER 1 INSTALLING ON AWS

319

27

28

29

30

31

32

33

34

35

36

m54xlarge

m58xlarge

m510xlarge

m516xlarge

c42xlarge

c44xlarge

c48xlarge

r4xlarge

r42xlarge

r44xlarge

r48xlarge

r416xlarge

IMPORTANT

If m4 instance types are not available in your region such as with eu-west-3 specify an m5 type such as m5xlarge instead

Whether or not to register a network load balancer (NLB)

Specify yes or no If you specify yes you must provide a Lambda Amazon Resource Name(ARN) value

The ARN for NLB IP target registration lambda group

Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormationtemplate for DNS and load balancing Use arnaws-us-gov if deploying the cluster to anAWS GovCloud region

The ARN for external API load balancer target group

Specify the ExternalApiTargetGroupArn value from the output of the CloudFormationtemplate for DNS and load balancing Use arnaws-us-gov if deploying the cluster to anAWS GovCloud region

The ARN for internal API load balancer target group

Specify the InternalApiTargetGroupArn value from the output of the CloudFormationtemplate for DNS and load balancing Use arnaws-us-gov if deploying the cluster to anAWS GovCloud region

The ARN for internal service load balancer target group

Specify the InternalServiceTargetGroupArn value from the output of theCloudFormation template for DNS and load balancing Use arnaws-us-gov if deployingthe cluster to an AWS GovCloud region
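Because the parameter file is plain JSON, it can also be generated rather than hand-edited. The following sketch is illustrative only; the build_parameters helper and the sample values are not part of the installer:

```python
import json

def build_parameters(values):
    """Convert a {ParameterKey: ParameterValue} mapping into the
    list-of-objects layout that `aws cloudformation create-stack` expects."""
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

params = build_parameters({
    "InfrastructureName": "mycluster-a1b2c",    # example infra ID from metadata
    "RhcosAmi": "ami-0123456789abcdef0",        # example AMI ID
    "MasterInstanceType": "m5.xlarge",
})
print(json.dumps(params, indent=2))
```

Writing the result to a file gives you the `--parameters file://...` argument used in the next step.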


2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

IMPORTANT

You must enter the command on a single line.

1 <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

NOTE

The CloudFormation template creates a stack that represents three control plane nodes.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

1.8.13.1. CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

Example 1.44. CloudFormation template for control plane machines
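When scripting this step, you typically poll describe-stacks (or use `aws cloudformation wait stack-create-complete`) until the stack settles. A minimal sketch of the status-classification logic such a script might apply to the StackStatus field; the classify_stack_status helper is hypothetical, not part of the aws CLI:

```python
def classify_stack_status(status: str) -> str:
    """Map a CloudFormation StackStatus string to a coarse outcome.

    CREATE_COMPLETE means the stack is ready; any other terminal state
    (CREATE_FAILED, ROLLBACK_COMPLETE, ...) means the launch did not succeed.
    """
    if status == "CREATE_COMPLETE":
        return "succeeded"
    if status.endswith("_IN_PROGRESS"):
        return "in_progress"
    return "failed"

for s in ("CREATE_IN_PROGRESS", "CREATE_COMPLETE", "ROLLBACK_COMPLETE"):
    print(s, "->", classify_stack_status(s))
```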

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)


Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  AutoRegisterDNS:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
    Type: String
  PrivateHostedZoneId:
    Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
    Type: String
  PrivateHostedZoneName:
    Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
    Type: String
  Master0Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master1Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  Master2Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  MasterSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  MasterInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  MasterInstanceType:
    Default: m5.xlarge
    Type: String
    AllowedValues:
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"
  AutoRegisterELB:
    Default: "yes"
    AllowedValues:
    - "yes"
    - "no"
    Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
    Type: String
  RegisterNlbIpTargetsLambdaArn:
    Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  ExternalApiTargetGroupArn:
    Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalApiTargetGroupArn:
    Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String
  InternalServiceTargetGroupArn:
    Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
    Type: String

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - MasterInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - MasterSecurityGroupId
      - MasterInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - VpcId
      - AllowedBootstrapSshCidr
      - Master0Subnet
      - Master1Subnet
      - Master2Subnet
    - Label:
        default: "DNS"
      Parameters:
      - AutoRegisterDNS
      - PrivateHostedZoneName
      - PrivateHostedZoneId
    - Label:
        default: "Load Balancer Automation"
      Parameters:
      - AutoRegisterELB
      - RegisterNlbIpTargetsLambdaArn
      - ExternalApiTargetGroupArn
      - InternalApiTargetGroupArn
      - InternalServiceTargetGroupArn
    ParameterLabels:
      InfrastructureName:
        default: "Infrastructure Name"
      VpcId:
        default: "VPC ID"
      Master0Subnet:
        default: "Master-0 Subnet"
      Master1Subnet:
        default: "Master-1 Subnet"
      Master2Subnet:
        default: "Master-2 Subnet"
      MasterInstanceType:
        default: "Master Instance Type"
      MasterInstanceProfileName:
        default: "Master Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      BootstrapIgnitionLocation:
        default: "Master Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      MasterSecurityGroupId:
        default: "Master Security Group ID"
      AutoRegisterDNS:
        default: "Use Provided DNS Automation"
      AutoRegisterELB:
        default: "Use Provided ELB Automation"
      PrivateHostedZoneName:
        default: "Private Hosted Zone Name"
      PrivateHostedZoneId:
        default: "Private Hosted Zone ID"


Conditions:
  DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
  DoDns: !Equals ["yes", !Ref AutoRegisterDNS]

Resources:
  Master0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master0Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster0:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  RegisterMaster0InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master0.PrivateIp

  Master1:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master1Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster1:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp

  RegisterMaster1InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master1.PrivateIp


  Master2:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref MasterInstanceProfileName
      InstanceType: !Ref MasterInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "MasterSecurityGroupId"
        SubnetId: !Ref "Master2Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

  RegisterMaster2:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref ExternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalApiTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalApiTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  RegisterMaster2InternalServiceTarget:
    Condition: DoRegistration
    Type: Custom::NLBRegister
    Properties:
      ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
      TargetArn: !Ref InternalServiceTargetGroupArn
      TargetIp: !GetAtt Master2.PrivateIp

  EtcdSrvRecords:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]]]
      - !Join ["", ["0 10 2380 ", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]]]
      TTL: 60
      Type: SRV

  Etcd0Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master0.PrivateIp
      TTL: 60
      Type: A

  Etcd1Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master1.PrivateIp
      TTL: 60
      Type: A

  Etcd2Record:
    Condition: DoDns
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref PrivateHostedZoneId
      Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
      ResourceRecords:
      - !GetAtt Master2.PrivateIp
      TTL: 60
      Type: A

Outputs:
  PrivateIPs:
    Description: The control-plane node private IP addresses.
    Value:
      !Join [
        ",",
        [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
      ]


1.8.14. Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

IMPORTANT

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

NOTE

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.

You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

Procedure

1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "Subnet", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "WorkerSecurityGroupId", 7
    "ParameterValue": "sg-<random_string>" 8
  },
  {
    "ParameterKey": "IgnitionLocation", 9
    "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10
  },
  {
    "ParameterKey": "CertificateAuthorities", 11
    "ParameterValue": "" 12
  },
  {
    "ParameterKey": "WorkerInstanceProfileName", 13
    "ParameterValue": "" 14
  },
  {
    "ParameterKey": "WorkerInstanceType", 15
    "ParameterValue": "m4.large" 16
  }
]

1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
4 Specify an AWS::EC2::Image::Id value.
5 A subnet, preferably private, to launch the worker nodes on.
6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
7 The worker security group ID to associate with worker nodes.
8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
9 The location to fetch the bootstrap Ignition config file from.
10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
11 The base64 encoded certificate authority string to use.
12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
13 The IAM profile to associate with worker nodes.
14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
15 The type of AWS instance to use for the compute machines.
16 Allowed values:

m4.large
m4.xlarge
m4.2xlarge
m4.4xlarge
m4.8xlarge
m4.10xlarge
m4.16xlarge
m5.large
m5.xlarge
m5.2xlarge
m5.4xlarge
m5.8xlarge
m5.10xlarge
m5.16xlarge
c4.large
c4.xlarge
c4.2xlarge
c4.4xlarge
c4.8xlarge
r4.large
r4.xlarge
r4.2xlarge
r4.4xlarge
r4.8xlarge
r4.16xlarge

IMPORTANT

If m4 instance types are not available in your region, such as with eu-west-3, use m5 types instead.
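The m4-to-m5 fallback described in the note above is mechanical, so a script that prepares the parameter file can apply it automatically. A minimal sketch, under the assumption that every m4 size in the allowed list has an m5 counterpart of the same size suffix (true for the values shown); the fallback_instance_type helper is hypothetical:

```python
def fallback_instance_type(requested: str, m4_available: bool) -> str:
    """Return the requested type, or its m5 equivalent when m4 types
    are unavailable in the target region (for example, eu-west-3)."""
    if requested.startswith("m4.") and not m4_available:
        # Swap the family prefix, keep the size suffix (large, xlarge, ...).
        return "m5." + requested.split(".", 1)[1]
    return requested

print(fallback_instance_type("m4.large", m4_available=False))   # m5.large
print(fallback_instance_type("c4.xlarge", m4_available=False))  # c4.xlarge
```

Remember that if the substitution yields an m5 type, you must also add that type to WorkerInstanceType.AllowedValues in the template, as step 3 below describes.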

2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

3. If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

4. Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

IMPORTANT

You must enter the command on a single line.

1 <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

NOTE

The CloudFormation template creates a stack that represents one worker node.

5. Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>


6. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.

IMPORTANT

You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
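Because each worker is its own stack that differs only in stack name, the repeated create-stack calls are easy to script. The following sketch only assembles the command lines; the stack-name pattern and file names are illustrative examples, not values the installer mandates:

```python
def worker_stack_commands(count, template="worker.yaml", parameters="worker-params.json"):
    """Build one `aws cloudformation create-stack` invocation per worker,
    reusing the same template and parameter file with a unique stack name."""
    commands = []
    for i in range(count):
        commands.append(
            ["aws", "cloudformation", "create-stack",
             "--stack-name", f"cluster-worker-{i}",
             "--template-body", f"file://{template}",
             "--parameters", f"file://{parameters}"]
        )
    return commands

for cmd in worker_stack_commands(2):  # at least two workers are required
    print(" ".join(cmd))
```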

1.8.14.1. CloudFormation template for worker machines

You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.

Example 1.45. CloudFormation template for worker machines

AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)

Parameters:
  InfrastructureName:
    AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
    MaxLength: 27
    MinLength: 1
    ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
    Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
    Type: String
  RhcosAmi:
    Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
    Type: AWS::EC2::Image::Id
  Subnet:
    Description: The subnets, recommend private, to launch the master nodes into.
    Type: AWS::EC2::Subnet::Id
  WorkerSecurityGroupId:
    Description: The master security group ID to associate with master nodes.
    Type: AWS::EC2::SecurityGroup::Id
  IgnitionLocation:
    Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
    Description: Ignition config file location.
    Type: String
  CertificateAuthorities:
    Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
    Description: Base64 encoded certificate authority string to use.
    Type: String
  WorkerInstanceProfileName:
    Description: IAM profile to associate with master nodes.
    Type: String
  WorkerInstanceType:
    Default: m5.large
    Type: String
    AllowedValues:
    - "m4.large"
    - "m4.xlarge"
    - "m4.2xlarge"
    - "m4.4xlarge"
    - "m4.8xlarge"
    - "m4.10xlarge"
    - "m4.16xlarge"
    - "m5.large"
    - "m5.xlarge"
    - "m5.2xlarge"
    - "m5.4xlarge"
    - "m5.8xlarge"
    - "m5.10xlarge"
    - "m5.16xlarge"
    - "c4.large"
    - "c4.xlarge"
    - "c4.2xlarge"
    - "c4.4xlarge"
    - "c4.8xlarge"
    - "r4.large"
    - "r4.xlarge"
    - "r4.2xlarge"
    - "r4.4xlarge"
    - "r4.8xlarge"
    - "r4.16xlarge"

Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
    - Label:
        default: "Cluster Information"
      Parameters:
      - InfrastructureName
    - Label:
        default: "Host Information"
      Parameters:
      - WorkerInstanceType
      - RhcosAmi
      - IgnitionLocation
      - CertificateAuthorities
      - WorkerSecurityGroupId
      - WorkerInstanceProfileName
    - Label:
        default: "Network Configuration"
      Parameters:
      - Subnet
    ParameterLabels:
      Subnet:
        default: "Subnet"
      InfrastructureName:
        default: "Infrastructure Name"
      WorkerInstanceType:
        default: "Worker Instance Type"
      WorkerInstanceProfileName:
        default: "Worker Instance Profile Name"
      RhcosAmi:
        default: "Red Hat Enterprise Linux CoreOS AMI ID"
      IgnitionLocation:
        default: "Worker Ignition Source"
      CertificateAuthorities:
        default: "Ignition CA String"
      WorkerSecurityGroupId:
        default: "Worker Security Group ID"

Resources:
  Worker0:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref RhcosAmi
      BlockDeviceMappings:
      - DeviceName: /dev/xvda
        Ebs:
          VolumeSize: "120"
          VolumeType: "gp2"
      IamInstanceProfile: !Ref WorkerInstanceProfileName
      InstanceType: !Ref WorkerInstanceType
      NetworkInterfaces:
      - AssociatePublicIpAddress: "false"
        DeviceIndex: "0"
        GroupSet:
        - !Ref "WorkerSecurityGroupId"
        SubnetId: !Ref "Subnet"
      UserData:
        Fn::Base64: !Sub
        - '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
        - {
          SOURCE: !Ref IgnitionLocation,
          CA_BUNDLE: !Ref CertificateAuthorities
        }
      Tags:
      - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
        Value: "shared"

Outputs:
  PrivateIP:
    Description: The compute node private IP address.
    Value: !GetAtt Worker0.PrivateIp

1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites

You configured an AWS account.

You added your AWS keys and region to your local AWS profile by running aws configure.

You generated the Ignition config files for your cluster.


You created and configured a VPC and associated subnets in AWS.

You created and configured DNS, load balancers, and listeners in AWS.

You created the security groups and roles required for your cluster in AWS.

You created the bootstrap machine.

You created the control plane machines.

You created the worker nodes.

Procedure

1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

$ openshift-install wait-for bootstrap-complete --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.

Example output

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s

If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

NOTE

After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.

Additional resources

See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.

See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.

1.8.16. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.8.17. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

$ oc get nodes

Example output

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0


The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal    Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal    Pending

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0
master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0
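The note in step 3 requires that user-provisioned infrastructure implement its own approver for kubelet serving CSRs. The core decision, approve only pending requests from the expected requestors, can be sketched as follows; the CSR dictionaries model a small subset of the fields that `oc get csr -o json` returns, and the should_approve helper is hypothetical, not a supported API:

```python
APPROVED_REQUESTORS = (
    "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper",
)

def should_approve(csr: dict) -> bool:
    """Approve a CSR only if it is still pending (empty status) and was
    submitted either by the node-bootstrapper service account (client CSRs)
    or by a node identity, system:node:<name> (serving CSRs). A real approver
    must also verify the node identity before approving."""
    if csr.get("status"):  # non-empty status: already approved or denied
        return False
    requestor = csr.get("requestor", "")
    return requestor in APPROVED_REQUESTORS or requestor.startswith("system:node:")

pending = {"name": "csr-bfd72",
           "requestor": "system:node:ip-10-0-50-126.us-east-2.compute.internal",
           "status": {}}
print(should_approve(pending))
```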

CHAPTER 1. INSTALLING ON AWS

339

NOTE

It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests.

1.8.18. Initial Operator configuration

After the control plane initializes, you must immediately configure some Operators so that they all become available.

Prerequisites

Your control plane has initialized.

Procedure

1. Watch the cluster components come online:

Example output

master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

$ watch -n5 oc get clusteroperators

NAME                            VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                  4.6.0     True        False         False      3h56m
cloud-credential                4.6.0     True        False         False      29h
cluster-autoscaler              4.6.0     True        False         False      29h
config-operator                 4.6.0     True        False         False      6h39m
console                         4.6.0     True        False         False      3h59m
csi-snapshot-controller         4.6.0     True        False         False      4h12m
dns                             4.6.0     True        False         False      4h15m
etcd                            4.6.0     True        False         False      29h
image-registry                  4.6.0     True        False         False      3h59m
ingress                         4.6.0     True        False         False      4h30m
insights                        4.6.0     True        False         False      29h
kube-apiserver                  4.6.0     True        False         False      29h
kube-controller-manager         4.6.0     True        False         False      29h
kube-scheduler                  4.6.0     True        False         False      29h
kube-storage-version-migrator   4.6.0     True        False         False      4h2m
machine-api                     4.6.0     True        False         False      29h
machine-approver                4.6.0     True        False         False      6h34m
machine-config                  4.6.0     True        False         False      3h56m
marketplace                     4.6.0     True        False         False      4h2m
monitoring                      4.6.0     True        False         False      6h31m
network                         4.6.0     True        False         False      29h
node-tuning                     4.6.0     True        False         False      4h30m


2. Configure the Operators that are not available.
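To see at a glance which Operators still need attention, the AVAILABLE column can be filtered. A minimal, unofficial sketch, run here against embedded sample output (on a live cluster you would pipe oc get clusteroperators instead):

```shell
# Illustrative sketch: list cluster Operators whose AVAILABLE column is
# not True. The sample text stands in for live 'oc get clusteroperators'.
co_output='NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
image-registry   4.6.0     False       True          False      3h59m
ingress          4.6.0     True        False         False      4h30m'

# Skip the header row; print names where AVAILABLE is not True.
unavailable=$(printf '%s\n' "$co_output" | awk 'NR > 1 && $3 != "True" {print $1}')
printf '%s\n' "$unavailable"
```

In this sample, only image-registry is reported, which matches the registry-storage configuration work described next.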

1.8.18.1. Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.

Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

1.8.18.1.1. Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Prerequisites

You have a cluster on AWS with user-provisioned infrastructure.

For Amazon S3 storage, the secret is expected to contain two keys:

REGISTRY_STORAGE_S3_ACCESSKEY

REGISTRY_STORAGE_S3_SECRETKEY

Procedure

Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.

1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.

2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

Example configuration

openshift-apiserver                        4.6.0     True        False         False      3h56m
openshift-controller-manager               4.6.0     True        False         False      4h36m
openshift-samples                          4.6.0     True        False         False      4h30m
operator-lifecycle-manager                 4.6.0     True        False         False      29h
operator-lifecycle-manager-catalog         4.6.0     True        False         False      29h
operator-lifecycle-manager-packageserver   4.6.0     True        False         False      3h59m
service-ca                                 4.6.0     True        False         False      29h
storage                                    4.6.0     True        False         False      4h30m

$ oc edit configs.imageregistry.operator.openshift.io/cluster

storage:


WARNING

To secure your registry images in AWS, block public access to the S3 bucket.
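Step 1 of the procedure above asks for a bucket lifecycle policy that aborts day-old incomplete multipart uploads. One possible shape for that policy, written for the aws s3api CLI; the bucket name is a placeholder and the rule ID is an arbitrary illustrative name, not a value from the official procedure:

```shell
# Sketch of a bucket lifecycle policy that aborts incomplete multipart
# uploads after one day. The rule ID is an invented illustrative name.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
    }
  ]
}
EOF

# Applying it requires credentials and a real bucket, so the call is
# shown as a comment:
#   aws s3api put-bucket-lifecycle-configuration \
#       --bucket <bucket-name> \
#       --lifecycle-configuration file://lifecycle.json

# Sanity-check that the policy document is well-formed JSON.
python3 -m json.tool lifecycle.json > /dev/null && echo "lifecycle.json OK"
```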

1.8.18.1.2. Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure

To set the image registry storage to an empty directory:

WARNING

Configure this option for only non-production clusters.

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Wait a few minutes and run the command again.

1.8.19. Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites

You completed the initial Operator configuration for your cluster.

Procedure

1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

  s3:
    bucket: <bucket-name>
    region: <region-name>

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
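If you script this step, the "wait a few minutes and run the command again" advice can be automated as a retry loop. A minimal sketch of that shape, where apply_patch is a stub standing in for the real oc patch call (which needs a live cluster) and simply fails twice before succeeding:

```shell
# Illustrative retry loop. 'apply_patch' is a stub for the real command:
#   oc patch configs.imageregistry.operator.openshift.io cluster \
#       --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
attempt=0
apply_patch() {
  attempt=$((attempt + 1))
  # Simulate the NotFound error on the first two attempts.
  [ "$attempt" -ge 3 ]
}

until apply_patch; do
  sleep 1   # in practice, wait a few minutes between attempts
done
echo "patched after $attempt attempts"
```

With the real command substituted for the stub, the loop keeps retrying until the image registry config resource exists; add an attempt cap for production scripts.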


Delete the stack by using the AWS CLI:

1: <name> is the name of your bootstrap stack.

Delete the stack by using the AWS CloudFormation console.

1.8.20. Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites

You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.

You installed the OpenShift CLI (oc).

You installed the jq package.

You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).

Procedure

1. Determine the routes to create.

To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

Example output

2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

$ aws cloudformation delete-stack --stack-name <name> 1

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>

$ oc -n openshift-ingress get service router-default



Example output

3. Locate the hosted zone ID for the load balancer:

1: For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

The output of this command is the load balancer hosted zone ID.

4. Obtain the public hosted zone ID for your cluster's domain:

1 2: For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.

Example output

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

5. Add the alias records to your private zone:

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                         PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab328.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m

$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1

Z3AADJGX6KTTL2

$ aws route53 list-hosted-zones-by-name --dns-name "<domain_name>" \ 1
    --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \ 2
    --output text

/hostedzone/Z3URY6TWQ91KVV

$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>", 3



1: For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.

2: For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

3: For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

4: For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

6. Add the records to your public zone:

1: For <public_hosted_zone_id>, specify the public hosted zone for your domain.

2: For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.

3: For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.

4: For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.

1.8.21. Completing an AWS installation on user-provisioned infrastructure

>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget": {
>           "HostedZoneId": "<hosted_zone_id>", 3
>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'
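If you create several of these records, generating the change batch from parameters avoids copy-paste errors. A sketch with an invented helper name and placeholder values; the JSON layout mirrors the change-batch commands in this procedure:

```shell
# Illustrative helper (the name make_change_batch is invented here).
# $1 = record name, $2 = alias hosted zone ID, $3 = alias DNS name
make_change_batch() {
  cat <<EOF
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "$1",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "$2",
          "DNSName": "$3",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF
}

# Placeholder values; \\052 is the escaped form of the * wildcard.
make_change_batch '\\052.apps.mycluster.example.com' 'Z3AADJGX6KTTL2' 'elb-example.us-east-2.elb.amazonaws.com.' > change-batch.json

# The file could then be passed with:
#   aws route53 change-resource-record-sets --hosted-zone-id <zone_id> \
#       --change-batch file://change-batch.json
python3 -m json.tool change-batch.json > /dev/null && echo "change-batch.json OK"
```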



After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites

You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

You installed the oc CLI.

Procedure

1. From the directory that contains the installation program, complete the cluster installation:

1: For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

IMPORTANT

The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

2. Register your cluster on the Cluster registration page.

1.8.22. Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites

$ openshift-install --dir=<installation_directory> wait-for install-complete 1

INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s


You have access to the installation host.

You completed a cluster installation and all cluster Operators are available.

Procedure

1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

NOTE

Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

2. List the OpenShift Container Platform web console route:

NOTE

Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

1.8.23. Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

1.8.24. Next steps

Validate an installation

Customize your cluster

If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.

If necessary, you can opt out of remote health reporting.

$ cat <installation_directory>/auth/kubeadmin-password

$ oc get routes -n openshift-console | grep console-openshift

console   console-openshift-console.apps.<cluster_name>.<base_domain>   console   https   reencrypt/Redirect   None
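The notes above mention that the console route also appears in the .openshift_install.log file. A small illustrative extraction over a sample log line; the exact line format here is an assumption, so adjust the pattern to your actual log:

```shell
# Illustrative: pull the first https URL out of a sample install-log line.
log_line='level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com"'

url=$(printf '%s\n' "$log_line" | grep -o 'https://[^"]*' | head -n 1)
printf '%s\n' "$url"
```

On a real host you would replace the sample variable with something like grep over <installation_directory>/.openshift_install.log.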



1.9. UNINSTALLING A CLUSTER ON AWS

You can remove a cluster that you deployed to Amazon Web Services (AWS).

1.9.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Prerequisites

Have a copy of the installation program that you used to deploy the cluster.

Have the files that the installation program generated when you created your cluster.

Procedure

1. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

1: For <installation_directory>, specify the path to the directory that you stored the installation files in.

2: To view different details, specify warn, debug, or error instead of info.

NOTE

You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.

$ openshift-install destroy cluster --dir=<installation_directory> --log-level=info 1 2


  • Table of Contents
  • CHAPTER 1. INSTALLING ON AWS
    • 1.1. CONFIGURING AN AWS ACCOUNT
      • 1.1.1. Configuring Route 53
        • 1.1.1.1. Ingress Operator endpoint configuration for AWS Route 53
      • 1.1.2. AWS account limits
      • 1.1.3. Required AWS permissions
      • 1.1.4. Creating an IAM user
      • 1.1.5. Supported AWS regions
      • 1.1.6. Next steps
    • 1.2. INSTALLING A CLUSTER ON AWS WITH CUSTOMIZATIONS
      • 1.2.1. Prerequisites
      • 1.2.2. Internet and Telemetry access for OpenShift Container Platform
      • 1.2.3. Generating an SSH private key and adding it to the agent
      • 1.2.4. Obtaining the installation program
      • 1.2.5. Creating the installation configuration file
        • 1.2.5.1. Installation configuration parameters
        • 1.2.5.2. Sample customized install-config.yaml file for AWS
      • 1.2.6. Deploying the cluster
      • 1.2.7. Installing the OpenShift CLI by downloading the binary
        • 1.2.7.1. Installing the OpenShift CLI on Linux
        • 1.2.7.2. Installing the OpenShift CLI on Windows
        • 1.2.7.3. Installing the OpenShift CLI on macOS
      • 1.2.8. Logging in to the cluster by using the CLI
      • 1.2.9. Logging in to the cluster by using the web console
      • 1.2.10. Next steps
    • 1.3. INSTALLING A CLUSTER ON AWS WITH NETWORK CUSTOMIZATIONS
      • 1.3.1. Prerequisites
      • 1.3.2. Internet and Telemetry access for OpenShift Container Platform
      • 1.3.3. Generating an SSH private key and adding it to the agent
      • 1.3.4. Obtaining the installation program
      • 1.3.5. Creating the installation configuration file
        • 1.3.5.1. Installation configuration parameters
        • 1.3.5.2. Network configuration parameters
        • 1.3.5.3. Sample customized install-config.yaml file for AWS
      • 1.3.6. Modifying advanced network configuration parameters
      • 1.3.7. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
      • 1.3.8. Cluster Network Operator configuration
        • 1.3.8.1. Configuration parameters for the OpenShift SDN default CNI network provider
        • 1.3.8.2. Configuration parameters for the OVN-Kubernetes default CNI network provider
        • 1.3.8.3. Cluster Network Operator example configuration
      • 1.3.9. Configuring hybrid networking with OVN-Kubernetes
      • 1.3.10. Deploying the cluster
      • 1.3.11. Installing the OpenShift CLI by downloading the binary
        • 1.3.11.1. Installing the OpenShift CLI on Linux
        • 1.3.11.2. Installing the OpenShift CLI on Windows
        • 1.3.11.3. Installing the OpenShift CLI on macOS
      • 1.3.12. Logging in to the cluster by using the CLI
      • 1.3.13. Logging in to the cluster by using the web console
      • 1.3.14. Next steps
    • 1.4. INSTALLING A CLUSTER ON AWS INTO AN EXISTING VPC
      • 1.4.1. Prerequisites
      • 1.4.2. About using a custom VPC
        • 1.4.2.1. Requirements for using your VPC
        • 1.4.2.2. VPC validation
        • 1.4.2.3. Division of permissions
        • 1.4.2.4. Isolation between clusters
      • 1.4.3. Internet and Telemetry access for OpenShift Container Platform
      • 1.4.4. Generating an SSH private key and adding it to the agent
      • 1.4.5. Obtaining the installation program
      • 1.4.6. Creating the installation configuration file
        • 1.4.6.1. Installation configuration parameters
        • 1.4.6.2. Sample customized install-config.yaml file for AWS
        • 1.4.6.3. Configuring the cluster-wide proxy during installation
      • 1.4.7. Deploying the cluster
      • 1.4.8. Installing the OpenShift CLI by downloading the binary
        • 1.4.8.1. Installing the OpenShift CLI on Linux
        • 1.4.8.2. Installing the OpenShift CLI on Windows
        • 1.4.8.3. Installing the OpenShift CLI on macOS
      • 1.4.9. Logging in to the cluster by using the CLI
      • 1.4.10. Logging in to the cluster by using the web console
      • 1.4.11. Next steps
    • 1.5. INSTALLING A PRIVATE CLUSTER ON AWS
      • 1.5.1. Prerequisites
      • 1.5.2. Private clusters
        • 1.5.2.1. Private clusters in AWS
      • 1.5.3. About using a custom VPC
        • 1.5.3.1. Requirements for using your VPC
        • 1.5.3.2. VPC validation
        • 1.5.3.3. Division of permissions
        • 1.5.3.4. Isolation between clusters
      • 1.5.4. Internet and Telemetry access for OpenShift Container Platform
      • 1.5.5. Generating an SSH private key and adding it to the agent
      • 1.5.6. Obtaining the installation program
      • 1.5.7. Manually creating the installation configuration file
        • 1.5.7.1. Installation configuration parameters
        • 1.5.7.2. Sample customized install-config.yaml file for AWS
        • 1.5.7.3. Configuring the cluster-wide proxy during installation
      • 1.5.8. Deploying the cluster
      • 1.5.9. Installing the OpenShift CLI by downloading the binary
        • 1.5.9.1. Installing the OpenShift CLI on Linux
        • 1.5.9.2. Installing the OpenShift CLI on Windows
        • 1.5.9.3. Installing the OpenShift CLI on macOS
      • 1.5.10. Logging in to the cluster by using the CLI
      • 1.5.11. Logging in to the cluster by using the web console
      • 1.5.12. Next steps
    • 1.6. INSTALLING A CLUSTER ON AWS INTO A GOVERNMENT REGION
      • 1.6.1. Prerequisites
      • 1.6.2. AWS government regions
      • 1.6.3. Private clusters
        • 1.6.3.1. Private clusters in AWS
      • 1.6.4. About using a custom VPC
        • 1.6.4.1. Requirements for using your VPC
        • 1.6.4.2. VPC validation
        • 1.6.4.3. Division of permissions
        • 1.6.4.4. Isolation between clusters
      • 1.6.5. Internet and Telemetry access for OpenShift Container Platform
      • 1.6.6. Generating an SSH private key and adding it to the agent
      • 1.6.7. Obtaining the installation program
      • 1.6.8. Manually creating the installation configuration file
        • 1.6.8.1. Installation configuration parameters
        • 1.6.8.2. Sample customized install-config.yaml file for AWS
        • 1.6.8.3. AWS regions without a published RHCOS AMI
        • 1.6.8.4. Uploading a custom RHCOS AMI in AWS
        • 1.6.8.5. Configuring the cluster-wide proxy during installation
      • 1.6.9. Deploying the cluster
      • 1.6.10. Installing the OpenShift CLI by downloading the binary
        • 1.6.10.1. Installing the OpenShift CLI on Linux
        • 1.6.10.2. Installing the OpenShift CLI on Windows
        • 1.6.10.3. Installing the OpenShift CLI on macOS
      • 1.6.11. Logging in to the cluster by using the CLI
      • 1.6.12. Logging in to the cluster by using the web console
      • 1.6.13. Next steps
    • 1.7. INSTALLING A CLUSTER ON USER-PROVISIONED INFRASTRUCTURE IN AWS BY USING CLOUDFORMATION TEMPLATES
      • 1.7.1. Prerequisites
      • 1.7.2. Internet and Telemetry access for OpenShift Container Platform
      • 1.7.3. Required AWS infrastructure components
        • 1.7.3.1. Cluster machines
        • 1.7.3.2. Certificate signing requests management
        • 1.7.3.3. Other infrastructure components
        • 1.7.3.4. Required AWS permissions
      • 1.7.4. Obtaining the installation program
      • 1.7.5. Generating an SSH private key and adding it to the agent
      • 1.7.6. Creating the installation files for AWS
        • 1.7.6.1. Optional: Creating a separate /var partition
        • 1.7.6.2. Creating the installation configuration file
        • 1.7.6.3. Configuring the cluster-wide proxy during installation
        • 1.7.6.4. Creating the Kubernetes manifest and Ignition config files
      • 1.7.7. Extracting the infrastructure name
      • 1.7.8. Creating a VPC in AWS
        • 1.7.8.1. CloudFormation template for the VPC
      • 1.7.9. Creating networking and load balancing components in AWS
        • 1.7.9.1. CloudFormation template for the network and load balancers
      • 1.7.10. Creating security group and roles in AWS
        • 1.7.10.1. CloudFormation template for security objects
      • 1.7.11. RHCOS AMIs for the AWS infrastructure
        • 1.7.11.1. AWS regions without a published RHCOS AMI
        • 1.7.11.2. Uploading a custom RHCOS AMI in AWS
      • 1.7.12. Creating the bootstrap node in AWS
        • 1.7.12.1. CloudFormation template for the bootstrap machine
      • 1.7.13. Creating the control plane machines in AWS
        • 1.7.13.1. CloudFormation template for control plane machines
      • 1.7.14. Creating the worker nodes in AWS
        • 1.7.14.1. CloudFormation template for worker machines
      • 1.7.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
      • 1.7.16. Installing the OpenShift CLI by downloading the binary
        • 1.7.16.1. Installing the OpenShift CLI on Linux
        • 1.7.16.2. Installing the OpenShift CLI on Windows
        • 1.7.16.3. Installing the OpenShift CLI on macOS
      • 1.7.17. Logging in to the cluster by using the CLI
      • 1.7.18. Approving the certificate signing requests for your machines
      • 1.7.19. Initial Operator configuration
        • 1.7.19.1. Image registry storage configuration
      • 1.7.20. Deleting the bootstrap resources
      • 1.7.21. Creating the Ingress DNS Records
      • 1.7.22. Completing an AWS installation on user-provisioned infrastructure
      • 1.7.23. Logging in to the cluster by using the web console
      • 1.7.24. Additional resources
                                                                                                                                              • 1725 Next steps
• 1.8. INSTALLING A CLUSTER ON AWS THAT USES MIRRORED INSTALLATION CONTENT
  • 1.8.1. Prerequisites
  • 1.8.2. About installations in restricted networks
    • 1.8.2.1. Additional limits
  • 1.8.3. Internet and Telemetry access for OpenShift Container Platform
  • 1.8.4. Required AWS infrastructure components
    • 1.8.4.1. Cluster machines
    • 1.8.4.2. Certificate signing requests management
    • 1.8.4.3. Other infrastructure components
    • 1.8.4.4. Required AWS permissions
  • 1.8.5. Generating an SSH private key and adding it to the agent
  • 1.8.6. Creating the installation files for AWS
    • 1.8.6.1. Optional: Creating a separate /var partition
    • 1.8.6.2. Creating the installation configuration file
    • 1.8.6.3. Configuring the cluster-wide proxy during installation
    • 1.8.6.4. Creating the Kubernetes manifest and Ignition config files
  • 1.8.7. Extracting the infrastructure name
  • 1.8.8. Creating a VPC in AWS
    • 1.8.8.1. CloudFormation template for the VPC
  • 1.8.9. Creating networking and load balancing components in AWS
    • 1.8.9.1. CloudFormation template for the network and load balancers
  • 1.8.10. Creating security group and roles in AWS
    • 1.8.10.1. CloudFormation template for security objects
  • 1.8.11. RHCOS AMIs for the AWS infrastructure
  • 1.8.12. Creating the bootstrap node in AWS
    • 1.8.12.1. CloudFormation template for the bootstrap machine
  • 1.8.13. Creating the control plane machines in AWS
    • 1.8.13.1. CloudFormation template for control plane machines
  • 1.8.14. Creating the worker nodes in AWS
    • 1.8.14.1. CloudFormation template for worker machines
  • 1.8.15. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
  • 1.8.16. Logging in to the cluster by using the CLI
  • 1.8.17. Approving the certificate signing requests for your machines
  • 1.8.18. Initial Operator configuration
    • 1.8.18.1. Image registry storage configuration
  • 1.8.19. Deleting the bootstrap resources
  • 1.8.20. Creating the Ingress DNS Records
  • 1.8.21. Completing an AWS installation on user-provisioned infrastructure
  • 1.8.22. Logging in to the cluster by using the web console
  • 1.8.23. Additional resources
  • 1.8.24. Next steps
• 1.9. UNINSTALLING A CLUSTER ON AWS
  • 1.9.1. Removing a cluster that uses installer-provisioned infrastructure
Page 124: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 125: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 126: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 127: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 128: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 129: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 130: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 131: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 132: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 133: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 134: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 135: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 136: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 137: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 138: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 139: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 140: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 141: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 142: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 143: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 144: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 145: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 146: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 147: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 148: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 149: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 150: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 151: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 152: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 153: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 154: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 155: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 156: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 157: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 158: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 159: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 160: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 161: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 162: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 163: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 164: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 165: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 166: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 167: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 168: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 169: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 170: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 171: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 172: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 173: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 174: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 175: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 176: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 177: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 178: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 179: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 180: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 181: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 182: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 183: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 184: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 185: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 186: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 187: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 188: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 189: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 190: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 191: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 192: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 193: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 194: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 195: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 196: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 197: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 198: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 199: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 200: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 201: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 202: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 203: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 204: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 205: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 206: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 207: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 208: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 209: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 210: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 211: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 212: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 213: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 214: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 215: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 216: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 217: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 218: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 219: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 220: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 221: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 222: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 223: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 224: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 225: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 226: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 227: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 228: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 229: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 230: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 231: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 232: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 233: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 234: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 235: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 236: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 237: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 238: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 239: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 240: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 241: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 242: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 243: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 244: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 245: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 246: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 247: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 248: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 249: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 250: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 251: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 252: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 253: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 254: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 255: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 256: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 257: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 258: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 259: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 260: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 261: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 262: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 263: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 264: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 265: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 266: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 267: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 268: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6
Page 269: OpenShift Container Platform 4...1.8.14. Creating the worker nodes in AWS OpenShift Container Platform 4.6 Installing on AWS 4. OpenShift Container Platform 4.6 Installing on AWS 6