{"id":15854,"date":"2021-03-09T07:07:26","date_gmt":"2021-03-09T06:07:26","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/"},"modified":"2021-03-09T07:07:26","modified_gmt":"2021-03-09T06:07:26","slug":"rancher-up-and-running-on-ec2-2-three-nodes","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/","title":{"rendered":"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes"},"content":{"rendered":"<p>In the <a href=\"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-1-one-node\/\" target=\"_blank\" rel=\"noopener\">last post<\/a> we brought up an RKE Kubernetes cluster on a single node. While that is fine for demonstrations or testing, it is not something for a real-life setup. Running the control plane, the <a href=\"https:\/\/etcd.io\/\" target=\"_blank\" rel=\"noopener\">etcd<\/a> members and the worker roles all on one node is usually not what you want, as such a setup cannot guarantee fault tolerance. To make the RKE cluster highly available we&#8217;ll be adding two additional nodes to the configuration in this post. We&#8217;ll end up with three nodes, all running etcd, the control plane and workers.<\/p>\n<p><!--more--><\/p>\n<p>Before you can add the additional nodes, they need to be prepared in very much the same way as the first node: bring the system to the latest release, install a supported version of Docker, create the group and the user, and set up the same SSH configuration as on the first node. 
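<\/p>\n<p>Why three nodes, and not two? etcd needs a majority (quorum) of its members to be available: a cluster of n members tolerates floor((n-1)\/2) member failures, so one node tolerates none, two nodes still tolerate none, and three nodes survive the loss of one. The arithmetic as a quick sketch (plain shell, nothing Rancher-specific):<\/p>

```shell
#!/bin/sh
# etcd quorum: a cluster of n members tolerates floor((n-1)/2) failures.
for n in 1 2 3 5; do
  echo "$n member(s): tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

<p>This is why a second node alone would buy no fault tolerance for etcd; three is the smallest highly available member count. Back to preparing the two new nodes: 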
<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n$ sudo apt update &amp;&amp; sudo apt dist-upgrade -y &amp;&amp; sudo systemctl reboot\n$ sudo hostnamectl set-hostname rancher2\n$ # sudo hostnamectl set-hostname rancher3 # on the third node\n$ sudo bash\n$ echo \"10.0.1.253 rancher2 rancher2.it.dbi-services.com\" &gt;&gt; \/etc\/hosts\n$ # echo \"10.0.1.73 rancher3 rancher3.it.dbi-services.com\" &gt;&gt; \/etc\/hosts # on the third node\n$ exit\n$ curl https:\/\/releases.rancher.com\/install-docker\/19.03.sh | sudo sh\n$ sudo bash\n$ echo \"rancher ALL=(ALL) NOPASSWD: ALL\" &gt;&gt; \/etc\/sudoers\n$ exit\n$ sudo systemctl reboot\n$ sudo groupadd rancher\n$ sudo useradd -g rancher -G docker -m -s \/bin\/bash rancher\n$ sudo passwd rancher\n$ sudo su - rancher\n<\/pre>\n<p>Before proceeding, make sure that the rancher user on the additional nodes uses the same SSH key as on the first node, and that you can log in from the first node without being prompted for a password:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n$ mkdir .ssh\n$ chmod 700 .ssh\/\n$ echo \"-----BEGIN OPENSSH PRIVATE KEY-----\n&gt; b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn\n&gt; NhAAAAAwEAAQAAAQEAx+iJ2W\/nGWytnVxyEeRuUDf8UyX3XOxEv7w+TeNGm3o6votXzsEY\n&gt; CclNxZ0KBt72OnPlpCjNgMOZhKC7XIDwEkhldLyMUVV8jdh\/03qfJDyVBp4zqpQ2s1yf\/b\n&gt; SU8cqOrj0gSYmozQdbGybZHmzgj+q9HS5iCAJZ7DUeM43E6kUvHpBJ6a1uP2fIr6+BRd25\n&gt; sejcT7kgu50Dv\/cVxQ1s0hVydX29kAe0S9IFZUWIlsPCNzPxUGNJxigoC2tAcsXttyeguQ\n&gt; dtCzTYPgm3wBOoIOR9pAns8kHfiaajZK36vdF6\/nEuaI2pw0IpkAct6aFqWq54utgdG9zv\n&gt; a8mqci\/94QAAA8i6pMTbuqTE2wAAAAdzc2gtcnNhAAABAQDH6InZb+cZbK2dXHIR5G5QN\/\n&gt; xTJfdc7ES\/vD5N40abejq+i1fOwRgJyU3FnQoG3vY6c+WkKM2Aw5mEoLtcgPASSGV0vIxR\n&gt; VXyN2H\/Tep8kPJUGnjOqlDazXJ\/9tJTxyo6uPSBJiajNB1sbJtkebOCP6r0dLmIIAlnsNR\n&gt; 4zjcTqRS8ekEnprW4\/Z8ivr4FF3bmx6NxPuSC7nQO\/9xXFDWzSFXJ1fb2QB7RL0gVlRYiW\n&gt; w8I3M\/FQY0nGKCgLa0Byxe23J6C5B20LNNg+CbfAE6gg5H2kCezyQd+JpqNkrfq90Xr+cS\n&gt; 
5ojanDQimQBy3poWparni62B0b3O9ryapyL\/3hAAAAAwEAAQAAAQAueLVK8cOUWnpFoY72\n&gt; 79ZhGZKztZi6ZkZZGCaXrqTkUdbEItpnuuWeqMhGjwocrMoqrnSM49tZ+p5+gWrsxyCH74\n&gt; J+T7KC2c+ZneGhRNkn8Flob3BtUAUjTv32WXtidgcTJCyUS8cM2o\/oUPCaLQ9LBXOvC\/BI\n&gt; ElvbGEIMFAHZv4+eVcZt1NJG3qlu8CXfxRAe6UPLAJOATRyFoNBycPyYu9Hhpr2vXvzksc\n&gt; QJUT177q2nu5U+UbCAatekQSGVqv18RWnECKJP4ntSbUMhg\/PoPQALnWC09epD+397Yqwp\n&gt; uevR76u7S78q0SnycCvT9EMwpGRjl1e\/FTZFejEs9rY9AAAAgQDlMVjYrJ4l5jIrT6GBPE\n&gt; 7cBBlMW7P0sr1qFxjQQ05JC4CpgCkvqQDqL4alErQ5KTwk9ZsgJY1N49tQk6Rtxv98BK8K\n&gt; x3d0dth\/2q690iDG6LzExTFI26fjPK0a22FLouXSexoQtsHqnpefR9HuJWHPAIhBlgjX98\n&gt; Ce\/A9McrIfOAAAAIEA\/jhYGQaiqhZJIo7ggXVT3yj2ysXjPQ9TR+WRb+Ze3esi\/bAUfKfK\n&gt; 2XtZTALNTFw6+KlorHK5ZgvMdpPLSeAg0htO5g6dLhmVv8VuAItVFQMm\/R6AGFc\/+EJw9k\n&gt; iWaGakJzmzCBRwfyZFh3MeMM9sxq60HyV1VHx\/SzQvwKNVOJsAAACBAMlO2QU4r1H8kyzu\n&gt; jn5\/NgX0lO6iHDhQWKQywrQ3NjYmtYRBhwpT62MpnpHpev6OpkR2xPOJ+9fDG2K1Q3raSP\n&gt; jfKaurZlMqmvVeziIhQEXrB3L3vnyq5Jx85oqHv7sh7PYCBD4J6zgL5o66fZOoqdc57GLC\n&gt; K+XnWjDZpULuQxUzAAAAD3JhbmNoZXJAcmFuZ2VyMQECAw==\n&gt; -----END OPENSSH PRIVATE KEY-----\" &gt; .ssh\/id_rsa\n$ chmod 600 .ssh\/id_rsa \n$ echo \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDH6InZb+cZbK2dXHIR5G5QN\/xTJfdc7ES\/vD5N40abejq+i1fOwRgJyU3FnQoG3vY6c+WkKM2Aw5mEoLtcgPASSGV0vIxRVXyN2H\/Tep8kPJUGnjOqlDazXJ\/9tJTxyo6uPSBJiajNB1sbJtkebOCP6r0dLmIIAlnsNR4zjcTqRS8ekEnprW4\/Z8ivr4FF3bmx6NxPuSC7nQO\/9xXFDWzSFXJ1fb2QB7RL0gVlRYiWw8I3M\/FQY0nGKCgLa0Byxe23J6C5B20LNNg+CbfAE6gg5H2kCezyQd+JpqNkrfq90Xr+cS5ojanDQimQBy3poWparni62B0b3O9ryapyL\/3h rancher@rancher1\" &gt;&gt; .ssh\/authorized_keys\nrancher@rancher1:~$ ssh 10.0.1.253\nThe authenticity of host '10.0.1.253 (10.0.1.253)' can't be established.\nECDSA key fingerprint is SHA256:\/JzK5lFQv6qsM5zi4A+1JYwS5u0Iup3uUUV8927MF50.\nAre you sure you want to continue connecting (yes\/no)? 
yes\nWarning: Permanently added '10.0.1.253' (ECDSA) to the list of known hosts.\nLinux rancher2 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64\n\nThe programs included with the Debian GNU\/Linux system are free software;\nthe exact distribution terms for each program are described in the\nindividual files in \/usr\/share\/doc\/*\/copyright.\n\nDebian GNU\/Linux comes with ABSOLUTELY NO WARRANTY, to the extent\npermitted by applicable law.\nrancher@rancher2:~$ logout\nConnection to 10.0.1.253 closed.\nrancher@rancher1:~$ ssh 10.0.1.73\nThe authenticity of host '10.0.1.73 (10.0.1.73)' can't be established.\nECDSA key fingerprint is SHA256:oVfRCbqh5PIdTx16+wNmMS8CNnHTnQXsjlpybHmPVlY.\nAre you sure you want to continue connecting (yes\/no)? yes\nWarning: Permanently added '10.0.1.73' (ECDSA) to the list of known hosts.\nLinux rancher3 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64\n\nThe programs included with the Debian GNU\/Linux system are free software;\nthe exact distribution terms for each program are described in the\nindividual files in \/usr\/share\/doc\/*\/copyright.\n\nDebian GNU\/Linux comes with ABSOLUTELY NO WARRANTY, to the extent\npermitted by applicable law.\n<\/pre>\n<p>Once that is confirmed, we need to adjust the RKE cluster configuration file to include the new nodes. Currently the node section looks like this:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\n# please consult the documentation on how to configure custom RKE images.\nnodes:\n- address: 10.0.1.168\n  port: \"22\"\n  internal_address: \"\"\n  role:\n  - controlplane\n  - worker\n  - etcd\n  hostname_override: \"\"\n  user: rancher\n  docker_socket: \/var\/run\/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~\/.ssh\/id_rsa\n  ssh_cert: \"\"\n  ssh_cert_path: \"\"\n  labels: {}\n  taints: []\n<\/pre>\n<p>We need to add the two additional nodes. 
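<\/p>\n<p>If you don&#8217;t have the addresses at hand, every node can report its own pair via the EC2 instance metadata service. A small sketch (local-ipv4 and public-ipv4 are the standard metadata paths; the IMDS_URL variable is only parameterized here so the function can be exercised outside of EC2):<\/p>

```shell
#!/bin/sh
# Ask the EC2 instance metadata service for this node's addresses.
# IMDS_URL is overridable for testing; the real endpoint is the default.
IMDS_URL="${IMDS_URL:-http://169.254.169.254/latest/meta-data}"

instance_ips() {
  echo "internal: $(curl -s "$IMDS_URL/local-ipv4")"
  echo "public: $(curl -s "$IMDS_URL/public-ipv4")"
}
```

<p>Run on rancher2, for example, this should print the values that go into internal_address and address.<\/p>\n<p>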
As this setup is on EC2, you need to specify the public and the internal IP addresses:<br \/>\n<a href=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg\" alt=\"\" width=\"1682\" height=\"125\" class=\"aligncenter size-full wp-image-48130\" \/><\/a><\/p>\n<p>The node section in the yaml file looks like this (I am assuming that you are familiar with <a href=\"https:\/\/docs.aws.amazon.com\/AWSEC2\/latest\/UserGuide\/ec2-security-groups.html\" target=\"_blank\" rel=\"noopener\">security groups<\/a> and that traffic is allowed between the nodes):<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\nnodes:\n- address: 18.195.249.125\n  port: \"22\"\n  internal_address: \"10.0.1.168\"\n  role:\n  - controlplane\n  - worker\n  - etcd\n  hostname_override: \"rancher1\"\n  user: rancher\n  docker_socket: \/var\/run\/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~\/.ssh\/id_rsa\n  ssh_cert: \"\"\n  ssh_cert_path: \"\"\n  labels: {}\n  taints: []\n- address: 3.64.193.173\n  port: \"22\"\n  internal_address: \"10.0.1.253\"\n  role:\n  - controlplane\n  - worker\n  - etcd\n  hostname_override: \"rancher2\"\n  user: rancher\n  docker_socket: \/var\/run\/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~\/.ssh\/id_rsa\n  ssh_cert: \"\"\n  ssh_cert_path: \"\"\n  labels: {}\n  taints: []\n- address: 18.185.105.131\n  port: \"22\"\n  internal_address: \"10.0.1.73\"\n  role:\n  - controlplane\n  - worker\n  - etcd\n  hostname_override: \"rancher3\"\n  user: rancher\n  docker_socket: \/var\/run\/docker.sock\n  ssh_key: \"\"\n  ssh_key_path: ~\/.ssh\/id_rsa\n  ssh_cert: \"\"\n  ssh_cert_path: \"\"\n  labels: {}\n  taints: []\n<\/pre>\n<p>That&#8217;s all you need to do. 
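<\/p>\n<p>Before applying the change, a quick preflight can save a round of debugging: the configuration should list an odd number of etcd members, and every address must be reachable over SSH as the rancher user. A minimal sketch (the grep\/awk patterns assume the default cluster.yml layout shown above; the SSH probe is commented out so the function itself has no side effects):<\/p>

```shell
#!/bin/sh
# Preflight for an RKE cluster.yml: warn on an even etcd member count
# (quorum needs a majority) and list every node address for inspection.
preflight() {
  config="$1"
  etcd_count=$(grep -c '^  - etcd' "$config")
  if [ $(( etcd_count % 2 )) -eq 0 ]; then
    echo "warning: $etcd_count etcd members - use an odd number"
  fi
  awk '/^- address:/ {print $3}' "$config" | while read -r host; do
    echo "checking $host"
    # Uncomment to probe passwordless SSH and Docker on each node:
    # ssh -o BatchMode=yes "rancher@$host" docker version >/dev/null || echo "FAILED: $host"
  done
}
```

<p>With the three nodes above this prints one checking line per address and no warning.<\/p>\n<p>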
Use &#8220;rke up&#8221; to apply the changed configuration:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1; highlight: [1]\">\nrancher@rancher1:~$ rke up\nINFO[0000] Running RKE version: v1.1.15                 \nINFO[0000] Initiating Kubernetes cluster                \nINFO[0000] [dialer] Setup tunnel for host [3.64.193.173] \nINFO[0000] [dialer] Setup tunnel for host [18.185.105.131] \nINFO[0000] [dialer] Setup tunnel for host [18.195.249.125] \nINFO[0000] Checking if container [cluster-state-deployer] is running on host [3.64.193.173], try #1 \nINFO[0000] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0000] Starting container [cluster-state-deployer] on host [3.64.193.173], try #1 \nINFO[0000] [state] Successfully started [cluster-state-deployer] container on host [3.64.193.173] \nINFO[0000] Checking if container [cluster-state-deployer] is running on host [18.185.105.131], try #1 \nINFO[0000] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0000] Starting container [cluster-state-deployer] on host [18.185.105.131], try #1 \nINFO[0001] [state] Successfully started [cluster-state-deployer] container on host [18.185.105.131] \nINFO[0001] Checking if container [cluster-state-deployer] is running on host [18.195.249.125], try #1 \nINFO[0001] [certificates] Generating CA kubernetes certificates \nINFO[0001] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates \nINFO[0002] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates \nINFO[0002] [certificates] Generating Kubernetes API server certificates \nINFO[0003] [certificates] Generating Service account token key \nINFO[0003] [certificates] Generating Kube Controller certificates \nINFO[0003] [certificates] Generating Kube Scheduler certificates \nINFO[0003] [certificates] Generating Kube Proxy certificates \nINFO[0003] [certificates] Generating Node 
certificate   \nINFO[0003] [certificates] Generating admin certificates and kubeconfig \nINFO[0003] [certificates] Generating Kubernetes API server proxy client certificates \nINFO[0004] [certificates] Generating kube-etcd-10-0-1-168 certificate and key \nINFO[0004] [certificates] Generating kube-etcd-10-0-1-253 certificate and key \nINFO[0004] [certificates] Generating kube-etcd-10-0-1-73 certificate and key \nINFO[0005] Successfully Deployed state file at [.\/cluster.rkestate] \nINFO[0005] Building Kubernetes cluster                  \nINFO[0005] [dialer] Setup tunnel for host [18.185.105.131] \nINFO[0005] [dialer] Setup tunnel for host [18.195.249.125] \nINFO[0005] [dialer] Setup tunnel for host [3.64.193.173] \nINFO[0005] [network] Deploying port listener containers \nINFO[0005] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0005] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0005] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0005] Starting container [rke-etcd-port-listener] on host [18.185.105.131], try #1 \nINFO[0005] Starting container [rke-etcd-port-listener] on host [18.195.249.125], try #1 \nINFO[0005] Starting container [rke-etcd-port-listener] on host [3.64.193.173], try #1 \nINFO[0005] [network] Successfully started [rke-etcd-port-listener] container on host [18.185.105.131] \nINFO[0005] [network] Successfully started [rke-etcd-port-listener] container on host [18.195.249.125] \nINFO[0005] [network] Successfully started [rke-etcd-port-listener] container on host [3.64.193.173] \nINFO[0005] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0005] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0005] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0006] Starting container [rke-cp-port-listener] on host [18.195.249.125], try #1 \nINFO[0006] Starting container [rke-cp-port-listener] on host 
[18.185.105.131], try #1 \nINFO[0006] Starting container [rke-cp-port-listener] on host [3.64.193.173], try #1 \nINFO[0006] [network] Successfully started [rke-cp-port-listener] container on host [3.64.193.173] \nINFO[0006] [network] Successfully started [rke-cp-port-listener] container on host [18.185.105.131] \nINFO[0006] [network] Successfully started [rke-cp-port-listener] container on host [18.195.249.125] \nINFO[0006] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0006] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0006] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0006] Starting container [rke-worker-port-listener] on host [18.185.105.131], try #1 \nINFO[0006] Starting container [rke-worker-port-listener] on host [18.195.249.125], try #1 \nINFO[0006] Starting container [rke-worker-port-listener] on host [3.64.193.173], try #1 \nINFO[0006] [network] Successfully started [rke-worker-port-listener] container on host [3.64.193.173] \nINFO[0006] [network] Successfully started [rke-worker-port-listener] container on host [18.185.105.131] \nINFO[0006] [network] Successfully started [rke-worker-port-listener] container on host [18.195.249.125] \nINFO[0006] [network] Port listener containers deployed successfully \nINFO[0006] [network] Running etcd  etcd port checks  \nINFO[0006] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0006] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0006] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0007] Starting container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0007] Starting container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0007] Starting container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0007] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] \nINFO[0007] [network] Successfully started 
[rke-port-checker] container on host [18.185.105.131] \nINFO[0007] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] \nINFO[0007] Removing container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0007] Removing container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0008] Removing container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0008] [network] Running control plane -&gt; etcd port checks \nINFO[0008] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0008] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0008] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0008] Starting container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0008] Starting container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0008] Starting container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0008] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] \nINFO[0008] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] \nINFO[0008] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] \nINFO[0008] Removing container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0008] Removing container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0008] Removing container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0008] [network] Running control plane -&gt; worker port checks \nINFO[0008] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0008] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0008] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0009] Starting container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0009] Starting container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0009] Starting 
container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0009] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] \nINFO[0009] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] \nINFO[0009] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] \nINFO[0009] Removing container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0009] Removing container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0009] Removing container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0009] [network] Running workers -&gt; control plane port checks \nINFO[0009] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0009] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0009] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0009] Starting container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0009] Starting container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0009] Starting container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0009] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] \nINFO[0009] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] \nINFO[0009] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] \nINFO[0009] Removing container [rke-port-checker] on host [3.64.193.173], try #1 \nINFO[0009] Removing container [rke-port-checker] on host [18.185.105.131], try #1 \nINFO[0009] Removing container [rke-port-checker] on host [18.195.249.125], try #1 \nINFO[0009] [network] Checking KubeAPI port Control Plane hosts \nINFO[0009] [network] Removing port listener containers  \nINFO[0009] Removing container [rke-etcd-port-listener] on host [18.195.249.125], try #1 \nINFO[0009] Removing container [rke-etcd-port-listener] on host [18.185.105.131], 
try #1 \nINFO[0009] Removing container [rke-etcd-port-listener] on host [3.64.193.173], try #1 \nINFO[0010] [remove\/rke-etcd-port-listener] Successfully removed container on host [3.64.193.173] \nINFO[0010] [remove\/rke-etcd-port-listener] Successfully removed container on host [18.185.105.131] \nINFO[0010] [remove\/rke-etcd-port-listener] Successfully removed container on host [18.195.249.125] \nINFO[0010] Removing container [rke-cp-port-listener] on host [18.195.249.125], try #1 \nINFO[0010] Removing container [rke-cp-port-listener] on host [3.64.193.173], try #1 \nINFO[0010] Removing container [rke-cp-port-listener] on host [18.185.105.131], try #1 \nINFO[0010] [remove\/rke-cp-port-listener] Successfully removed container on host [18.185.105.131] \nINFO[0010] [remove\/rke-cp-port-listener] Successfully removed container on host [18.195.249.125] \nINFO[0010] [remove\/rke-cp-port-listener] Successfully removed container on host [3.64.193.173] \nINFO[0010] Removing container [rke-worker-port-listener] on host [18.195.249.125], try #1 \nINFO[0010] Removing container [rke-worker-port-listener] on host [18.185.105.131], try #1 \nINFO[0010] Removing container [rke-worker-port-listener] on host [3.64.193.173], try #1 \nINFO[0010] [remove\/rke-worker-port-listener] Successfully removed container on host [3.64.193.173] \nINFO[0010] [remove\/rke-worker-port-listener] Successfully removed container on host [18.185.105.131] \nINFO[0010] [remove\/rke-worker-port-listener] Successfully removed container on host [18.195.249.125] \nINFO[0010] [network] Port listener containers removed successfully \nINFO[0010] [certificates] Deploying kubernetes certificates to Cluster nodes \nINFO[0010] Checking if container [cert-deployer] is running on host [18.195.249.125], try #1 \nINFO[0010] Checking if container [cert-deployer] is running on host [3.64.193.173], try #1 \nINFO[0010] Checking if container [cert-deployer] is running on host [18.185.105.131], try #1 \nINFO[0010] Image 
[rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0010] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0010] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0011] Starting container [cert-deployer] on host [3.64.193.173], try #1 \nINFO[0011] Starting container [cert-deployer] on host [18.185.105.131], try #1 \nINFO[0011] Starting container [cert-deployer] on host [18.195.249.125], try #1 \nINFO[0011] Checking if container [cert-deployer] is running on host [3.64.193.173], try #1 \nINFO[0011] Checking if container [cert-deployer] is running on host [18.185.105.131], try #1 \nINFO[0011] Checking if container [cert-deployer] is running on host [18.195.249.125], try #1 \nINFO[0016] Checking if container [cert-deployer] is running on host [3.64.193.173], try #1 \nINFO[0016] Removing container [cert-deployer] on host [3.64.193.173], try #1 \nINFO[0016] Checking if container [cert-deployer] is running on host [18.185.105.131], try #1 \nINFO[0016] Removing container [cert-deployer] on host [18.185.105.131], try #1 \nINFO[0016] Checking if container [cert-deployer] is running on host [18.195.249.125], try #1 \nINFO[0016] Removing container [cert-deployer] on host [18.195.249.125], try #1 \nINFO[0016] [reconcile] Rebuilding and updating local kube config \nINFO[0016] Successfully Deployed local admin kubeconfig at [.\/kube_config_cluster.yml] \nINFO[0016] Successfully Deployed local admin kubeconfig at [.\/kube_config_cluster.yml] \nINFO[0016] Successfully Deployed local admin kubeconfig at [.\/kube_config_cluster.yml] \nINFO[0016] [certificates] Successfully deployed kubernetes certificates to Cluster nodes \nINFO[0016] [file-deploy] Deploying file [\/etc\/kubernetes\/audit-policy.yaml] to node [18.195.249.125] \nINFO[0016] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0016] Starting container [file-deployer] on host [18.195.249.125], try #1 \nINFO[0017] Successfully started 
[file-deployer] container on host [18.195.249.125] \nINFO[0017] Waiting for [file-deployer] container to exit on host [18.195.249.125] \nINFO[0017] Waiting for [file-deployer] container to exit on host [18.195.249.125] \nINFO[0017] Container [file-deployer] is still running on host [18.195.249.125]: stderr: [], stdout: [] \nINFO[0018] Waiting for [file-deployer] container to exit on host [18.195.249.125] \nINFO[0018] Removing container [file-deployer] on host [18.195.249.125], try #1 \nINFO[0018] [remove\/file-deployer] Successfully removed container on host [18.195.249.125] \nINFO[0018] [file-deploy] Deploying file [\/etc\/kubernetes\/audit-policy.yaml] to node [3.64.193.173] \nINFO[0018] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0018] Starting container [file-deployer] on host [3.64.193.173], try #1 \nINFO[0018] Successfully started [file-deployer] container on host [3.64.193.173] \nINFO[0018] Waiting for [file-deployer] container to exit on host [3.64.193.173] \nINFO[0018] Waiting for [file-deployer] container to exit on host [3.64.193.173] \nINFO[0018] Container [file-deployer] is still running on host [3.64.193.173]: stderr: [], stdout: [] \nINFO[0019] Waiting for [file-deployer] container to exit on host [3.64.193.173] \nINFO[0019] Removing container [file-deployer] on host [3.64.193.173], try #1 \nINFO[0019] [remove\/file-deployer] Successfully removed container on host [3.64.193.173] \nINFO[0019] [file-deploy] Deploying file [\/etc\/kubernetes\/audit-policy.yaml] to node [18.185.105.131] \nINFO[0019] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0019] Starting container [file-deployer] on host [18.185.105.131], try #1 \nINFO[0020] Successfully started [file-deployer] container on host [18.185.105.131] \nINFO[0020] Waiting for [file-deployer] container to exit on host [18.185.105.131] \nINFO[0020] Waiting for [file-deployer] container to exit on host [18.185.105.131] \nINFO[0020] Container 
[file-deployer] is still running on host [18.185.105.131]: stderr: [], stdout: [] \nINFO[0021] Waiting for [file-deployer] container to exit on host [18.185.105.131] \nINFO[0021] Removing container [file-deployer] on host [18.185.105.131], try #1 \nINFO[0021] [remove\/file-deployer] Successfully removed container on host [18.185.105.131] \nINFO[0021] [\/etc\/kubernetes\/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes \nINFO[0021] [reconcile] Reconciling cluster state        \nINFO[0021] [reconcile] This is newly generated cluster  \nINFO[0021] Pre-pulling kubernetes images                \nINFO[0021] Pulling image [rancher\/hyperkube:v1.18.16-rancher1] on host [18.185.105.131], try #1 \nINFO[0021] Pulling image [rancher\/hyperkube:v1.18.16-rancher1] on host [3.64.193.173], try #1 \nINFO[0021] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] \nINFO[0047] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] \nINFO[0047] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] \nINFO[0047] Kubernetes images pulled successfully        \nINFO[0047] [etcd] Building up etcd plane..              
\nINFO[0047] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0047] Starting container [etcd-fix-perm] on host [18.195.249.125], try #1 \nINFO[0047] Successfully started [etcd-fix-perm] container on host [18.195.249.125] \nINFO[0047] Waiting for [etcd-fix-perm] container to exit on host [18.195.249.125] \nINFO[0047] Waiting for [etcd-fix-perm] container to exit on host [18.195.249.125] \nINFO[0047] Container [etcd-fix-perm] is still running on host [18.195.249.125]: stderr: [], stdout: [] \nINFO[0048] Waiting for [etcd-fix-perm] container to exit on host [18.195.249.125] \nINFO[0048] Removing container [etcd-fix-perm] on host [18.195.249.125], try #1 \nINFO[0048] [remove\/etcd-fix-perm] Successfully removed container on host [18.195.249.125] \nINFO[0048] Image [rancher\/coreos-etcd:v3.4.3-rancher1] exists on host [18.195.249.125] \nINFO[0048] Starting container [etcd] on host [18.195.249.125], try #1 \nINFO[0049] [etcd] Successfully started [etcd] container on host [18.195.249.125] \nINFO[0049] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [18.195.249.125] \nINFO[0049] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0049] Starting container [etcd-rolling-snapshots] on host [18.195.249.125], try #1 \nINFO[0049] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.195.249.125] \nINFO[0054] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0054] Starting container [rke-bundle-cert] on host [18.195.249.125], try #1 \nINFO[0054] [certificates] Successfully started [rke-bundle-cert] container on host [18.195.249.125] \nINFO[0054] Waiting for [rke-bundle-cert] container to exit on host [18.195.249.125] \nINFO[0054] Container [rke-bundle-cert] is still running on host [18.195.249.125]: stderr: [], stdout: [] \nINFO[0055] Waiting for [rke-bundle-cert] container to exit on host [18.195.249.125] \nINFO[0055] [certificates] successfully saved certificate 
bundle [\/opt\/rke\/etcd-snapshots\/\/pki.bundle.tar.gz] on host [18.195.249.125] \nINFO[0055] Removing container [rke-bundle-cert] on host [18.195.249.125], try #1 \nINFO[0056] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0056] Starting container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0056] [etcd] Successfully started [rke-log-linker] container on host [18.195.249.125] \nINFO[0056] Removing container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0056] [remove\/rke-log-linker] Successfully removed container on host [18.195.249.125] \nINFO[0056] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0062] Starting container [etcd-fix-perm] on host [3.64.193.173], try #1 \nINFO[0062] Successfully started [etcd-fix-perm] container on host [3.64.193.173] \nINFO[0062] Waiting for [etcd-fix-perm] container to exit on host [3.64.193.173] \nINFO[0062] Waiting for [etcd-fix-perm] container to exit on host [3.64.193.173] \nINFO[0062] Container [etcd-fix-perm] is still running on host [3.64.193.173]: stderr: [], stdout: [] \nINFO[0063] Waiting for [etcd-fix-perm] container to exit on host [3.64.193.173] \nINFO[0063] Removing container [etcd-fix-perm] on host [3.64.193.173], try #1 \nINFO[0063] [remove\/etcd-fix-perm] Successfully removed container on host [3.64.193.173] \nINFO[0063] Pulling image [rancher\/coreos-etcd:v3.4.3-rancher1] on host [3.64.193.173], try #1 \nINFO[0067] Image [rancher\/coreos-etcd:v3.4.3-rancher1] exists on host [3.64.193.173] \nINFO[0067] Starting container [etcd] on host [3.64.193.173], try #1 \nINFO[0067] [etcd] Successfully started [etcd] container on host [3.64.193.173] \nINFO[0067] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [3.64.193.173] \nINFO[0067] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0067] Starting container [etcd-rolling-snapshots] on host [3.64.193.173], try #1 \nINFO[0067] [etcd] Successfully started 
[etcd-rolling-snapshots] container on host [3.64.193.173] \nINFO[0072] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0073] Starting container [rke-bundle-cert] on host [3.64.193.173], try #1 \nINFO[0073] [certificates] Successfully started [rke-bundle-cert] container on host [3.64.193.173] \nINFO[0073] Waiting for [rke-bundle-cert] container to exit on host [3.64.193.173] \nINFO[0073] Container [rke-bundle-cert] is still running on host [3.64.193.173]: stderr: [], stdout: [] \nINFO[0074] Waiting for [rke-bundle-cert] container to exit on host [3.64.193.173] \nINFO[0074] [certificates] successfully saved certificate bundle [\/opt\/rke\/etcd-snapshots\/\/pki.bundle.tar.gz] on host [3.64.193.173] \nINFO[0074] Removing container [rke-bundle-cert] on host [3.64.193.173], try #1 \nINFO[0074] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0074] Starting container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0074] [etcd] Successfully started [rke-log-linker] container on host [3.64.193.173] \nINFO[0074] Removing container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0075] [remove\/rke-log-linker] Successfully removed container on host [3.64.193.173] \nINFO[0075] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0078] Starting container [etcd-fix-perm] on host [18.185.105.131], try #1 \nINFO[0079] Successfully started [etcd-fix-perm] container on host [18.185.105.131] \nINFO[0079] Waiting for [etcd-fix-perm] container to exit on host [18.185.105.131] \nINFO[0079] Waiting for [etcd-fix-perm] container to exit on host [18.185.105.131] \nINFO[0079] Container [etcd-fix-perm] is still running on host [18.185.105.131]: stderr: [], stdout: [] \nINFO[0080] Waiting for [etcd-fix-perm] container to exit on host [18.185.105.131] \nINFO[0080] Removing container [etcd-fix-perm] on host [18.185.105.131], try #1 \nINFO[0080] [remove\/etcd-fix-perm] Successfully removed container on host 
[18.185.105.131] \nINFO[0080] Pulling image [rancher\/coreos-etcd:v3.4.3-rancher1] on host [18.185.105.131], try #1 \nINFO[0084] Image [rancher\/coreos-etcd:v3.4.3-rancher1] exists on host [18.185.105.131] \nINFO[0084] Starting container [etcd] on host [18.185.105.131], try #1 \nINFO[0084] [etcd] Successfully started [etcd] container on host [18.185.105.131] \nINFO[0084] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [18.185.105.131] \nINFO[0084] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0084] Starting container [etcd-rolling-snapshots] on host [18.185.105.131], try #1 \nINFO[0084] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.185.105.131] \nINFO[0089] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0089] Starting container [rke-bundle-cert] on host [18.185.105.131], try #1 \nINFO[0090] [certificates] Successfully started [rke-bundle-cert] container on host [18.185.105.131] \nINFO[0090] Waiting for [rke-bundle-cert] container to exit on host [18.185.105.131] \nINFO[0090] Container [rke-bundle-cert] is still running on host [18.185.105.131]: stderr: [], stdout: [] \nINFO[0091] Waiting for [rke-bundle-cert] container to exit on host [18.185.105.131] \nINFO[0091] [certificates] successfully saved certificate bundle [\/opt\/rke\/etcd-snapshots\/\/pki.bundle.tar.gz] on host [18.185.105.131] \nINFO[0091] Removing container [rke-bundle-cert] on host [18.185.105.131], try #1 \nINFO[0091] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0091] Starting container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0091] [etcd] Successfully started [rke-log-linker] container on host [18.185.105.131] \nINFO[0091] Removing container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0092] [remove\/rke-log-linker] Successfully removed container on host [18.185.105.131] \nINFO[0092] [etcd] Successfully started etcd plane.. 
Checking etcd cluster health \nINFO[0092] [etcd] etcd host [18.195.249.125] reported healthy=true \nINFO[0092] [controlplane] Building up Controller Plane.. \nINFO[0092] Checking if container [service-sidekick] is running on host [18.195.249.125], try #1 \nINFO[0092] Checking if container [service-sidekick] is running on host [18.185.105.131], try #1 \nINFO[0092] Checking if container [service-sidekick] is running on host [3.64.193.173], try #1 \nINFO[0092] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0092] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0092] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0092] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] \nINFO[0092] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] \nINFO[0092] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] \nINFO[0092] Starting container [kube-apiserver] on host [18.185.105.131], try #1 \nINFO[0092] Starting container [kube-apiserver] on host [3.64.193.173], try #1 \nINFO[0092] Starting container [kube-apiserver] on host [18.195.249.125], try #1 \nINFO[0092] [controlplane] Successfully started [kube-apiserver] container on host [18.185.105.131] \nINFO[0092] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.185.105.131] \nINFO[0092] [controlplane] Successfully started [kube-apiserver] container on host [18.195.249.125] \nINFO[0092] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.195.249.125] \nINFO[0092] [controlplane] Successfully started [kube-apiserver] container on host [3.64.193.173] \nINFO[0092] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [3.64.193.173] \nINFO[0102] [healthcheck] service [kube-apiserver] on host [18.185.105.131] is healthy \nINFO[0102] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0102] [healthcheck] service 
[kube-apiserver] on host [3.64.193.173] is healthy \nINFO[0102] [healthcheck] service [kube-apiserver] on host [18.195.249.125] is healthy \nINFO[0102] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0102] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0102] Starting container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0102] Starting container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0102] Starting container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.185.105.131] \nINFO[0103] Removing container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [3.64.193.173] \nINFO[0103] Removing container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.195.249.125] \nINFO[0103] Removing container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0103] [remove\/rke-log-linker] Successfully removed container on host [18.185.105.131] \nINFO[0103] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] \nINFO[0103] [remove\/rke-log-linker] Successfully removed container on host [3.64.193.173] \nINFO[0103] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] \nINFO[0103] [remove\/rke-log-linker] Successfully removed container on host [18.195.249.125] \nINFO[0103] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] \nINFO[0103] Starting container [kube-controller-manager] on host [18.185.105.131], try #1 \nINFO[0103] Starting container [kube-controller-manager] on host [3.64.193.173], try #1 \nINFO[0103] Starting container [kube-controller-manager] on host [18.195.249.125], try #1 \nINFO[0103] [controlplane] Successfully started [kube-controller-manager] container on host 
[18.185.105.131] \nINFO[0103] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.185.105.131] \nINFO[0103] [controlplane] Successfully started [kube-controller-manager] container on host [3.64.193.173] \nINFO[0103] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [3.64.193.173] \nINFO[0103] [controlplane] Successfully started [kube-controller-manager] container on host [18.195.249.125] \nINFO[0103] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.195.249.125] \nINFO[0108] [healthcheck] service [kube-controller-manager] on host [18.185.105.131] is healthy \nINFO[0108] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0108] [healthcheck] service [kube-controller-manager] on host [3.64.193.173] is healthy \nINFO[0108] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0108] [healthcheck] service [kube-controller-manager] on host [18.195.249.125] is healthy \nINFO[0108] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0109] Starting container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0109] Starting container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0109] Starting container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0109] [controlplane] Successfully started [rke-log-linker] container on host [18.185.105.131] \nINFO[0109] [controlplane] Successfully started [rke-log-linker] container on host [3.64.193.173] \nINFO[0109] Removing container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0109] Removing container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0109] [controlplane] Successfully started [rke-log-linker] container on host [18.195.249.125] \nINFO[0109] Removing container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0109] [remove\/rke-log-linker] Successfully removed container on host [3.64.193.173] \nINFO[0109] Image 
[rancher\/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] \nINFO[0109] [remove\/rke-log-linker] Successfully removed container on host [18.185.105.131] \nINFO[0109] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] \nINFO[0109] Starting container [kube-scheduler] on host [3.64.193.173], try #1 \nINFO[0109] [remove\/rke-log-linker] Successfully removed container on host [18.195.249.125] \nINFO[0109] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] \nINFO[0109] Starting container [kube-scheduler] on host [18.185.105.131], try #1 \nINFO[0109] Starting container [kube-scheduler] on host [18.195.249.125], try #1 \nINFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [3.64.193.173] \nINFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [3.64.193.173] \nINFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [18.185.105.131] \nINFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.185.105.131] \nINFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [18.195.249.125] \nINFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.195.249.125] \nINFO[0115] [healthcheck] service [kube-scheduler] on host [3.64.193.173] is healthy \nINFO[0115] [healthcheck] service [kube-scheduler] on host [18.185.105.131] is healthy \nINFO[0115] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0115] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0115] [healthcheck] service [kube-scheduler] on host [18.195.249.125] is healthy \nINFO[0115] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0115] Starting container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0115] Starting container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0115] Starting container [rke-log-linker] on host 
[18.195.249.125], try #1 \nINFO[0115] [controlplane] Successfully started [rke-log-linker] container on host [3.64.193.173] \nINFO[0115] Removing container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0115] [controlplane] Successfully started [rke-log-linker] container on host [18.185.105.131] \nINFO[0115] Removing container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0115] [controlplane] Successfully started [rke-log-linker] container on host [18.195.249.125] \nINFO[0115] Removing container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0115] [remove\/rke-log-linker] Successfully removed container on host [3.64.193.173] \nINFO[0115] [remove\/rke-log-linker] Successfully removed container on host [18.185.105.131] \nINFO[0116] [remove\/rke-log-linker] Successfully removed container on host [18.195.249.125] \nINFO[0116] [controlplane] Successfully started Controller Plane.. \nINFO[0116] [authz] Creating rke-job-deployer ServiceAccount \nINFO[0116] [authz] rke-job-deployer ServiceAccount created successfully \nINFO[0116] [authz] Creating system:node ClusterRoleBinding \nINFO[0116] [authz] system:node ClusterRoleBinding created successfully \nINFO[0116] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding \nINFO[0116] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully \nINFO[0116] Successfully Deployed state file at [.\/cluster.rkestate] \nINFO[0116] [state] Saving full cluster state to Kubernetes \nINFO[0116] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state \nINFO[0116] [worker] Building up Worker Plane..          
\nINFO[0116] Checking if container [service-sidekick] is running on host [18.185.105.131], try #1 \nINFO[0116] Checking if container [service-sidekick] is running on host [18.195.249.125], try #1 \nINFO[0116] Checking if container [service-sidekick] is running on host [3.64.193.173], try #1 \nINFO[0116] [sidekick] Sidekick container already created on host [3.64.193.173] \nINFO[0116] [sidekick] Sidekick container already created on host [18.185.105.131] \nINFO[0116] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] \nINFO[0116] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] \nINFO[0116] [sidekick] Sidekick container already created on host [18.195.249.125] \nINFO[0116] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] \nINFO[0116] Starting container [kubelet] on host [3.64.193.173], try #1 \nINFO[0116] Starting container [kubelet] on host [18.185.105.131], try #1 \nINFO[0116] Starting container [kubelet] on host [18.195.249.125], try #1 \nINFO[0116] [worker] Successfully started [kubelet] container on host [3.64.193.173] \nINFO[0116] [healthcheck] Start Healthcheck on service [kubelet] on host [3.64.193.173] \nINFO[0116] [worker] Successfully started [kubelet] container on host [18.185.105.131] \nINFO[0116] [healthcheck] Start Healthcheck on service [kubelet] on host [18.185.105.131] \nINFO[0116] [worker] Successfully started [kubelet] container on host [18.195.249.125] \nINFO[0116] [healthcheck] Start Healthcheck on service [kubelet] on host [18.195.249.125] \nINFO[0121] [healthcheck] service [kubelet] on host [3.64.193.173] is healthy \nINFO[0121] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0121] [healthcheck] service [kubelet] on host [18.185.105.131] is healthy \nINFO[0121] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0121] [healthcheck] service [kubelet] on host [18.195.249.125] is healthy \nINFO[0121] Image 
[rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0121] Starting container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0121] Starting container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0121] Starting container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0121] [worker] Successfully started [rke-log-linker] container on host [3.64.193.173] \nINFO[0121] Removing container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0121] [worker] Successfully started [rke-log-linker] container on host [18.185.105.131] \nINFO[0121] Removing container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0122] [worker] Successfully started [rke-log-linker] container on host [18.195.249.125] \nINFO[0122] Removing container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0122] [remove\/rke-log-linker] Successfully removed container on host [3.64.193.173] \nINFO[0122] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] \nINFO[0122] [remove\/rke-log-linker] Successfully removed container on host [18.185.105.131] \nINFO[0122] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] \nINFO[0122] Starting container [kube-proxy] on host [3.64.193.173], try #1 \nINFO[0122] [remove\/rke-log-linker] Successfully removed container on host [18.195.249.125] \nINFO[0122] Starting container [kube-proxy] on host [18.185.105.131], try #1 \nINFO[0122] Image [rancher\/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] \nINFO[0122] [worker] Successfully started [kube-proxy] container on host [3.64.193.173] \nINFO[0122] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.64.193.173] \nINFO[0122] Starting container [kube-proxy] on host [18.195.249.125], try #1 \nINFO[0122] [worker] Successfully started [kube-proxy] container on host [18.185.105.131] \nINFO[0122] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.185.105.131] \nINFO[0122] [worker] 
Successfully started [kube-proxy] container on host [18.195.249.125] \nINFO[0122] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.195.249.125] \nINFO[0127] [healthcheck] service [kube-proxy] on host [3.64.193.173] is healthy \nINFO[0127] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0127] [healthcheck] service [kube-proxy] on host [18.185.105.131] is healthy \nINFO[0127] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0127] Starting container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0127] [healthcheck] service [kube-proxy] on host [18.195.249.125] is healthy \nINFO[0127] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0127] Starting container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0127] Starting container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0128] [worker] Successfully started [rke-log-linker] container on host [3.64.193.173] \nINFO[0128] Removing container [rke-log-linker] on host [3.64.193.173], try #1 \nINFO[0128] [worker] Successfully started [rke-log-linker] container on host [18.185.105.131] \nINFO[0128] Removing container [rke-log-linker] on host [18.185.105.131], try #1 \nINFO[0128] [worker] Successfully started [rke-log-linker] container on host [18.195.249.125] \nINFO[0128] Removing container [rke-log-linker] on host [18.195.249.125], try #1 \nINFO[0128] [remove\/rke-log-linker] Successfully removed container on host [3.64.193.173] \nINFO[0128] [remove\/rke-log-linker] Successfully removed container on host [18.185.105.131] \nINFO[0128] [remove\/rke-log-linker] Successfully removed container on host [18.195.249.125] \nINFO[0128] [worker] Successfully started Worker Plane.. 
\nINFO[0128] Image [rancher\/rke-tools:v0.1.72] exists on host [3.64.193.173] \nINFO[0128] Image [rancher\/rke-tools:v0.1.72] exists on host [18.185.105.131] \nINFO[0128] Image [rancher\/rke-tools:v0.1.72] exists on host [18.195.249.125] \nINFO[0128] Starting container [rke-log-cleaner] on host [18.185.105.131], try #1 \nINFO[0128] Starting container [rke-log-cleaner] on host [18.195.249.125], try #1 \nINFO[0128] Starting container [rke-log-cleaner] on host [3.64.193.173], try #1 \nINFO[0129] [cleanup] Successfully started [rke-log-cleaner] container on host [18.185.105.131] \nINFO[0129] Removing container [rke-log-cleaner] on host [18.185.105.131], try #1 \nINFO[0129] [cleanup] Successfully started [rke-log-cleaner] container on host [18.195.249.125] \nINFO[0129] Removing container [rke-log-cleaner] on host [18.195.249.125], try #1 \nINFO[0129] [cleanup] Successfully started [rke-log-cleaner] container on host [3.64.193.173] \nINFO[0129] Removing container [rke-log-cleaner] on host [3.64.193.173], try #1 \nINFO[0129] [remove\/rke-log-cleaner] Successfully removed container on host [18.185.105.131] \nINFO[0129] [remove\/rke-log-cleaner] Successfully removed container on host [18.195.249.125] \nINFO[0129] [remove\/rke-log-cleaner] Successfully removed container on host [3.64.193.173] \nINFO[0129] [sync] Syncing nodes Labels and Taints       \nINFO[0129] [sync] Successfully synced nodes Labels and Taints \nINFO[0129] [network] Setting up network plugin: canal   \nINFO[0129] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes \nINFO[0129] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes \nINFO[0129] [addons] Executing deploy job rke-network-plugin \nINFO[0134] [addons] Setting up coredns                  \nINFO[0134] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes \nINFO[0134] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes \nINFO[0134] [addons] Executing deploy job 
rke-coredns-addon \nINFO[0139] [addons] CoreDNS deployed successfully       \nINFO[0139] [dns] DNS provider coredns deployed successfully \nINFO[0139] [addons] Setting up Metrics Server           \nINFO[0139] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes \nINFO[0139] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes \nINFO[0139] [addons] Executing deploy job rke-metrics-addon \nINFO[0144] [addons] Metrics Server deployed successfully \nINFO[0144] [ingress] Setting up nginx ingress controller \nINFO[0144] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes \nINFO[0144] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes \nINFO[0144] [addons] Executing deploy job rke-ingress-controller \nINFO[0149] [ingress] ingress controller nginx deployed successfully \nINFO[0149] [addons] Setting up user addons              \nINFO[0149] [addons] no user addons defined              \nINFO[0149] Finished building Kubernetes cluster successfully\n<\/pre>\n<p>If all went fine, you should have a three-node cluster after a few minutes:<\/p>\n<pre class=\"brush: bash; gutter: true; first-line: 1\">\nrancher@rancher1:~$ export KUBECONFIG=kube_config_cluster.yml \nrancher@rancher1:~$ kubectl get nodes -o wide\nNAME       STATUS   ROLES                      AGE     VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION          CONTAINER-RUNTIME\nrancher2    Ready    controlplane,etcd,worker   4m51s   v1.18.16   10.0.1.253    &lt;none&gt;        Debian GNU\/Linux 10 (buster)   4.19.0-14-cloud-amd64   docker:\/\/19.3.15\nrancher1    Ready    controlplane,etcd,worker   4m51s   v1.18.16   10.0.1.168    &lt;none&gt;        Debian GNU\/Linux 10 (buster)   4.19.0-14-cloud-amd64   docker:\/\/19.3.15\nrancher3    Ready    controlplane,etcd,worker   4m51s   v1.18.16   10.0.1.73     &lt;none&gt;        Debian GNU\/Linux 10 (buster)   4.19.0-14-cloud-amd64   docker:\/\/19.3.15\n<\/pre>\n<p>Again, very 
easy to set up. We still do not have Rancher running, just RKE in a three-node configuration. The installation of Rancher itself will be the topic of the next post.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the last post we&#8217;ve brought up a RKE Kubernetes cluster on a single node. While that is cool for demonstration purposes or testing, this is nothing for a real life setup. Running the control pane, the etcd nodes and the worker nodes all on one node, is nothing you want to do usually, as [&hellip;]<\/p>\n","protected":false},"author":29,"featured_media":15855,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1865,229,1504,1522],"tags":[89,2277,309],"type_dbi":[],"class_list":["post-15854","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aws","category-database-administration-monitoring","category-docker","category-kubernetes","tag-kubernetes","tag-ranger","tag-suse"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.4) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes\" \/>\n<meta property=\"og:description\" content=\"In the last post we&#8217;ve brought up a RKE Kubernetes cluster on a single node. While that is cool for demonstration purposes or testing, this is nothing for a real life setup. 
Running the control pane, the etcd nodes and the worker nodes all on one node, is nothing you want to do usually, as [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2021-03-09T06:07:26+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1682\" \/>\n\t<meta property=\"og:image:height\" content=\"125\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Daniel Westermann\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@westermanndanie\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Westermann\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"26 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/\"},\"author\":{\"name\":\"Daniel Westermann\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"headline\":\"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes\",\"datePublished\":\"2021-03-09T06:07:26+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/\"},\"wordCount\":332,\"commentCount\":0,\"image\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/rancher2.jpg\",\"keywords\":[\"kubernetes\",\"Ranger\",\"SuSE\"],\"articleSection\":[\"AWS\",\"Database Administration &amp; Monitoring\",\"Docker\",\"Kubernetes\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/\",\"name\":\"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes - dbi 
Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/rancher2.jpg\",\"datePublished\":\"2021-03-09T06:07:26+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/8d08e9bd996a89bd75c0286cbabf3c66\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#primaryimage\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/rancher2.jpg\",\"contentUrl\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/wp-content\\\/uploads\\\/sites\\\/2\\\/2022\\\/04\\\/rancher2.jpg\",\"width\":1682,\"height\":125},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/rancher-up-and-running-on-ec2-2-three-nodes\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Rancher, up and running, on EC2 \u2013 2 \u2013 Three 
nodes\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#website\",\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/#\\\/schema\\\/person\\\/8d08e9bd996a89bd75c0286cbabf3c66\",\"name\":\"Daniel Westermann\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g\",\"caption\":\"Daniel Westermann\"},\"description\":\"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\\\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. 
He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developper &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His branch-related experience mainly covers the pharma industry, the financial sector, energy, lottery and telecommunications.\",\"sameAs\":[\"https:\\\/\\\/x.com\\\/westermanndanie\"],\"url\":\"https:\\\/\\\/www.dbi-services.com\\\/blog\\\/author\\\/daniel-westermann\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/","og_locale":"en_US","og_type":"article","og_title":"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes","og_description":"In the last post we&#8217;ve brought up a RKE Kubernetes cluster on a single node. While that is cool for demonstration purposes or testing, this is nothing for a real life setup. 
Running the control pane, the etcd nodes and the worker nodes all on one node, is nothing you want to do usually, as [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/","og_site_name":"dbi Blog","article_published_time":"2021-03-09T06:07:26+00:00","og_image":[{"width":1682,"height":125,"url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg","type":"image\/jpeg"}],"author":"Daniel Westermann","twitter_card":"summary_large_image","twitter_creator":"@westermanndanie","twitter_misc":{"Written by":"Daniel Westermann","Est. reading time":"26 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/"},"author":{"name":"Daniel Westermann","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"headline":"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes","datePublished":"2021-03-09T06:07:26+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/"},"wordCount":332,"commentCount":0,"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg","keywords":["kubernetes","Ranger","SuSE"],"articleSection":["AWS","Database Administration &amp; 
Monitoring","Docker","Kubernetes"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/","url":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/","name":"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes - dbi Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#primaryimage"},"image":{"@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#primaryimage"},"thumbnailUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg","datePublished":"2021-03-09T06:07:26+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#primaryimage","url":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg","contentUrl":"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2022\/04\/rancher2.jpg","width":1682,"height":125},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/rancher-up-and-running-on-ec2-2-three-nodes\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":
"ListItem","position":2,"name":"Rancher, up and running, on EC2 \u2013 2 \u2013 Three nodes"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/8d08e9bd996a89bd75c0286cbabf3c66","name":"Daniel Westermann","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/31350ceeecb1dd8986339a29bf040d4cd3cd087d410deccd8f55234466d6c317?s=96&d=mm&r=g","caption":"Daniel Westermann"},"description":"Daniel Westermann is Principal Consultant and Technology Leader Open Infrastructure at dbi services. He has more than 15 years of experience in management, engineering and optimization of databases and infrastructures, especially on Oracle and PostgreSQL. Since the beginning of his career, he has specialized in Oracle Technologies and is Oracle Certified Professional 12c and Oracle Certified Expert RAC\/GridInfra. Over time, Daniel has become increasingly interested in open source technologies, becoming \u201cTechnology Leader Open Infrastructure\u201d and PostgreSQL expert. \u00a0Based on community or EnterpriseDB tools, he develops and installs complex high available solutions with PostgreSQL. He is also a certified PostgreSQL Plus 9.0 Professional and a Postgres Advanced Server 9.4 Professional. 
He is a regular speaker at PostgreSQL conferences in Switzerland and Europe. Today Daniel is also supporting our customers on AWS services such as AWS RDS, database migrations into the cloud, EC2 and automated infrastructure management with AWS SSM (System Manager). He is a certified AWS Solutions Architect Professional. Prior to dbi services, Daniel was Management System Engineer at LC SYSTEMS-Engineering AG in Basel. Before that, he worked as Oracle Developer &amp;\u00a0Project Manager at Delta Energy Solutions AG in Basel (today Powel AG). Daniel holds a diploma in Business Informatics (DHBW, Germany). His industry experience mainly covers pharma, the financial sector, energy, lottery and telecommunications.","sameAs":["https:\/\/x.com\/westermanndanie"],"url":"https:\/\/www.dbi-services.com\/blog\/author\/daniel-westermann\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/15854","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/29"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=15854"}],"version-history":[{"count":0,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/15854\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media\/15855"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=15854"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=15854"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=15854"},{"taxo
nomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=15854"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}