Rancher RKE Example Configuration
- Originally Written: August, 2023
Note
This is just an example of an RKE YAML configuration I've used in the past, in case you need to bring up a simple Rancher cluster. You could build everything in Ansible or a similar tool, but I've found it's sometimes quicker and easier just to have a basic YAML config and a couple of setup instructions.
Pre-requisites
- The following script was used to install some tools on my laptop: `kubectl` provides Kubernetes cluster access and management, and `rke` is used to deploy and manage the Rancher (RKE) cluster on the nodes
#!/bin/bash
echo '> Installing Kubectl'
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client
echo '> Installing RKE'
curl -s https://api.github.com/repos/rancher/rke/releases/latest | grep download_url | grep amd64 | cut -d '"' -f 4 | wget -qi -
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
rke --version
- I ran this configuration with 3 x control nodes and 6 x worker nodes
- The nodes were Ubuntu VMs which were created prior to running the `rke up` command
- The VMs could be built manually or using a tool such as HashiCorp Packer
- RKE uses SSH to access the nodes and provision Kubernetes. You can generate an `ed25519` key using the command below and then copy this key to each of the nodes. I find it easier to build the key once and add it to a VM template which can be cloned, rather than copying it across to every node. However, I've also included a script to help copy it to multiple places if needed
$ ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/my_user/.ssh/id_ed25519): my_cluster
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in my_cluster
Your public key has been saved in my_cluster.pub
PasswordAuthentication
I configured the `/etc/ssh/sshd_config.d/*.conf` and `/etc/ssh/sshd_config` config files on the Rancher VMs with `PasswordAuthentication yes` so that when I run the script I can initially log in with my user account and have the Rancher `my_cluster` key copied across to the nodes.
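As an illustration, a drop-in along the following lines enables password logins until the key has been copied across; the exact drop-in file name is just an example:
```bash
# sshd uses the first value it finds for an option, so name the drop-in to sort
# before any cloud-init drop-ins (the file name here is illustrative)
echo 'PasswordAuthentication yes' | sudo tee /etc/ssh/sshd_config.d/00-password-auth.conf
# Restart SSH so the change takes effect
sudo systemctl restart ssh
```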
- I have a file called `rancher_nodes` which contains the Rancher VM IP addresses
10.113.105.15
10.113.105.16
10.113.105.17
10.113.105.18
10.113.105.19
10.113.105.20
10.113.105.21
10.113.105.22
10.113.105.23
- This script copies the newly created key to the Rancher nodes. As already mentioned, when creating the VM initially it's probably easier to perform this step once and then just clone the VM to create the nodes.
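A minimal sketch of such a script, assuming the key pair generated above is stored at `~/.ssh/my_cluster` and the nodes use the `ubuntu` user from `cluster.yml`:
```bash
#!/bin/bash
# Copy the my_cluster public key to every node listed in the rancher_nodes file
while read -r node; do
  echo "> Copying key to ${node}"
  ssh-copy-id -i ~/.ssh/my_cluster.pub "ubuntu@${node}"
done < rancher_nodes
```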
Config Files
Rancher cluster.yml
nodes:
- hostname_override: control-1
address: 10.113.105.15
port: "22"
internal_address: ""
role:
- controlplane
- etcd
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: control-2
address: 10.113.105.16
port: "22"
internal_address: ""
role:
- controlplane
- etcd
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: control-3
address: 10.113.105.17
port: "22"
internal_address: ""
role:
- controlplane
- etcd
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: worker-1
address: 10.113.105.18
port: "22"
internal_address: ""
role:
- worker
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: worker-2
address: 10.113.105.19
port: "22"
internal_address: ""
role:
- worker
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: worker-3
address: 10.113.105.20
port: "22"
internal_address: ""
role:
- worker
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: worker-4
address: 10.113.105.21
port: "22"
internal_address: ""
role:
- worker
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: worker-5
address: 10.113.105.22
port: "22"
internal_address: ""
role:
- worker
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
- hostname_override: worker-6
address: 10.113.105.23
port: "22"
internal_address: ""
role:
- worker
user: ubuntu
docker_socket: /var/run/docker.sock
ssh_key: ""
ssh_key_path: ~/.ssh/id_ed25519
ssh_cert: ""
ssh_cert_path: ""
labels: {}
taints: []
services:
etcd:
image: ""
extra_args: {}
extra_args_array: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_args_array: {}
win_extra_binds: []
win_extra_env: []
external_urls: []
ca_cert: ""
cert: ""
key: ""
path: ""
uid: 0
gid: 0
snapshot: null
retention: ""
creation: ""
backup_config: null
kube-api:
image: ""
extra_args_array: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_args_array: {}
win_extra_binds: []
win_extra_env: []
service_cluster_ip_range: 10.43.0.0/16
service_node_port_range: ""
pod_security_policy: false
always_pull_images: false
secrets_encryption_config: null
audit_log: null
admission_configuration: null
event_rate_limit: null
kube-controller:
image: ""
extra_args: {}
extra_args_array: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_args_array: {}
win_extra_binds: []
win_extra_env: []
cluster_cidr: 10.42.0.0/16
service_cluster_ip_range: 10.43.0.0/16
scheduler:
image: ""
extra_args: {}
extra_args_array: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_args_array: {}
win_extra_binds: []
win_extra_env: []
kubelet:
image: ""
extra_args:
runtime-request-timeout: "2h"
extra_args_array: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_args_array: {}
win_extra_binds: []
win_extra_env: []
cluster_domain: cluster.local
infra_container_image: ""
cluster_dns_server: 10.43.0.10
fail_swap_on: false
generate_serving_certificate: false
kubeproxy:
image: ""
extra_args: {}
extra_args_array: {}
extra_binds: []
extra_env: []
win_extra_args: {}
win_extra_args_array: {}
win_extra_binds: []
win_extra_env: []
network:
# http_proxy: "http://my-proxy.com:80/"
# https_proxy: "http://my-proxy.com:80/"
# no_proxy: "localhost,127.0.0.1,10.43.0.1"
plugin: calico
options: {}
mtu: 0
node_selector: {}
update_strategy: null
tolerations: []
authentication:
strategy: x509
sans: []
webhook: null
addons: |-
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: metallb-nginx-ingress
namespace: metallb-system
spec:
autoAssign: false
addresses:
- 10.113.105.197/32
- 10.113.105.198/32
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: metallb-auto-assign
namespace: metallb-system
spec:
addresses:
- 10.113.105.213/32
- 10.113.105.214/32
- 10.113.105.215/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: metallb
namespace: metallb-system
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-external
namespace: ingress-nginx
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: 443
selector:
app: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
sessionAffinity: None
type: LoadBalancer
loadBalancerIP: 10.113.105.197
addons_include:
- https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
system_images:
etcd: rancher/mirrored-coreos-etcd:v3.5.4
alpine: rancher/rke-tools:v0.1.87
nginx_proxy: rancher/rke-tools:v0.1.87
cert_downloader: rancher/rke-tools:v0.1.87
kubernetes_services_sidecar: rancher/rke-tools:v0.1.87
kubedns: rancher/mirrored-k8s-dns-kube-dns:1.21.1
dnsmasq: rancher/mirrored-k8s-dns-dnsmasq-nanny:1.21.1
kubedns_sidecar: rancher/mirrored-k8s-dns-sidecar:1.21.1
kubedns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.5
coredns: rancher/mirrored-coredns-coredns:1.9.3
coredns_autoscaler: rancher/mirrored-cluster-proportional-autoscaler:1.8.5
nodelocal: rancher/mirrored-k8s-dns-node-cache:1.21.1
kubernetes: rancher/hyperkube:v1.24.4-rancher1
flannel: rancher/mirrored-coreos-flannel:v0.15.1
flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
calico_node: rancher/mirrored-calico-node:v3.22.0
calico_cni: rancher/calico-cni:v3.22.0-rancher1
calico_controllers: rancher/mirrored-calico-kube-controllers:v3.22.0
calico_ctl: rancher/mirrored-calico-ctl:v3.22.0
calico_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.22.0
canal_node: rancher/mirrored-calico-node:v3.22.0
canal_cni: rancher/calico-cni:v3.22.0-rancher1
canal_controllers: rancher/mirrored-calico-kube-controllers:v3.22.0
canal_flannel: rancher/mirrored-flannelcni-flannel:v0.17.0
canal_flexvol: rancher/mirrored-calico-pod2daemon-flexvol:v3.22.0
weave_node: weaveworks/weave-kube:2.8.1
weave_cni: weaveworks/weave-npc:2.8.1
pod_infra_container: rancher/mirrored-pause:3.6
ingress: rancher/nginx-ingress-controller:nginx-1.2.1-rancher1
ingress_backend: rancher/mirrored-nginx-ingress-controller-defaultbackend:1.5-rancher1
ingress_webhook: rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.1.1
metrics_server: rancher/mirrored-metrics-server:v0.6.1
windows_pod_infra_container: rancher/mirrored-pause:3.6
aci_cni_deploy_container: noiro/cnideploy:5.2.3.2.1d150da
aci_host_container: noiro/aci-containers-host:5.2.3.2.1d150da
aci_opflex_container: noiro/opflex:5.2.3.2.1d150da
aci_mcast_container: noiro/opflex:5.2.3.2.1d150da
aci_ovs_container: noiro/openvswitch:5.2.3.2.1d150da
aci_controller_container: noiro/aci-containers-controller:5.2.3.2.1d150da
aci_gbp_server_container: noiro/gbp-server:5.2.3.2.1d150da
aci_opflex_server_container: noiro/opflex-server:5.2.3.2.1d150da
ssh_key_path: id_ed25519
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
mode: rbac
options: {}
ignore_docker_version: null
enable_cri_dockerd: null
kubernetes_version: ""
private_registries: []
ingress:
provider: nginx
network_mode: none
# extra_args:
# default-ssl-certificate: "ingress-nginx/my-ingress-cert-com-tls"
cluster_name: ""
cloud_provider:
name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
address: ""
port: ""
user: ""
ssh_key: ""
ssh_key_path: ""
ssh_cert: ""
ssh_cert_path: ""
ignore_proxy_env_vars: false
monitoring:
provider: ""
options: {}
node_selector: {}
update_strategy: null
replicas: null
tolerations: []
metrics_server_priority_class_name: ""
restore:
restore: false
snapshot_name: ""
rotate_encryption_key: false
dns: null
longhorn-storageclass.yaml
# Source: longhorn/templates/storageclass.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: longhorn-storageclass
namespace: longhorn-system
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
data:
storageclass.yaml: |
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn-static
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Retain"
volumeBindingMode: Immediate
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "30"
fromBackup: ""
fsType: "ext4"
dataLocality: "disabled"
longhorn-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: longhorn-ingress
namespace: longhorn-system
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: my-cluster.com
http:
paths:
- path: /longhorn(/|$)(.*)
pathType: Prefix
backend:
service:
name: longhorn-frontend
port:
number: 80
tls:
- hosts:
- my-cluster.com
secretName: my-ingress-cert-com-tls
- Generate the RKE `cluster.yml` config file or use the example above
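The interactive prompts below come from running `rke config`, which builds `cluster.yml` in the current directory:
```bash
rke config
```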
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: id_ed25519
[+] Number of Hosts [1]: 3
[+] SSH Address of host (1) [none]: 10.113.105.199
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (10.113.105.199) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.113.105.199) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: id_ed25519
[+] SSH User of host (10.113.105.199) [ubuntu]:
[+] Is host (10.113.105.199) a Control Plane host (y/n)? [y]:
[+] Is host (10.113.105.199) a Worker host (y/n)? [n]:
[+] Is host (10.113.105.199) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (10.113.105.199) [none]: control_1
[+] Internal IP of host (10.113.105.199) [none]:
[+] Docker socket path on host (10.113.105.199) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: 10.113.105.204
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (10.113.105.204) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.113.105.204) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: id_ed25519
[+] SSH User of host (10.113.105.204) [ubuntu]:
[+] Is host (10.113.105.204) a Control Plane host (y/n)? [y]: n
[+] Is host (10.113.105.204) a Worker host (y/n)? [n]: y
[+] Is host (10.113.105.204) an etcd host (y/n)? [n]:
[+] Override Hostname of host (10.113.105.204) [none]: worker_1
[+] Internal IP of host (10.113.105.204) [none]:
[+] Docker socket path on host (10.113.105.204) [/var/run/docker.sock]:
[+] SSH Address of host (3) [none]: 10.113.105.209
[+] SSH Port of host (3) [22]:
[+] SSH Private Key Path of host (10.113.105.209) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.113.105.209) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: id_ed25519
[+] SSH User of host (10.113.105.209) [ubuntu]:
[+] Is host (10.113.105.209) a Control Plane host (y/n)? [y]: n
[+] Is host (10.113.105.209) a Worker host (y/n)? [n]: y
[+] Is host (10.113.105.209) an etcd host (y/n)? [n]:
[+] Override Hostname of host (10.113.105.209) [none]: worker_2
[+] Internal IP of host (10.113.105.209) [none]:
[+] Docker socket path on host (10.113.105.209) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]: calico
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.24.4-rancher1]:
[+] Cluster domain [cluster.local]: cluster.local
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
Build the cluster
- Run `rke up` from the folder which contains the `cluster.yml` file
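Once `rke up` completes, RKE writes a kubeconfig file named `kube_config_cluster.yml` alongside `cluster.yml`; a quick way to confirm the nodes are ready (the export path assumes you run this from the same folder):
```bash
rke up
# RKE writes the kubeconfig for the new cluster into the working directory
export KUBECONFIG=$(pwd)/kube_config_cluster.yml
kubectl get nodes
```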
Install MetalLB into cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
- Re-run `rke up` to apply the MetalLB settings from the `cluster.yml` file
Install Longhorn CSI Driver Using Helm
- Use Helm to install Longhorn version 1.4.2
helm install longhorn longhorn/longhorn --namespace longhorn-system --version 1.4.2 --create-namespace
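This assumes the Longhorn chart repository has already been added to Helm; if it hasn't, add it first:
```bash
helm repo add longhorn https://charts.longhorn.io
helm repo update
```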
- Confirm the pods are installing
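For example, watch the pods in the `longhorn-system` namespace until they are all `Running`:
```bash
kubectl -n longhorn-system get pods --watch
```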
- Update the Longhorn storageclass name to `standard`
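One way to do this, assuming the `longhorn-storageclass.yaml` ConfigMap shown earlier has been edited with the storage class name you want, is to re-apply it:
```bash
kubectl apply -f longhorn-storageclass.yaml
```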
- After a few minutes check that it has updated correctly
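For example, list the storage classes and confirm the name and the default-class annotation look correct:
```bash
kubectl get storageclass
```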
- Add the ingress
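Assuming the `longhorn-ingress.yaml` manifest from earlier is saved locally:
```bash
kubectl apply -f longhorn-ingress.yaml
```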