Merge pull request #1354 from gravitl/feature_v0.14.5_kube

renaming folder
dcarns, 2022-07-07 16:54:55 -04:00, committed by GitHub
commit 4fd523487e
6 changed files with 252 additions and 0 deletions

k8s/server/README.md (new file, 101 additions)

@@ -0,0 +1,101 @@
# Netmaker K8S YAML Templates - Run Netmaker on Kubernetes
This guide walks you through setting up a highly available Netmaker deployment on Kubernetes. Note: do not attempt to control networking on the Kubernetes cluster where Netmaker itself is deployed; this can result in circular logic and unreachable networks! Typically, such a cluster should be designated as a "control" cluster.
If you want a simpler Kubernetes setup, we recommend [this community project](https://github.com/geragcp/netmaker-k3s). It is specific to K3s, but should be adaptable to most Kubernetes distributions.
### 0. Prerequisites
Your cluster must meet a few conditions to host a Netmaker server. Primarily:
a) **Nodes:** You must have at least 3 worker nodes available for Netmaker to deploy. Netmaker pods have anti-affinity and will not be scheduled on the same Kubernetes node.
b) **Storage:** RWX and RWO storage classes must be available.
c) **Ingress:** Ingress must be configured with certs. Traefik + cert-manager are preferred. Additionally, be sure to have a wildcard DNS entry for use with Ingress/Netmaker.
d) **MQ Broker Considerations:** The provided manifests expose MQ via NodePort, which means your server settings must use the IP address of **one node**; MQ will therefore not be highly available, so be forewarned. Alternatively, you can expose MQ with a raw LoadBalancer object if you have an external load balancer configured. Finally, you can use a special TCPIngressRoute if Traefik is your Ingress provider ([see this repo](https://github.com/geragcp/netmaker-k3s) for an example); however, since this is not a standard Kubernetes object, we avoid it in our setup to keep it replicable across different clusters.
e) **Helm:** Our PostgreSQL installation relies on a Helm chart, so you must have Helm installed and configured.
A quick sanity check of these prerequisites is sketched below.
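A minimal sketch of such a check (resource names will vary by cluster):
```bash
kubectl get nodes                 # expect at least 3 schedulable worker nodes
kubectl get storageclass          # confirm classes supporting RWX and RWO exist
kubectl get pods -n cert-manager  # if cert-manager issues your Ingress certs
helm version                      # confirm Helm is installed and configured
```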
Assuming you are prepared for the above, we can begin to deploy Netmaker.
### 1. Create Namespace
`kubectl create ns netmaker`
`kubectl config set-context --current --namespace=netmaker`
### 2. Deploy Database
Netmaker can use SQLite, PostgreSQL, or rqlite as its backing database. For HA, we recommend PostgreSQL, as Bitnami provides a reliable Helm chart for deploying an HA PostgreSQL cluster.
Follow these instructions:
https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha
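If the Bitnami repository is not yet configured for your Helm installation, add it first:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```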
`helm install postgres bitnami/postgresql`
Note: this command installs Bitnami's standard PostgreSQL chart, which matches the secret lookup below; for a truly HA cluster, install `bitnami/postgresql-ha` per the instructions linked above and adjust the secret name accordingly.
Once completed, retrieve the password to access postgres:
`kubectl get secret --namespace netmaker postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d`
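Optionally, capture it in a shell variable for the sed commands in step 4 (this assumes the release name `postgres` used above):
```bash
POSTGRES_PASSWORD=$(kubectl get secret --namespace netmaker postgres-postgresql \
  -o jsonpath="{.data.postgres-password}" | base64 -d)
```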
### 3. Deploy MQTT
Our deployment of MQTT will not be HA; for now, we will just use a NodePort. An HA setup would require an external LoadBalancer or a TCPIngressRoute (Traefik Ingress only), and the right choice depends on your specific cluster.
**Important:** If you choose a different method such as a LoadBalancer, make sure latency between clients and MQ is not significant. In testing, we found that some LoadBalancers introduce too much latency, making MQ unusable.
Choose a cluster node to host MQTT and label it; the mosquitto deployment pins itself to this node via a `mqhost=true` node-affinity rule:
`kubectl label node <your node name> mqhost=true`
You also need an RWX storage class. Run the following to substitute your RWX storage class name into the manifest:
`sed -i 's/RWX_STORAGE_CLASS/<your storage class name>/g' mosquitto.yaml`
Now, apply the file:
`kubectl apply -f mosquitto.yaml`
MQ should sit in CrashLoopBackOff until Netmaker is deployed (the TLS certs it expects have not been generated yet). If it is stuck in Pending, check the PVC and the pod status (the node label from above may be missing or incorrect).
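To inspect the broker, the labels from `mosquitto.yaml` can be used as selectors:
```bash
kubectl get pods -l app.kubernetes.io/name=mosquitto       # expect CrashLoopBackOff for now
kubectl describe pod -l app.kubernetes.io/name=mosquitto   # check events for PVC/scheduling issues
```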
### 4. Deploy Netmaker Server
Make sure wildcard DNS is set up for a Netmaker subdomain, for instance: nm.mydomain.com. If you do not wish to use a wildcard, edit the YAML file directly. Note that you will need entries for broker.<domain>, api.<domain>, and dashboard.<domain>.
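You can confirm the records resolve before continuing, for example with the subdomain above:
`dig +short api.nm.mydomain.com`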
`sed -i 's/NETMAKER_SUBDOMAIN/<your subdomain>/g' netmaker-server.yaml`
Next, enter your postgres info, including the name of your postgres deployment and the password you retrieved above.
`sed -i 's/DB_NAME/<postgres helm name>/g' netmaker-server.yaml`
`sed -i 's/DB_PASS/<postgres helm password>/g' netmaker-server.yaml`
Next, choose a secret password for your Netmaker API:
`sed -i 's/REPLACE_MASTER_KEY/<super secret password>/g' netmaker-server.yaml`
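If you do not have a password in mind, one way to generate and insert a strong key in a single step:
`sed -i "s/REPLACE_MASTER_KEY/$(openssl rand -hex 32)/g" netmaker-server.yaml`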
Finally, you will need to create an Ingress object for your Netmaker API. An example for nginx + Let's Encrypt is included in the YAML file. You may use/modify that example, create your own Ingress that routes to the `netmaker-rest` service on port 8081, or adapt the sketch below. But make sure to deploy Ingress before moving on!
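A minimal sketch of such an Ingress for nginx + cert-manager; the issuer name, TLS secret name, and hostname are placeholders to adapt (only the `netmaker-rest` service and port 8081 come from this guide):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: netmaker-api
  namespace: netmaker
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder issuer name
spec:
  ingressClassName: nginx
  rules:
    - host: api.nm.mydomain.com   # replace with api.<your subdomain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: netmaker-rest
                port:
                  number: 8081
  tls:
    - hosts:
        - api.nm.mydomain.com     # replace with api.<your subdomain>
      secretName: netmaker-api-tls   # placeholder secret name
```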
### 5. Deploy Netmaker UI
Much like above, make sure wildcard DNS is configured and plan for Ingress. Once again, add in your subdomain:
`sed -i 's/NETMAKER_SUBDOMAIN/<your subdomain>/g' netmaker-ui.yaml`
Again, Ingress is commented out. If you are using Nginx + LetsEncrypt, you can uncomment and use the yaml. Otherwise, set up Ingress manually.
At this point, you should be able to reach the dashboard at dashboard.<your domain> and start setting up your networks.
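A quick way to verify that everything is up:
```bash
kubectl get pods -n netmaker      # netmaker-0 through netmaker-2, the UI, and mosquitto should be Running
kubectl get ingress -n netmaker   # confirm hosts and addresses are assigned
```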
### Troubleshooting
Sometimes the server has a hard time connecting to MQ with the self-generated certs on the first try. If this happens, try the following:
1. Restart MQ: `kubectl delete pod <mq pod name>`
2. Restart the Netmaker pods:
2.a. `kubectl scale sts netmaker --replicas=0`
2.b. `kubectl delete pods netmaker-0 netmaker-1 netmaker-2`
2.c. `kubectl scale sts netmaker --replicas=3`
If that does not help, delete the certs in MQ before running the above:
1. `kubectl exec -it mosquitto-<pod name> -- /bin/sh`
2. `rm /mosquitto/certs/*`
3. `exit`
4. `kubectl delete pod mosquitto-<pod name>`
5. `kubectl scale sts netmaker --replicas=0` (wait until the pods are down), then `kubectl scale sts netmaker --replicas=3`
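If the pods keep failing, the logs usually point at the culprit:
```bash
kubectl logs deploy/mosquitto   # broker-side TLS/listener errors
kubectl logs netmaker-0         # server-side MQ connection errors
```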

k8s/server/mosquitto.yaml (new file, 151 additions)

@@ -0,0 +1,151 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: mosquitto
      app.kubernetes.io/name: mosquitto
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: mosquitto
        app.kubernetes.io/name: mosquitto
    spec:
      # Pin the broker to the node labeled in step 3 (kubectl label node <name> mqhost=true)
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: mqhost
                    operator: In
                    values:
                      - "true"
      containers:
        - image: eclipse-mosquitto:2.0.11-openssl
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 8883
            timeoutSeconds: 1
          name: mosquitto
          ports:
            # 1883: plain listener; 8883: TLS listener (see mosquitto-config below)
            - containerPort: 1883
              name: mqtt
              protocol: TCP
            - containerPort: 8883
              name: mqtt2
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 8883
            timeoutSeconds: 1
          resources: {}
          startupProbe:
            failureThreshold: 30
            periodSeconds: 5
            successThreshold: 1
            tcpSocket:
              port: 8883
            timeoutSeconds: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /mosquitto/config/mosquitto.conf
              name: mosquitto-config
              subPath: mosquitto.conf
            # RWX volume shared with the Netmaker server, which generates the TLS certs
            - mountPath: /mosquitto/certs
              name: shared-certs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            name: mosquitto-config
          name: mosquitto-config
        - name: shared-certs
          persistentVolumeClaim:
            claimName: shared-certs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mq
  namespace: netmaker
spec:
  ports:
    - name: mqtt
      port: 1883
      protocol: TCP
      targetPort: mqtt    # plain listener (container port 1883)
    - name: mqtt2
      port: 8883
      protocol: TCP
      targetPort: mqtt2   # TLS listener (container port 8883)
  selector:
    app.kubernetes.io/instance: mosquitto
    app.kubernetes.io/name: mosquitto
  sessionAffinity: None
---
apiVersion: v1
data:
  mosquitto.conf: |
    per_listener_settings true
    listener 8883
    allow_anonymous false
    require_certificate true
    use_identity_as_username true
    cafile /mosquitto/certs/root.pem
    certfile /mosquitto/certs/server.pem
    keyfile /mosquitto/certs/server.key
    listener 1883
    allow_anonymous true
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: mosquitto
    app.kubernetes.io/name: mosquitto
  name: mosquitto-config
  namespace: netmaker
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-certs-pvc
spec:
  storageClassName: RWX_STORAGE_CLASS
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: netmaker-mqtt
  labels:
    name: 'netmaker-mqtt'
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/instance: mosquitto
    app.kubernetes.io/name: mosquitto
  ports:
    # Expose the TLS listener on a fixed NodePort (within the default 30000-32767 range)
    - port: 31883
      nodePort: 31883
      protocol: TCP
      targetPort: 8883
      name: nm-mqtt