Kubernetes multicluster add-on bootstrapping at scale with GitOps using ArgoCD’s ApplicationSet Controller
Organizations adopting containers and Kubernetes inevitably end up running and maintaining multiple Kubernetes clusters. Some serve as production, some as QA, while others are short-lived dev environments. Development teams often require third-party off-the-shelf applications (aka cluster add-ons) such as Prometheus, Grafana, Elasticsearch, Kafka, and Argo Workflows.
Kubernetes administrators are responsible for bootstrapping the add-ons required by the development teams. Since installing these add-ons requires cluster-level permissions that individual development teams do not hold, installation falls to an organization's infrastructure/ops team, and within a large organization this team might be responsible for tens, hundreds, or thousands of Kubernetes clusters.
With new clusters being added, modified, and removed on a regular basis, organizations need to automate cluster add-on bootstrapping so that it scales across a large number of clusters and responds automatically to the lifecycle of new clusters.
Cluster GitOps is one of the solutions to this problem.
What are Cluster GitOps and ArgoCD?
Cluster GitOps allows you to create and manage your actual clusters declaratively, just as you are used to doing with your application workloads. Declarative configurations of cluster add-ons, such as the Prometheus operator, or controllers, such as the argo-workflows controller, are maintained in a Git repository (the single source of truth) like source code, and an automated process ensures this desired state across multiple clusters.
ArgoCD is a declarative GitOps tool built to deploy applications to Kubernetes. It ensures that the desired state declared in the Git repository and the actual state of the Kubernetes environments are always in sync.
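For illustration, here is a minimal sketch of such a declarative configuration as an Argo CD Application resource; the repository URL and paths are placeholders for your own manifests, not part of this post's setup:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus-operator
  namespace: argocd
spec:
  project: default
  source:
    # Placeholder repository; point this at your own add-on manifests
    repoURL: https://github.com/example-org/cluster-addons.git
    targetRevision: HEAD
    path: prometheus-operator
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
Argo CD continuously compares what a manifest like this declares against what is actually running in the destination cluster and reconciles any drift.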

The following diagram depicts a scenario of an infrastructure team maintaining a Git repository containing application manifests for the Argo Workflows controller and the Prometheus operator. The infrastructure team would like to deploy both of these add-ons to a large number of clusters using Argo CD.

Environment Setup
For this post, I am going to use k3d to set up a multicluster environment on my Ubuntu 16.04 machine. I have written posts about Kubernetes cluster setup and ArgoCD setup for development on a personal laptop. I recommend going through those posts before proceeding.
If you already have multiple clusters, you can skip this section and go directly to the next section, “ArgoCD Installation”.

First, let us set up two k3s Kubernetes clusters, cluster1 and cluster2, using k3d. For detailed k3d installation instructions, refer to my previous post.
# My host IP is 10.76.111.19
# Replace the host IP with yours

# Using 6443 as API port for cluster1
k3d cluster create cluster1 --port 8080:80@loadbalancer --port 8443:443@loadbalancer --api-port 10.76.111.19:6443 --k3s-server-arg --tls-san="10.76.111.19"

# Using 6444 as API port for cluster2
# Replace the host IP 10.76.111.19 with yours
k3d cluster create cluster2 --api-port 10.76.111.19:6444 --k3s-server-arg --tls-san="10.76.111.19"

# Confirm the clusters by viewing the configuration
kubectl config view

# The above command should return something similar to the below
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.76.111.19:6443
  name: k3d-cluster1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.76.111.19:6444
  name: k3d-cluster2
contexts:
- context:
    cluster: k3d-cluster1
    user: admin@k3d-cluster1
  name: k3d-cluster1
- context:
    cluster: k3d-cluster2
    user: admin@k3d-cluster2
  name: k3d-cluster2
current-context: k3d-cluster2
kind: Config
preferences: {}
users:
- name: admin@k3d-cluster1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: admin@k3d-cluster2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
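As a quick sanity check, you can also list the clusters with k3d itself:
# Both clusters should show up as running
k3d cluster list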
ArgoCD Installation
Now, let us install ArgoCD in cluster1. Refer to my previous post for detailed ArgoCD setup instructions.
# Switch the context to use cluster1
kubectl config use-context k3d-cluster1

# Download the ArgoCD manifest for version 2.0.1
wget https://raw.githubusercontent.com/argoproj/argo-cd/v2.0.1/manifests/install.yaml

# Edit the downloaded install.yaml in your favourite text editor
# Add --insecure and --rootpath parameters to the argocd-server container
...
containers:
- command:
  - argocd-server
  - --staticassets
  - /shared/app
  # Add insecure and argocd as rootpath
  - --insecure
  - --rootpath
  - /argocd
  image: quay.io/argoproj/argocd:v2.0.1
  imagePullPolicy: Always
...

# Create an argocd namespace
kubectl create namespace argocd

# Deploy ArgoCD resources in the argocd namespace
kubectl create -n argocd -f install.yaml

# Create an Ingress to redirect /argocd to the argocd service
cat > ingress.yaml << EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argocd-ingress
  labels:
    app: argocd
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /argocd
        backend:
          serviceName: argocd-server
          servicePort: 80
EOF

# Apply the ingress configuration
kubectl apply -f ingress.yaml -n argocd
This will create a new namespace, argocd, where the Argo CD services and application resources are deployed. This step could take several seconds to bring up the Argo CD application.
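Rather than guessing, you can watch the pods in the argocd namespace until everything is up:
# Watch the Argo CD pods until they are all Running
kubectl get pods -n argocd -w

# Or block until every pod reports Ready
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s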
Once the application is deployed, you can view the Argo CD web UI at http://localhost:8080/argocd

The initial password for the admin account is auto-generated and stored as clear text in the field password in a secret named argocd-initial-admin-secret in your Argo CD installation namespace. You can simply retrieve this password using kubectl:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
Using the username admin and the password from above, you can log in to the Argo CD web UI.

Also, you can see that our local cluster cluster1 is already registered with ArgoCD.

Let us add our cluster2 to Argo CD. We need to set up the ArgoCD CLI for adding clusters to ArgoCD. More detailed ArgoCD CLI installation instructions can be found in the official CLI installation documentation.
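If you do not have the CLI yet, the following is a minimal sketch for a Linux amd64 machine, pinned to the same v2.0.1 release used above; see the official documentation for other platforms and versions:
# Download the argocd CLI binary and make it executable
sudo curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.0.1/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd

# Verify the client installation
argocd version --client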
The Argo CD CLI has to talk to the Argo CD server APIs. There are several options for exposing the server APIs, which are covered in the official documentation. For this post, I am going to use the port forwarding option.
kubectl port-forward svc/argocd-server -n argocd 8081:443
Using the username admin and the password from above, log in to Argo CD's IP or hostname:
argocd login localhost:8081
WARNING: server is not configured with TLS. Proceed (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8081' updated
The next step is to add cluster2 to ArgoCD:
argocd cluster add k3d-cluster2
INFO[0000] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" created
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created
Cluster 'https://10.76.111.19:6444' added
Now you can see both local clusters listed in the ArgoCD web UI.
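The same can be confirmed from the CLI:
# Both cluster API endpoints should be listed
argocd cluster list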

ArgoCD’s ApplicationSet Controller
Argo CD v2.0 introduces a new feature, the Argo CD ApplicationSet, to make managing thousands of applications almost as easy as managing one.
The ApplicationSet controller is a Kubernetes controller that adds support for an ApplicationSet CustomResourceDefinition (CRD). This controller enables both automation and greater flexibility when managing Argo CD Applications across a large number of clusters. The ApplicationSet controller works alongside an existing Argo CD installation.
Read more about it at https://argocd-applicationset.readthedocs.io/
How the ApplicationSet controller interacts with Argo CD
When you create, update, or delete an ApplicationSet resource, the ApplicationSet controller responds by creating, updating, or deleting one or more corresponding Argo CD Application resources. The controller's only job is to ensure that the Application resources remain consistent with the declared ApplicationSet resource.
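To make this concrete, here is a minimal sketch, assuming two hypothetical clusters: a single ApplicationSet with a list generator, from which the controller renders one Application per element. The cluster names, URLs, and repository are placeholders, not part of this post's setup.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addons
spec:
  generators:
  # The controller renders one Application per element in this list
  - list:
      elements:
      - cluster: staging
        url: https://staging.example.com:6443
      - cluster: production
        url: https://production.example.com:6443
  template:
    metadata:
      name: '{{cluster}}-addons'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/cluster-addons.git
        targetRevision: HEAD
        path: manifests
      destination:
        server: '{{url}}'
        namespace: addons
Removing an element from the list, or deleting the ApplicationSet itself, causes the controller to delete the corresponding Application resources.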

App of Apps pattern vs ApplicationSet
Developers can use an app-of-apps pattern to define the entire application stack via a Git repository, and the cluster administrators can then review/accept changes to this repository via merge requests. While this might sound like an effective solution, a major disadvantage is that a high degree of trust/scrutiny is needed to accept commits containing Argo CD Application spec changes. This is because there are many sensitive fields within the Application spec, including project, cluster, and namespace. An inadvertent merge might allow applications to access namespaces or clusters where they do not belong.
Thus, in the self-service use case, administrators want developers to control only some fields of the Application spec (e.g. the Git source repository), while other fields (e.g. the target namespace or target cluster) should be restricted.
Fortunately, the ApplicationSet controller presents an alternative solution to this use case: cluster administrators may safely create an ApplicationSet resource containing a Git generator that restricts deployment of application resources to fixed values with the template field, while allowing developers to customize 'safe' fields at will.
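As a sketch of that idea (the repository URL and directory layout here are hypothetical), the administrator fixes the sensitive fields (project, destination cluster, and namespace) in the template, while developers self-service new deployments simply by adding directories to the Git repository:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-apps
spec:
  generators:
  # Every subdirectory under apps/ in the (hypothetical) repo becomes an Application
  - git:
      repoURL: https://github.com/example-org/team-apps.git
      revision: HEAD
      directories:
      - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      # Sensitive fields below are fixed by the administrator; developers
      # only influence what gets deployed by adding directories in Git
      project: default
      source:
        repoURL: https://github.com/example-org/team-apps.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
With this arrangement, a developer adding a directory under apps/ gets a new deployment, but cannot redirect it to another project, cluster, or namespace.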
Deploy an Argo Workflows cluster add-on via the ApplicationSet controller
As a cluster add-on example, I would like to demonstrate deploying Argo Workflows to multiple Kubernetes clusters using ArgoCD's ApplicationSet controller. I recommend reading my previous post on Argo Workflows setup.
In your case, you may want to apply the same logic to other cluster add-ons such as Prometheus, Grafana, Kafka, Elasticsearch, etc.
The first step is to install the ApplicationSet controller on cluster1 to run alongside the Argo CD application.
# Switch the context to use cluster1
kubectl config use-context k3d-cluster1

# Install the ApplicationSet controller alongside Argo CD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/applicationset/v0.1.0/manifests/install.yaml
Once the controller is installed, we can define an ApplicationSet resource to pull the Argo Workflows Kubernetes artifacts from my public GitHub repository. You can customize the ApplicationSet resource per your use case.
cat > application-set.yaml << EOF
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: argo-workflows
spec:
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{name}}-argo-workflows'
    spec:
      project: "default"
      source:
        repoURL: https://github.com/surenraju/argo-workflows-manifest.git
        targetRevision: HEAD
        path: kubernetes
      destination:
        server: '{{server}}'
        namespace: argo
      syncPolicy:
        syncOptions:
        - CreateNamespace=true
        automated:
          prune: true
          selfHeal: true
EOF

kubectl create -n argocd -f application-set.yaml
Now, Argo CD’s ApplicationSet controller will create Application resources to deploy the Argo Workflows application in both cluster1 and cluster2, which can be verified from ArgoCD’s web UI.
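You can also verify the generated Application resources from the command line:
# The ApplicationSet controller should have rendered one Application per registered cluster
kubectl get applications -n argocd

# Or list them via the Argo CD CLI
argocd app list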

The following picture shows the Argo Workflows application on cluster2 managed by the Argo CD Application running on cluster1.

Access Argo Server Dashboard
Let’s access the Argo Workflows web UI by doing a port forward. In the real world, you may want to create an Ingress or LoadBalancer to expose the service.
kubectl -n argo port-forward deployment/argo-server 2746:2746
This will serve the user interface at https://localhost:2746
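To verify the add-on end to end, you can submit the upstream hello-world example workflow (assuming it still lives at this path in the Argo Workflows repository) and watch it appear in the UI:
# Submit the hello-world example workflow into the argo namespace
kubectl create -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/hello-world.yaml

# Check the workflow status
kubectl get workflows -n argo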

In summary, Kubernetes administrators can use Cluster GitOps principles along with the Argo CD ApplicationSet to make managing thousands of applications almost as easy as managing one.