Deploy a GitOps-ready Kubernetes cluster / AKS + Terraform + ArgoCD

Thomas Decaux
5 min read · Oct 1, 2023

Be ready for a journey into the world of modern cloud-native infrastructure and DevOps practices. In this quick guide, we will explore the fascinating realm of Kubernetes and GitOps, and demonstrate how to deploy a Kubernetes cluster that is GitOps-ready with ArgoCD, all while leveraging the power of Terraform to orchestrate the infrastructure. Our destination? The Azure cloud platform, where we’ll seamlessly integrate with Azure Kubernetes Service (AKS).

Architecture

  1. Infrastructure Setup: We use Infrastructure as Code (IaC) to create the core infrastructure components (networking, host names, identity, storage, and compute resources), then provision a cloud-managed Kubernetes cluster.
  2. Kubernetes Configuration: Once the Kubernetes cluster is operational, the IaC tool deploys essential configuration such as Secrets and ConfigMaps. Then we deploy a GitOps tool with a bootstrap application definition.
  3. GitOps Deployment: The GitOps tool reads the specification of the bootstrap application, fetches code from a GitOps repository, and installs all the defined workloads (ingress, monitoring, etc.).
  4. Load Balancer Creation: A workload of kind Ingress creates a cloud Load Balancer.

This approach streamlines the deployment process, ensuring a robust and automated setup for your GitOps-ready Kubernetes cluster.
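Step 4 deserves a concrete illustration: on AKS, the cloud Load Balancer is actually materialized by a Service of type LoadBalancer, typically the one fronting the ingress controller. A minimal sketch, assuming an ingress-nginx controller and a pre-reserved static public IP living in the nodes resource group (names and the IP are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # tell the Azure cloud provider which resource group holds the reserved IP
    service.beta.kubernetes.io/azure-load-balancer-resource-group: aks-poc-nodes
spec:
  type: LoadBalancer
  loadBalancerIP: 20.0.0.1   # placeholder: the reserved static public IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```

When this Service is created, the Azure cloud controller provisions (or reuses) the Load Balancer and binds it to the reserved IP.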

Disclaimer:

I am not going to give full source code, but rather an overview of the architecture with some small code snippets. I am talking about a GitOps bootstrapping architecture, not a full GitOps project.

Implementation

Let’s deploy this using Azure managed Kubernetes: AKS.

IaC / Terraform

I split my IaC code into two Terraform projects: one to set up the infrastructure, one to bootstrap GitOps.

Terraform / infra

base

locals {
  infra_rg_name       = "aks-poc"
  infra_nodes_rg_name = "aks-poc-nodes"
}

# The main resource group
resource "azurerm_resource_group" "main" { name = local.infra_rg_name ...}

# The AKS nodes resource group (auto generated by AKS cluster)
data "azurerm_resource_group" "nodes" {
  depends_on = [azurerm_kubernetes_cluster.main]
  name       = local.infra_nodes_rg_name
}

networking (vnet / subnet / NSG rules, public IP)

# use an existing VNET
data "azurerm_virtual_network" "main" { ... }

# create the subnet
resource "azurerm_subnet" "aks" {
  ...
  virtual_network_name = data.azurerm_virtual_network.main.name
}

# secure it
resource "azurerm_network_security_group" "aks" {
  security_rule { ... }
  security_rule { ... }
}
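The elided security rules might look like this sketch; the names, priorities, and ports are assumptions, and the subnet association (not shown in the article) is what actually attaches the NSG:

```hcl
resource "azurerm_network_security_group" "aks" {
  name                = "aks-poc-nsg"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name

  # allow HTTP/HTTPS traffic to the public load balancer (assumed ports)
  security_rule {
    name                       = "allow-http-https"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_ranges    = ["80", "443"]
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# attach the NSG to the AKS subnet
resource "azurerm_subnet_network_security_group_association" "aks" {
  subnet_id                 = azurerm_subnet.aks.id
  network_security_group_id = azurerm_network_security_group.aks.id
}
```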

# create a public IP for the LB
resource "azurerm_public_ip" "public_lb" {
  name                = "aks-poc-lb"
  resource_group_name = local.infra_nodes_rg_name
  location            = azurerm_resource_group.main.location
  allocation_method   = "Static"
  sku                 = "Standard" # required by the AKS standard load balancer
}

# add DNS entry
resource "cloudns_dns_record" "public_lb_a" {
  ...
  value = azurerm_public_ip.public_lb.ip_address
  type  = "A"
}

identity (service principal, permissions for Kubernetes to create LoadBalancer)

# Azure AD service principal created by AKS
data "azuread_service_principal" "main" {
  depends_on   = [azurerm_kubernetes_cluster.main]
  display_name = var.kubernetes_cluster_name
}

# Give full rights under the AKS nodes resource group
resource "azurerm_role_assignment" "sudo_rg_nodes" {
  scope                            = data.azurerm_resource_group.nodes.id
  role_definition_name             = "Contributor"
  principal_id                     = data.azuread_service_principal.main.object_id
  skip_service_principal_aad_check = true
}

# If the VNET is under another RG, grant read access to it
resource "azurerm_role_definition" "network_vnet_reader" {
  name  = "custom-k8s-network-vnet-reader"
  scope = data.azurerm_virtual_network.main.id

  permissions {
    actions = [
      "Microsoft.Network/virtualNetworks/read",
      "Microsoft.Network/virtualNetworks/subnets/read",
      "Microsoft.Network/routeTables/routes/read",
      "Microsoft.Network/routeTables/routes/write"
    ]
    not_actions = []
  }
}

resource "azurerm_role_assignment" "network_vnet_reader" {
  scope              = data.azurerm_virtual_network.main.id
  role_definition_id = azurerm_role_definition.network_vnet_reader.role_definition_resource_id
  ...
}

aks instance

resource "azurerm_kubernetes_cluster" "main" {
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  name                = "poc"
  ...
  node_resource_group = local.infra_nodes_rg_name
  ...

  default_node_pool {
    name                        = "default"
    temporary_name_for_rotation = "tmpdefault"
    vnet_subnet_id              = azurerm_subnet.aks.id
    ...
  }

  linux_profile {
    ...
  }

  ...

  identity {
    type = "SystemAssigned"
  }

  azure_active_directory_role_based_access_control {
    managed                = true
    azure_rbac_enabled     = true
    admin_group_object_ids = var.kubernetes_rbac_admin_groups
  }
}

A few lines, even for a big cluster! At this point, we have a fully managed Kubernetes cluster, ready to use. To use kubectl, a very handy command:

az aks get-credentials --resource-group aks-poc --name poc --admin
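Since the cluster enables Azure AD RBAC, day-to-day (non-admin) access typically goes through kubelogin rather than the `--admin` flag; a sketch, assuming the Azure CLI is already logged in:

```shell
# fetch a user kubeconfig (no --admin), then convert it to use Azure CLI tokens
az aks get-credentials --resource-group aks-poc --name poc
kubelogin convert-kubeconfig -l azurecli
kubectl get nodes
```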

Terraform / git-ops bootstrap

The second Terraform project is very small: it only configures and deploys the GitOps tool. I use Terraform "data" sources to retrieve the infra resources:

locals {
  infra_rg_name       = "aks-poc"
  infra_nodes_rg_name = "aks-poc-nodes"
}

data "azurerm_kubernetes_cluster" "main" {
  name                = var.kubernetes_cluster_name
  resource_group_name = local.infra_rg_name
}

data "azurerm_public_ip" "public_lb" {
  name                = "aks-poc-lb"
  resource_group_name = local.infra_nodes_rg_name
}

We can now configure the Kubernetes and Helm providers:

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.main.kube_admin_config.0.host
  username               = data.azurerm_kubernetes_cluster.main.kube_admin_config.0.username
  password               = data.azurerm_kubernetes_cluster.main.kube_admin_config.0.password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.main.kube_admin_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.main.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.main.kube_admin_config.0.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    ...
  }
}
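One more provider is worth declaring: the kubectl_manifest resource used further down is not a HashiCorp resource, it comes from the community gavinbunney/kubectl provider, which must be declared and configured like the others. A sketch (the version constraint is an assumption):

```hcl
terraform {
  required_providers {
    azurerm = { source = "hashicorp/azurerm" }
    helm    = { source = "hashicorp/helm" }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}

# same credentials as the kubernetes provider, sourced from the AKS data source
provider "kubectl" {
  host                   = data.azurerm_kubernetes_cluster.main.kube_admin_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.main.kube_admin_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.main.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.main.kube_admin_config.0.cluster_ca_certificate)
  load_config_file       = false
}
```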

When deploying Kubernetes resources this way, add special labels and annotations so the GitOps tool will not try to remove them later! As you might have guessed, Terraform initially creates the GitOps tool, which is subsequently managed and modified by GitOps itself!

locals {
  argocd_resources_labels = {
    "app.kubernetes.io/instance"  = "argocd"
    "argocd.argoproj.io/instance" = "argocd"
  }

  argocd_resources_annotations = {
    "argocd.argoproj.io/compare-options" = "IgnoreExtraneous"
    "argocd.argoproj.io/sync-options"    = "Prune=false,Delete=false"
  }
}

Declare some resources, and the GitOps tool itself:

resource "kubernetes_namespace" "argocd" {
  depends_on = [data.azurerm_kubernetes_cluster.main]

  metadata {
    name = "argocd"
  }
}

# Auth to fetch git-ops code
resource "kubernetes_secret" "argocd_repo_credentials" {
  depends_on = [kubernetes_namespace.argocd]

  metadata {
    name      = "argocd-repo-credentials"
    namespace = "argocd"
    labels = merge(local.argocd_resources_labels, {
      "argocd.argoproj.io/secret-type" = "repo-creds"
    })
    annotations = local.argocd_resources_annotations
  }

  type = "Opaque"

  data = {
    url           = "git@github.com:ORG"
    sshPrivateKey = file("../files/githubSSHPrivateKey.key")
  }
}

resource "helm_release" "argocd" {
  name       = "argocd"
  namespace  = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  version    = "5.46.7"
  skip_crds  = true

  depends_on = [
    kubernetes_secret.argocd_repo_credentials,
  ]

  values = [
    file("../files/argocd-bootstrap-values.yaml"),
  ]
}
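The bootstrap values file itself is not shown; a minimal sketch of what it might contain for the argo-cd 5.x chart (every value here is an assumption, adjust to your needs):

```yaml
# ../files/argocd-bootstrap-values.yaml (illustrative)
configs:
  params:
    server.insecure: true   # plain HTTP, fine behind a port-forward
dex:
  enabled: false            # no SSO needed during bootstrap
notifications:
  enabled: false
```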

And the last piece, the bootstrap application:

resource "kubectl_manifest" "argocd_bootstrap" {
  depends_on = [helm_release.argocd]

  yaml_body = yamlencode({
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"

    metadata = {
      name      = "bootstrap-${var.kubernetes_cluster_name}"
      namespace = "argocd"
    }

    spec = {
      destination = {
        namespace = "argocd"
        name      = "in-cluster"
      }
      source = {
        repoURL = "git@github.com:ORG/GIT_OPS_PROJECT"
        helm = {
          values = yamlencode({
            network = {
              public = {
                domain = "aks-poc.ORG.com"
                ipName = data.azurerm_public_ip.public_lb.name
                ip     = data.azurerm_public_ip.public_lb.ip_address
              }
            }
          })
        }
      }
    }
  })
}
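Note that a real Application will also need a spec.project plus a path (or chart) and targetRevision in the source block; the snippet above elides them for brevity. Illustrative fields to merge into the spec (all values are assumptions):

```hcl
# illustrative additions to the Application spec above (values are assumptions)
project = "default"
source = {
  repoURL        = "git@github.com:ORG/GIT_OPS_PROJECT"
  path           = "bootstrap"   # assumed location of the bootstrap chart
  targetRevision = "main"
}
syncPolicy = {
  automated = {
    prune    = false
    selfHeal = true
  }
}
```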

After applying this Terraform project, ArgoCD is running. Let’s open the UI:

kubectl port-forward -n argocd svc/argocd-server 8080:80
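The UI asks for credentials; recent argo-cd charts generate an initial admin password and store it in a secret (assuming the defaults were not overridden):

```shell
# username is "admin"; the password is generated at install time
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```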

If we use the app-of-apps pattern, we should see the bootstrap application fan out into all its child applications.

(The ArgoCD app-of-apps pattern could be another story.)
