CI / CD
This project comes with Tekton as its CI/CD engine. It is designed to run in a Kubernetes cluster.
Why Tekton?
Tekton is a Kubernetes-native open source framework for creating continuous integration and delivery (CI/CD) systems. It consists of various components that run as pods in a Kubernetes cluster. These components work together to provide a complete CI/CD solution, from source code management to deployment.
The main argument that made me choose it is that it is a graduated project of the Continuous Delivery Foundation (CDF). The way Tekton works also makes it easy to migrate between different CI/CD engines.
How to use it?
Using the Tekton pipeline that I provide is quite simple.
- Register your repository in the master Flux GitOps repository.
- Then, with the right permissions, create a folder named deploy at the root of the repository.
- In this folder, create two folders:
- cicdv2, which contains a Helm chart with everything needed by the CI/CD pipeline
- helm, which contains the Helm chart of the application that you want to deploy
- Then the owner (me) has to run a Terraform script that creates the required resources (Harbor project, pipeline webhook secret, ...)
- Once that's done and the Flux reconciliation has triggered, you can see the pipeline definition in the Tekton dashboard.
- Then you can set up your repository to call a webhook that triggers the pipeline.
- Enjoy!
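Putting the steps above together, the repository layout might look like this (a sketch; the repository and file names inside the charts are illustrative, only the deploy/cicdv2 and deploy/helm folders are required):

```
my-app/                      # your application repository
└── deploy/
    ├── cicdv2/              # Helm chart with the CI/CD pipeline resources
    │   ├── Chart.yaml
    │   └── ...
    └── helm/                # Helm chart of the application itself
        ├── Chart.yaml
        └── ...
```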
What's needed in cicdv2?
One of the good points of the fact that I use this project myself is that you can find a working example in the cicdv2 folder here.
cicdv2/helm
This folder provides different kinds of resources:
- /dep: includes each Task used in this project (git-clone, helm-upgrade, kubernetes-action, sonarqube, buildkit, ...)
- pipeline.yaml: the pipeline definition, including parameters (like the commit to clone), Workspaces (Kubernetes Volumes, Secrets, or ConfigMaps that can be shared between Tasks), and the Tasks (the steps that make up the pipeline, with a mapping of the Workspaces and parameters)
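A minimal sketch of what such a pipeline definition looks like (the names, the parameter, and the single task are illustrative, not the actual pipeline):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: app-pipeline          # illustrative name
spec:
  params:
    - name: git-revision      # the commit to clone
      type: string
  workspaces:
    - name: source            # volume shared between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone       # one of the tasks from /dep
      params:
        - name: revision
          value: $(params.git-revision)
      workspaces:
        - name: output        # git-clone writes the sources here
          workspace: source
```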
- eventListener.yaml: listens for the webhook, executes interceptors to validate the payload, and then triggers the pipeline and binds the parameters.
- triggerBindings.yaml: binds parameters from the payload provided by the webhook.
- triggerTemplate.yaml: defines the PipelineRun created for each webhook call, wiring in the parameters from the triggerBindings and the Workspaces.
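The three trigger resources above fit together roughly like this (a sketch; all names and the payload field are illustrative, the real manifests live in the cicdv2 chart):

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: app-binding             # illustrative
spec:
  params:
    - name: git-revision
      value: $(body.after)      # commit SHA from a push payload
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: app-template            # illustrative
spec:
  params:
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1
      kind: PipelineRun         # created for each webhook call
      metadata:
        generateName: app-pipeline-run-
      spec:
        pipelineRef:
          name: app-pipeline
        params:
          - name: git-revision
            value: $(tt.params.git-revision)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: app-listener            # illustrative
spec:
  triggers:
    - name: push
      interceptors:             # validate the webhook payload
        - ref:
            name: github
      bindings:
        - ref: app-binding
      template:
        ref: app-template
```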
- ingress.yaml: exposes the eventListener to the outside world.
- rbac.yaml: grants the eventListener the rights to trigger the pipeline and deploy everything that is needed. (Note that these roles are not cluster-wide: a job executed in the pipeline reduces these rights to the namespace where the pipeline runs.)
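A namespace-scoped Role/RoleBinding for the eventListener might be sketched as follows (resource names, the service account, and the exact rules are illustrative assumptions, not the actual rbac.yaml):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: el-pipeline-role        # scoped to the pipeline namespace
rules:
  - apiGroups: ["triggers.tekton.dev"]
    resources: ["eventlisteners", "triggerbindings", "triggertemplates"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["tekton.dev"]
    resources: ["pipelineruns"]
    verbs: ["create"]           # lets the listener start the pipeline
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: el-pipeline-binding
subjects:
  - kind: ServiceAccount
    name: tekton-triggers-sa    # illustrative service account
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: el-pipeline-role
```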
- job-kubeconfig.yaml: a Terraform CronJob that creates the kubeconfig needed by the pipeline to execute tasks in the namespace.
- job-volume.yaml: the volume used to keep the state of the kubeconfig secret between executions of the job.
- terraform-sa.yaml: includes the Terraform source code that creates the needed resources.
In addition to that, I provide some other resources generated by the Terraform job, such as:
- harbor-secret: allows pushing your image or OCI artifact to the Harbor registry.
- github-secret: contains the secret that the webhook needs to include.
- sonarqube-secret: provides the token needed to authenticate the SonarQube scanner.
- oidc-secret: if needed, provides an OIDC application to set up auth in your app.
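As an example of how one of these generated secrets might be consumed, a PipelineRun could mount harbor-secret as a Workspace for the image build/push task (a sketch; the pipeline and workspace names are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: app-pipeline-run-
spec:
  pipelineRef:
    name: app-pipeline            # illustrative pipeline name
  workspaces:
    - name: dockerconfig          # illustrative workspace name
      secret:
        secretName: harbor-secret # registry credentials generated by Terraform
```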