" It also works work existing Kubernetes clusters if just want to use GitOps for some applications. Just make sure that the argocd and glasskube-system namespaces are not yet in use. See: https://github.com/glasskube/gitops-template/ "
I assume this statement is for running this?
glasskube bootstrap git --url <your-repo> --username <org-or-username> --token <your-token>
I think I'd like to understand what the Argo CD / GitOps template is and how it's different from Argo CD Autopilot. Maybe some pictures of how Argo is deploying apps, etc.
IIUC, it's basically "manage your Glasskube packages from Git, thanks to ArgoCD".
The `glasskube install` command does a bunch of stuff that ends up as resources in your Kubernetes cluster, that are then interpreted by the Glasskube operator.
The "Gitops template" make use of ArgoCD and Git to do what `glasskube install` would have done.
Thanks. It sounds more like glasskube is a plugin for ArgoCD IIUC
I am not super thrilled about critical applications like Argo getting a plethora of plugins, otherwise we end up looping back to Jenkins and plugin hell
Off-topic: To be honest, after trying almost all the CI/CD offerings out there, CircleCI, Github Actions, Gitlab CI, Travis, etc... I've started to believe that none of them actually did it better than Jenkins (despite all its flaws).
On-topic: Glasskube isn't really an ArgoCD plugin as it can work standalone, but in 2024, can you really propose a package manager for k8s without having some integration with ArgoCD and GitOps in general?
If you want to migrate, having interoperability between the tools can make the process smoother. And if you don't want to migrate but still want to benefit from a centralized, curated, audited repository of packages for Kubernetes so that your "Powered by ArgoCD" GitOps setup is easier to manage, that's what the GitOps template proposes.
In Debian, you can just `apt install <that big thing i don't want to write a deploy script for>`. Imagine doing that with the usual big operators you want in your cluster (cert-manager, a hashicorp vault operator, istio or nginx ingress controller or envoy or ...). Sure, you could make your own .deb, your own repository, and manage dependencies yourself. But do you really want to?
I certainly prefer FluxCD, myself.
FluxCD is fully batteries-included, but the UI (3rd party!) leaves a lot to be desired, turns off a lot of developers, and as a result makes it difficult to get team buy-in.
ArgoCD is missing some critical systems, like how to tell when the underlying image needs updating. There are a number of ways to handle this but it's either a kludge, a plugin, or both. However, the UI is fantastic and very easy to pick up and understand; team buy-in is usually close to 100%.
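(For reference, the usual "plugin" answer on the Argo side is Argo CD Image Updater, which is driven by annotations on the Application; a minimal sketch, with a made-up image and repo:)

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: app=ghcr.io/example/app   # made-up image
    argocd-image-updater.argoproj.io/app.update-strategy: semver
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy   # made-up repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app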
I generally recommend starting with Argo, and once the team/projects have matured, migrating to FluxCD. Eventually you want to lock everyone out of the CD system, but initially people are skeptical and want to understand/watch everything work, especially during debugging.
Thanks linkdd. Exactly: Glasskube in "GitOps mode" will output these package custom resources as YAML so you can commit them to Git, and Argo pulls these resources from Git into the cluster.
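Presumably the Argo side is then just a regular Application pointed at the directory of committed package resources, something like this (repo URL and path are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: glasskube-packages
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-gitops-repo   # placeholder
    targetRevision: main
    path: packages   # directory containing the exported package YAML
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true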
I just set up an Argo CD Autopilot repository (https://github.com/pmig/argocd-autopilot) as a comparison. Autopilot gives you a great opinionated scaffold for internal applications.
Our template already includes update automations with the renovate integration and easier integration of third party applications.
I mean Renovate on GitHub is one file and an app integration. It takes very little effort to set up. What exactly do you mean by easier integration of third-party apps? Why wouldn't someone just use https://operatorhub.io/?
How does it compare with OpenShift's Operator Hub?
TL;DR: If you are already using OpenShift, make use of the Operator Hub; otherwise Glasskube is the more lightweight and simpler solution with similar concepts.
Sure, but all the operator hubs are just wrappers of the real operator, which is published on their operator hub page, so you can use https://github.com/apache/flink-kubernetes-operator which doesn't require OpenShift.
Yes, you can get started by executing this command:
kubectl create -f https://operatorhub.io/install/flink-kubernetes-operator.yam...
[1] https://operatorhub.io/how-to-install-an-operator
Our bootstrap git subcommand is similar to Argo CD Autopilot. I'll give it a try right now to be able to better state the differences and follow up on this question.
How would I integrate it into an existing setup that uses tools like Terraform, along with Helm?
See, there's a bunch of stuff that I deploy using terraform (VPC, DNS, EKS) and a bunch of stuff that I deploy with FluxCD. But in between, there's some awkward stuff like the monitoring stack that requires cloudy things and Kube things, and they're tightly coupled.
Right now I end up going with terraform and the awful helm provider. Many of these helm charts have sub-charts, nested values etc, but thankfully the monitoring stack doesn't change that much. It's still not ideal but it works as a one-shot deployment that sets up buckets, policies, IAM roles for service accounts, more policies for the roles, and finally the helm charts with values from the outputs of those cloud resources.
Instead of using Terraform's Helm provider, you can simply install the Glasskube package controller and provision the package custom resources via Flux. This is also how we manage our internal clusters.
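A rough sketch of what that can look like with Flux, assuming the package resources live in a ./packages folder of your config repo (URLs and names here are placeholders):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/your-org/cluster-config   # placeholder
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: glasskube-packages
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./packages   # directory containing the Glasskube package custom resources
  prune: true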
I recommend giving Glasskube a try with Minikube locally and joining our Discord to interact with the community.
Does glasskube create any of the AWS resources? Or does it have a terraform provider that's better than the helm provider? If neither are "yes" then I didn't get my point across, or you didn't parse out the important point.
I want a single code project that describes and deploys my monitoring stack. With terraform and the helm provider I can create cloud resources with terraform and deploy the kube resources using the terraform helm provider, using values that come from the outputs of the terraform cloud resources, in a single operation.
I don't think glasskube can replace helm in this instance. Wouldn't I have to split my monitoring stack into cloud ops and Kube ops, and manually paste outputs from terraform into glasskube configs?
When you make a change to a gigantic values yaml, it shows you the worst possible diff: the entire block removed, a whole new block added, even for a one-line change.
https://github.com/hashicorp/terraform-provider-helm/issues/...
Any time I touch the monitoring stack, which has several helm releases with large values blocks (kube-prometheus-stack, promtail, Loki, Mimir), it's absolutely nightmarish. The plan can be hundreds of lines that have to be diffed manually.
I've tamed terraform, mostly. Individual providers can be awful though.
There are two scenarios: a.) Kubernetes setups that operate the same software stack, with ongoing updates and regular releases, and b.) Kubernetes setups that frequently install new software/diverse software.
A package manager for a software stack that does not change that often (a) can move managed services like databases, message queues or even more complex observability tools inside the cluster with the same convenience as a managed service.
If your setup becomes more complex and changes often (b), I would recommend breaking it up into smaller pieces.
For both scenarios it makes sense to use Git to keep track of revisions and previous states of your Kubernetes cluster and incorporate review processes with pull requests.
Teams can also build applications with the app-of-apps pattern in Argo CD or with Flux Kustomizations; both feature concepts of dependencies. But these packages can then not be published and shared between different clusters and organizations that don't share a Git repository.
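For example, in Flux the dependency concept is just a dependsOn list between Kustomizations (names here are illustrative):

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  dependsOn:
    - name: cert-manager   # reconcile ./apps only after the cert-manager Kustomization is ready
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./apps
  prune: true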
There are multiple reasons and limitations with current tooling that we want to overcome.
We have abstracted all packages as custom resources and have a controller that reconciles these resources to (1) enable drift detection. Additionally, we use admission controllers (2) to validate dependencies and package configurations before they are applied to the cluster, while also working with custom resources to store and update the status of installed packages.
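In plain Kubernetes terms the admission part boils down to a validating webhook registered for the package resources, along the lines of this sketch (the concrete names, paths and resource plural are illustrative, not taken from the Glasskube repo):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: glasskube-package-validation   # illustrative name
webhooks:
  - name: validate.packages.glasskube.dev
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: glasskube-webhook         # illustrative service name
        namespace: glasskube-system
        path: /validate
    rules:
      - apiGroups: ["packages.glasskube.dev"]
        apiVersions: ["v1alpha1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["packages"]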
Genuinely interested. What problems did you have dealing with the standard reconciliation mechanism provided by ArgoCD and by k8s itself? I understand the advantage of the operator approach, but it might be hard to show the state in ArgoCD, and it somewhat breaks the idea of GitOps.
Can we benefit your project in a more limited but agentless way? Limiting the types and CRDs we allow in k8s makes operations better, especially with the aggressive upgrade cycle that k8s already imposes.
A deeper integration into Argo CD (similar to how Helm is integrated) will be needed in order to display all status conditions.
I don't think the idea of GitOps is broken: if the Glasskube package controller and all custom resources are versioned, you will always get a reproducible result.
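One concrete form such an integration could take is a custom health check in the argocd-cm ConfigMap that maps the package's status conditions to Argo CD health, along these lines (the group/kind and condition types are illustrative, not the actual Glasskube schema):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.packages.glasskube.dev_Package: |
    hs = {}
    hs.status = "Progressing"
    hs.message = "Waiting for the package to become ready"
    if obj.status ~= nil and obj.status.conditions ~= nil then
      for _, c in ipairs(obj.status.conditions) do
        if c.type == "Ready" and c.status == "True" then
          hs.status = "Healthy"
          hs.message = c.message
        elseif c.type == "Failed" and c.status == "True" then
          hs.status = "Degraded"
          hs.message = c.message
        end
      end
    end
    return hs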
> Can we benefit your project in a more limited but agentless way?
We are building a central package repository with a lot of CI/CD testing infrastructure to increase the quality of Kubernetes packages in general: https://github.com/glasskube/packages
Is Glasskube a reboot of Jenkins X?