What great fortune to reach the top of HN and get product feedback, then get rate limited by GitHub (that was an easy fix though...). Then Hetzner had a partial outage, failing to create new instances, which caused most of the new clusters to get stuck in "creating" mode. Thanks everyone for the interest, I was not expecting this from a side project and I'll do my best to improve the platform in the future and address all feedback received.
lagniappe 7 hours ago [-]
It happens to almost all of us :) No worries, you did great! Thanks for sharing your project with us
camil 4 hours ago [-]
Thank you for your kind words! I really appreciate it.
That way we can use Raku as a scripting language for deployment.
tzahifadida 51 minutes ago [-]
For anyone interested, I am in the last stages of building a course built around Kube-Hetzner (https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne... 3k GitHub stars). Basically a lot of scripts that show how this works and how to perform backups, restores, etc., and a lot of exercises for common use cases and failure troubleshooting. Intentionally NOT abstracting anything away, so you can see how this works. Without understanding, you are going to get stuck.
Probably the easiest out there is https://github.com/vitobotta/hetzner-k3s. There are many options, depending on how low level you want to go. The Hetzner Terraform project is probably the most complex and complete, but it takes time to configure all of those. The main idea was to provide simplification, not just for Kubernetes provisioning on Hetzner, but also for the most common apps and tools that extend Kubernetes capabilities, like ingress controllers, Prometheus, Elasticsearch, databases and so on.
abound 14 hours ago [-]
There's also Talos, which also supports Hetzner [1] and is similarly streamlined. Not quite the same idea, but very similar.
This certainly looks like a pleasingly straight-forward way to spin up k8s.
I do notice that this deploys onto their cloud offering, which we've (https://lithus.eu) found to be a little shaky in a few places. We deploy clients onto their bare metal line-up which we find to be pretty rock solid. The worst that typically happens is the scheduled restart of an upstream router, which we mitigate via multi-AZ deployments.
That being said, there is a base cluster size under which a custom bare-metal deployment isn't really viable in terms of economics/effort. So I'll definitely keep an eye on this.
flowerthoughts 2 hours ago [-]
My biggest issue with k8s on Hetzner is that there is no way of going from tiny (1 machine) to medium (10-100 machines) purely on bare metal. I was able to get Gateway API to provide ingress nodes after a bit of Envoy tinkering, but storage is an issue. You can't run Rook Ceph on a single machine (too heavy), and I couldn't find a thin local-volume wrapper that lets me easily migrate from local volume management to distributed.
Feels like there should be a PV daemon that can do local, and transparent migrations and is lightweight enough to run on a single machine. Once my PV has been migrated to Ceph, the proxy could configure itself away from that job.
I agree, this is probably the most complete solution out there. My intention with this project is to provide various layers of abstraction, not only for Kubernetes provisioning, but also for the most common apps and tools that usually extend Kubernetes capabilities, while still allowing some low-level configuration options.
andix 14 hours ago [-]
Thanks for the feedback. I stumbled upon it when the project was quite new, and it looked promising.
kube-hetzner seems to be a bit stuck, they have a big backlog for the next major release, but it might never happen.
figassis 14 hours ago [-]
I have yet to see a guide to automate k8s on Hetzner's beefy bare metal instances. True, you want cattle, but being able to include some bare metal instance with amazing CPUs and memory would be great, and I do just that. My clusters include both cloud and bare metal instances. In the past I had used Hetzner virtual switch to create a shared L2 network between cloud and bare metal nodes. Now I just use tailscale.
But the TF and other tools are using the API to add and kill nodes, if you could pass a class of nodes to those tools that they know can't create but are able to wipe and rebuild, this would be ideal.
Any plans to expand further than Hetzner?
They're pretty restrictive on certain usages (e.g. VPNs). I'd be really interested in support for Datapacket, for example.
bflesch 11 hours ago [-]
I can't seem to figure out where this company is located and whether it is a scam or not. The website has no imprint and no contact address. There is one email address in the privacy statement, but it is "redacted by cloudflare". The privacy statement also says "Edka Digital S.L.", but gives no idea which country it is registered in.
For me it does not pass the smell test. No physical address, no idea who is running it, no idea if the company is indeed registered. The pricing FAQ at least talks about VAT, and I assume it is EU VAT, but it could be anything.
camil 11 hours ago [-]
Hello there, as I mentioned in the post, I built this as a side project by myself, and I'm running it as a freelancer registered in Spain; you can check my VAT number, ESY1848661G. I was planning to collect some feedback and honestly didn't expect such interest in the project. I will make the necessary adjustments to the privacy policy and terms of service. When I started this, I had in mind to convert it into a company, but I'm still running it as a freelancer. Thanks for your feedback! I will correct my mistake.
bflesch 11 hours ago [-]
Hey, thanks for your immediate reply. Congrats on starting your own business. If you're Spanish-based, maybe something like the "aviso legal" at [1] or the "legal notice" (imprint) at Hetzner [2] is needed so people can validate that you/your company actually exist.
I'm not familiar with Spanish S.L. (Sociedad Limitada) but it seems to be a private, share-based legal entity with minimum 3000 EUR share capital and at least one director. It seems the share capital does not need to be paid in full [3] which is a risk for potential customers if things go wrong.
If you're based in an EU country, I'd suggest clearly communicating all this legal information, because it makes it easier for potential customers to build trust in your services.
Thanks! I made a quick update to the Privacy Policy and Terms of Use. I will review all legal documents in depth in the following days. Meanwhile, you can check my legal entity information here: https://www.einforma.com/rapp/ficha/empresas?id=dWSG1MwtU312...
Personally, I trust companies more that put a name and face on their website too. So I can check if the person behind it is real (mostly using LinkedIn).
Lucasoato 7 hours ago [-]
A Hetzner employee once told me that they've been trying for years to develop their own Kubernetes-as-a-service solution. I wonder if they're still working on that or not.
mdaniel 6 hours ago [-]
Years?! what. the. actual. fuck
Well, I guess from a platform that has no intrinsic IAM offering, I take that back, I guess keeping track of whose special console password is the current one is, in fact, hard work
Seattle3503 7 hours ago [-]
Hetzner is working on their own managed offering too, but it doesn't seem like anyone has an idea when it will land.
1) What are the limitations of the scaling you do? Can I do this programmatically? I.e. send some requests to get additional pods of a specific type online?
2) What have you done in terms of security hardening? You mention hardened pods/clusters, but specifically: did you do a pentest? Just follow best practices? Periodic scans? Stress tests?
camil 13 hours ago [-]
Thanks for your questions!
1) The platform provides a control plane to help you deploy the cluster on your own Hetzner account, so you are in control of resources and pay direct usage costs to Hetzner.
2) Because you have full access to the Kubernetes cluster and it runs on your own Hetzner account, the security of the cluster is a shared responsibility, and you can fine-tune the configuration according to your requirements. The platform's security is totally our responsibility. We try to follow best practices and internal penetration tests were conducted, but we're still in beta and trying to see if there's interest in such a product before launching the stable version.
physix 11 hours ago [-]
This is a great idea. I really like it!
We considered reaching out in May, but held back because we want to run on bare metal.
Any chance to get this provisioned on bare metal at Hetzner?
We have K8S running on bare metal there. It's a slog to get it all working, but for our use case, having a dedicated 10G LAN between nodes (and a bare metal Cassandra cluster in the same rack) makes a big difference in performance.
Also, from a cost perspective. We run AX41-NVMe dedicated servers that cost us about EUR 64 per server with a 10G LAN, all in the same rack. Getting the same horsepower using Cloud instances I guess would be a CCX43, which costs almost double.
adamcharnock 11 hours ago [-]
We're setting up a data-heavy client at the moment who has a similar need. We're working with Hetzner's custom solutions team to provision a multi-AZ setup, with 25G networking and 100G AZ interconnects. Link in bio if you want to chat, email is adam@...
VoidWhisperer 11 hours ago [-]
Are you asking if it can provision bare metal servers with Hetzner in a similar way to what it is doing with cloud servers, or if it can manage clusters on your Hetzner bare metal servers? (In the case of the second, a tool like Rancher might be better.)
physix 10 hours ago [-]
I was thinking more of the former, whereby I "bring my own servers".
I haven't really thought it through yet, whether that even makes sense.
VoidWhisperer 10 hours ago [-]
That might be a bit challenging unless they sort out an integration directly with Hetzner, as I don't think their API supports anything related to bare metal provisioning, just cloud and 'storage boxes'.
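To make the distinction concrete: the Hetzner Cloud API lives at `api.hetzner.cloud` and covers cloud servers, volumes and load balancers, while dedicated (Robot) servers are managed through a separate webservice. A minimal sketch of building (but not sending) a Cloud API call with only the standard library; the token is a placeholder:

```python
from urllib.request import Request

def list_servers_request(token: str) -> Request:
    # Build (but do not send) a Hetzner Cloud API request. This API only
    # knows about cloud resources; bare-metal (Robot) servers are handled
    # by a separate webservice with its own credentials.
    return Request(
        "https://api.hetzner.cloud/v1/servers",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

req = list_servers_request("YOUR_TOKEN")  # placeholder token
print(req.full_url)  # → https://api.hetzner.cloud/v1/servers
```

Sending it with `urllib.request.urlopen(req)` would return the JSON server list, but that of course requires a real token.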
moondev 3 hours ago [-]
The Cluster API project is what you want. It's the holy grail of cluster lifecycle.
julienmarie 6 hours ago [-]
What is the difference from Syself.com? I was looking into them recently.
camil 3 hours ago [-]
I didn't have the chance to test their platform yet, but I expect it to be a mature product. My intention with this platform is to make it more accessible to developers and small companies that do not have Kubernetes knowledge yet or want to spin up clusters fast for development, testing, etc.
betaby 12 hours ago [-]
The site doesn't answer how storage is 'solved'. Does this solution use local folder provisioning when using PostgreSQL, for example?
camil 12 hours ago [-]
Sorry for that, I wasn't expecting such interest. There are still undocumented parts, but happy to answer any question. It uses https://github.com/hetznercloud/csi-driver to attach persistent volumes to PostgreSQL pods.
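For readers unfamiliar with the csi-driver: it installs a StorageClass (named `hcloud-volumes` by default) that dynamically provisions Hetzner block volumes for PersistentVolumeClaims. A sketch of such a claim, built as a plain Python dict for illustration; the claim name and size are hypothetical, not the platform's actual configuration:

```python
import json

# Sketch: a PersistentVolumeClaim that the Hetzner CSI driver would
# satisfy by creating and attaching a cloud volume. "postgres-data"
# and the 10Gi request are illustrative values only.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "postgres-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],  # Hetzner volumes attach to a single node
        "storageClassName": "hcloud-volumes",  # default class shipped by the csi-driver
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```

Applying the equivalent YAML with `kubectl apply` would create the volume; the JSON above is structurally identical to what the Kubernetes API accepts.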
__turbobrew__ 7 hours ago [-]
I wonder how trustworthy Hetzner distributed storage is. I always saw Hetzner as just a control plane to allocate bare compute and nothing more. I wouldn't necessarily trust their managed storage solutions, but I also don't have much experience with them.
Honestly I'm kind of surprised that something like Rook is not used instead, but I guess it is easier to trust Hetzner storage and hope for the best.
pwmtr 12 hours ago [-]
If you are looking for Postgres on Hetzner, you may want to check out Ubicloud.
We host in various bare metal providers, including Hetzner. (I am the lead engineer building Ubicloud PostgreSQL, so if you have questions I can answer them)
mfrye0 12 hours ago [-]
This is incredibly timely. I've been an AWS customer for 10+ years and have been having a tough time with them lately. Looking at potentially moving off and considering options.
My theory is that with Terraform and a container-based infra, it should be pretty easy with Claude Code to migrate wherever.
adamcharnock 11 hours ago [-]
This is exactly what we [1] do! We migrate clients out of AWS and into Hetzner bare-metal k8s clusters, and then we also become the client's DevOps team (typically for a lot less than Amazon charges)
I will say that there is a fair bit of lifting required to spin up a k8s cluster on bare metal, particularly for things such as monitoring and distributed block storage (we use OpenEBS). I would ballpark it as a small number of months.
It is likely easier on their cloud offering, but we've found that to be a little less reliable than we would hope.
I'm using AWS for small k8s clusters. I stay away from most of the "managed" AWS products except S3 and ECR. My k8s stack is packer + tofu + k3s + zfs: It's easy, concise, self managed, and costs are easy to predict.
deknos 13 hours ago [-]
Am I the only one who is confused about "Hetzner" in the title and "AWS KMS" in the body?
camil 12 hours ago [-]
Thanks for the feedback! Didn't plan to bring any confusion with that. The AWS KMS is used by the platform to encrypt/decrypt sensitive data before/after storing it in Vault and is part of the tech stack used to develop the platform.
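The pattern described here is envelope encryption: a data key encrypts the secret, and the data key itself is wrapped by a master key held in KMS. The real platform would call AWS KMS `Encrypt`/`Decrypt` (e.g. via boto3); the sketch below substitutes a toy XOR keystream for both AES and the KMS calls so it runs offline — illustration of the flow only, NOT real cryptography:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher: XOR with a SHA-256-derived keystream. A stand-in for
    # AES-GCM / the KMS service -- do not use this for actual secrets.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

MASTER_KEY = secrets.token_bytes(32)  # stands in for the KMS-held master key

def encrypt_secret(plaintext: bytes) -> tuple[bytes, bytes]:
    # Generate a fresh data key, encrypt the secret with it, then wrap
    # the data key with the master key (the "kms.encrypt" step).
    data_key = secrets.token_bytes(32)
    ciphertext = _keystream_xor(data_key, plaintext)
    wrapped_key = _keystream_xor(MASTER_KEY, data_key)
    return wrapped_key, ciphertext  # both are safe to store in Vault

def decrypt_secret(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    # Unwrap the data key (the "kms.decrypt" step), then decrypt.
    data_key = _keystream_xor(MASTER_KEY, wrapped_key)
    return _keystream_xor(data_key, ciphertext)
```

The point of the indirection is that Vault never sees the plaintext, and the master key never leaves KMS, so a Vault compromise alone does not expose secrets.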
deknos 1 hours ago [-]
It's more that if you put secrets on AWS, you are STILL dependent on AWS even if you run things on Hetzner. It would be better if you found a solution for secrets maintenance that runs on Hetzner.
marcosscriven 11 hours ago [-]
What are the connectivity options between heztner dedicated servers? I see they allow you to pay to have in a single rack, with a dedicated switch. But does that introduce a risk of single point of failure in the rack power or switch?
rumblefrog 11 hours ago [-]
I tried to deploy a small cluster in the US VA region, but the cluster status kept flipping between Failed and Creating with no clear way of troubleshooting it: 7ad975fb-3c8e-47a9-b03d-9e6bec81f0db
camil 11 hours ago [-]
Hello there, sorry for that, I will look into it right now.
slig 14 hours ago [-]
Congrats on shipping! I see that you have WordPress as a pro app. As someone who pays for WP hosting, what I'd like to see there is the ability to "fork" a WP instance, media, DB, everything, with a new hostname, that I can try things, updates, etc.
camil 14 hours ago [-]
Thanks! WordPress will be available for free; it is not currently finished. It will probably be ready next week.
czhu12 14 hours ago [-]
Is this deploying k3s or full Kubernetes, with control plane and workers on different instances?
camil 14 hours ago [-]
It is a ready-to-use Kubernetes setup with a separate control plane and node pools.
andix 14 hours ago [-]
k3s does support running separate control plane and worker node pools. It's not just for toy-project clusters, or single node clusters. k3s can also power rather big clusters.
barbazoo 14 hours ago [-]
Love how focussed this is.
I would never have guessed that the overlap between the circle of people wanting to run a prod workload on a K8s cluster and folks who need a GUI to set up and manage a K8s cluster would be that big, but it looks like I might be wrong.
0x457 14 hours ago [-]
> I would never have guessed that the overlap between the circle of people wanting to run a prod workload on a K8s cluster and folks who need a GUI to set up and manage a K8s cluster would be that big, but it looks like I might be wrong.
Count how many GKE and EKS users are out there.
tormeh 13 hours ago [-]
Surely the appeal is more that someone will fix things if your k8s installation breaks?
zft 14 hours ago [-]
Congratulations on the launch!
Are there plans to support GitLab and the GitLab registry (or any registry)?
camil 14 hours ago [-]
Thank you! Yes, both are planned. The registry will be a very easy implementation.
rumblefrog 12 hours ago [-]
I wonder how long before Hetzner adds something like managed Kubernetes to their native product line. They already have S3-compatible object storage, load balancers and more.
hobofan 44 minutes ago [-]
Given how rarely they offer specific software solutions and at what pace, I would say 5+ years from now or never.
No idea about the timing but I imagine it's coming.
Would make a lot of sense, especially if you can combine it with the hardware servers. You could get a lot of grunt in your cluster for a lot less than for example AWS.
marcosscriven 11 hours ago [-]
When I was looking into this, I instead setup Proxmox on Hetzner (which you can do natively from ISO).
From there it was much easier just using it for whatever I wanted, including K3S
everfrustrated 13 hours ago [-]
Has anybody found a good way to use encrypted disks with Hetzner yet?
What is the threat model you want to mitigate using encryption at rest? Is it that a physical disk is not properly wiped after usage? Then you could just use LUKS and store the key anywhere else, e.g. on another machine or an external volume…
winrid 13 hours ago [-]
Their installer script supports LUKS.
Set up dropbear, and have another encrypted instance that runs a cron job every minute to check for the dropbear port on all instances; it then SSHes in and passes the key to boot.
This is what I do for fastcomments anyway, for OVH and Hetzner.
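The unlock loop described above can be sketched roughly as follows. The hostnames, the dropbear port (2222 is a common choice), and the `cryptroot-unlock` command are all assumptions about a typical Debian-style initramfs setup; adjust to your own:

```python
import socket
import subprocess

HOSTS = ["node1.example.com", "node2.example.com"]  # hypothetical hosts

def dropbear_waiting(host: str, port: int = 2222, timeout: float = 2.0) -> bool:
    # True if something is listening on the dropbear port, i.e. the
    # machine is sitting in its initramfs waiting for the LUKS key.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unlock(host: str, passphrase: str) -> None:
    # SSH into the initramfs dropbear and pipe the passphrase to
    # cryptroot-unlock (assumed command; varies by distro/setup).
    subprocess.run(
        ["ssh", "-p", "2222", f"root@{host}", "cryptroot-unlock"],
        input=passphrase.encode(),
        check=True,
    )

if __name__ == "__main__":
    for h in HOSTS:
        if dropbear_waiting(h):
            unlock(h, "example-passphrase")  # the key should come from somewhere safer
```

Run from cron every minute, this approximates the described flow: a rebooted node answers on the dropbear port, gets unlocked, and then the port stops answering once the real OS boots.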
adamcharnock 11 hours ago [-]
To answer from a Kubernetes perspective: Both OpenEBS Mayastor and LocalZFS now support disk encryption.
bflesch 11 hours ago [-]
Encrypted disks are easily set up with Arch Linux + LUKS + tinySSH; you can remote-unlock via SSH.
kopadudl 14 hours ago [-]
Typo: One Cluser always free
camil 12 hours ago [-]
Fixed, thanks!
rumblefrog 12 hours ago [-]
This looks great! Haven't tried it yet, but should I presume this also does k8s and OS updates for me? Or how managed is it?
camil 11 hours ago [-]
Thanks for the feedback! The platform is mostly self-service, but it is very easy to upgrade the Kubernetes version: just change the version in the cluster configuration. For OS updates, you can replace the nodes and they will automatically pick up the latest OS image from Hetzner. I also run it isolated for some small companies as a fully managed service, so that option is available as well.
JanMa 11 hours ago [-]
A bit off topic, but you might want to rethink the name. It is very close to EDEKA, the largest German supermarket chain. They have a very large IT division (https://it.edeka) and judging from the name of your project I was expecting it to be one of their projects.
camil 11 hours ago [-]
Well, I've had this name since 2011, and in 2018 a new disease was labeled EDKA (that is the first result you get when you google "edka"). I also became aware of the German supermarket a few years later. I could consider changing it at some point, but it is very hard to find something available these days...
physix 10 hours ago [-]
me too
CuriouslyC 14 hours ago [-]
Why would I use Edka vs using Linode's free Kubernetes offering?
camil 13 hours ago [-]
This was designed for Hetzner, which I still believe has the best offer on the market comparing price, performance and stability. On top of that, the platform offers some ready-to-deploy add-ons that simplify the configuration after the initial cluster provisioning.
chatmasta 12 hours ago [-]
What Hetzner-specific functionality did you need to design that you wouldn’t need in a “deploy to arbitrary set of VMs” scenario?
camil 3 hours ago [-]
Hetzner was an easy choice because you can attach persistent volumes, expose services using their load balancers, servers are fast and easy to provision, and they probably have the best pricing. I have been running multiple clusters on Hetzner for over 4 years now and have only had minor issues. Sometimes they do not have enough instances in a specific region, sometimes provisioning new instances can be delayed, or they send emails asking to reboot instances due to patches to their hypervisors. But most of the time it runs stably. A few of my clusters have had 100% uptime for more than 2 years.
czhu12 13 hours ago [-]
Linode pricing is probably 3-4x more expensive than Hetzner, which does not offer managed Kubernetes.
upa11 14 hours ago [-]
Great job. Love the project
camil 13 hours ago [-]
Thank you!
boredhacker3 14 hours ago [-]
Exactly what I was looking for. I will give it a shot!
camil 13 hours ago [-]
Thank you! Please feel free to ask any questions.
sciencesama 12 hours ago [-]
Is there a self-hosted version of this?
import 10 hours ago [-]
You can just install k3s maybe?
zgk7iqea 13 hours ago [-]
typo on the website: one cluser always free
throwmeaway222 11 hours ago [-]
Great job!
latchkey 11 hours ago [-]
Great work. Just tried to email support@ and it bounced.
camil 2 hours ago [-]
Thanks for letting me know. Apparently there was a wrong permission set for the Google group.
21sys 14 hours ago [-]
I can't find this Spanish (?) company in the company register, and none of the legally required information is on the website. Not very trustworthy for a SaaS that stores your data and access keys. I'm confident that this is only a startup "day one" issue, but in times of increased scams and extortion, can I be sure? Nope.
camil 13 hours ago [-]
Hello there! Fair enough. As I mentioned in the original post, I built this as a side project, by myself, and I run it as a freelancer registered in Spain. It is not hard to find my public profile. You can check my Spanish VAT number, ESY1848661G. This is still in beta and currently looking to collect feedback and see if there is any interest in the market, before scaling it to a company. Thank you!
Off topic: k8s aside, what are people using to receive webhooks from GitHub/Gitea/GitLab and do builds/deploys? Is the generally accepted way to put deploy credentials into CI secrets and do it that way?
mdaniel 6 hours ago [-]
I'm sure for 10 people you'll get 15 answers, but for my money OIDC is the way, the truth, and the light. GitHub and GitLab offer it, one can have federated auth from within a k8s Pod to anything that trusts OIDC, and realistically one can do it from anything that has intrinsic identity. That's also how AWS Identity Anywhere works, just with more X509
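The OIDC flow described here rests on a signed JWT whose `iss`, `aud`, and `sub` claims the relying party (e.g. AWS STS) validates against the issuer's published keys. A sketch of inspecting those claims from a token — signature verification is deliberately omitted (the relying party does that), and the token below is fabricated for illustration:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    # Decode the middle (payload) segment of a JWT. No signature check
    # here -- this is for inspection only; verifiers must check the
    # signature against the issuer's JWKS.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A fabricated, unsigned token with claims shaped like a Kubernetes
# projected service-account token (issuer/audience values are made up):
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
claims = {
    "iss": "https://oidc.example.com",
    "aud": "sts.amazonaws.com",
    "sub": "system:serviceaccount:default:deployer",
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"{header}.{payload}.signature"

print(jwt_claims(fake_token)["aud"])  # → sts.amazonaws.com
```

The trust relationship is then configured on the relying side (e.g. an AWS IAM role trusting that issuer and audience), so no long-lived deploy credentials ever sit in CI secrets.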
Join the waiting list here: https://shipacademy.dev
[1] https://www.talos.dev/v1.10/talos-guides/install/cloud-platf...
It's not the smoothest thing I've ever used, but it's all self hosted and everything can be fixed with some Terraform or SSH.
Great to see some managed Kubernetes on Hetzner!
I'm using it right now
[1] https://www.hola.com/aviso-legal/ [2] https://www.hetzner.com/legal/legal-notice/ [3] https://www.lawants.com/en/sl-spain/#:~:text=minimum%20share...
Also here: https://ceo.oepm.es/detalleExpediente?numExp=N0486066
https://www.reddit.com/r/hetzner/comments/18yhy89/seems_like...
Happy to chat more: adam@...
[1] https://lithus.eu
https://docs.hetzner.com/cloud/load-balancers/overview#:~:te...
I really loved this talk about using Let's Encrypt for IAM Anywhere https://www.youtube.com/watch?v=M1hXUcBMf1Q
I have personally also set up EKS Anywhere <https://github.com/aws/eks-anywhere#readme> with OIDC, so one need not have a "smart cloud" to get that done, but it places the burden of securing the cluster's identity upon the operator https://gitlab.com/-/snippets/2302594
Triple. 1 or 2 nodes gives a failure tolerance of zero.
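The "triple" recommendation falls out of etcd's quorum arithmetic: a cluster of n members needs a majority alive to accept writes, so it tolerates floor((n-1)/2) failures. A minimal sketch of the math:

```python
def quorum(members: int) -> int:
    # Minimum number of live members etcd needs to accept writes (a majority).
    return members // 2 + 1

def failure_tolerance(members: int) -> int:
    # How many members can be lost while still keeping quorum.
    return members - quorum(members)

for n in (1, 2, 3, 5):
    print(f"{n} members -> tolerates {failure_tolerance(n)} failure(s)")
```

This is also why 2 control-plane nodes are no better than 1 (both tolerate zero failures), and why even member counts are generally avoided: 4 nodes tolerate the same single failure as 3 while adding another machine that can break.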