Hacker News | ownagefool's comments

To be fair, I have k8s on my hetzner :p

It depends what you're doing with it.

My app is a fairly simple node process with some sidecar worker processes. k8s enables me to deploy it 30 times for 30 PRs, trivially, in a standard way, with standard cleanup.

Can I do that without k8s? Yes. To the same standard with the same amount of effort? Probably not. Here, I'd argue the k8s APIs and interfaces are better than trying to do this on AWS ( or your preferred cloud provider ).
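
As a sketch of that per-PR pattern ( the names, image, and port here are hypothetical, not the actual app ): each PR gets its own namespace, so the "standard cleanup" is just deleting that namespace when the PR closes.

```yaml
# pr-42.yaml - one namespace per PR; delete the namespace to clean up.
apiVersion: v1
kind: Namespace
metadata:
  name: pr-42
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: pr-42
spec:
  replicas: 1
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: app
          image: registry.example.com/app:pr-42
          ports:
            - containerPort: 3000
```

CI runs `kubectl apply -f pr-42.yaml` on every push, and `kubectl delete namespace pr-42` when the PR closes.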

Where things get complicated is that k8s itself is borderline cloud-provider software. So teams who were previously fine using a managed service are now owning more of the stack, and these random devops heroes aren't necessarily making good decisions everywhere.

So you really have three obvious use cases:

a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user.

b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip.

c) You want cloud semantics without being on a cloud provider.

However, if you're a single developer with a single machine, or a very small team happy working through contended static environments, you can pretty much just put a process on a box and call it done. k8s is overkill here, though not as much as people claim, until the devops heroes start their work.


Call me old fashioned, but I prefer tools like Dokploy that make deployment across different VPSes extremely easy. Dokploy allows me to utilize my home media server, using local instances of forgejo to deploy code, to great effect.

k8s appears to be a corporate welfare jobs program that only trillion-dollar multinational monopolistic companies can collectively spend hundreds of millions sustaining. Since most companies aren't trillion-dollar monopolies, adopting such measures seems an extremely poor choice.

All it signals to me is that we have to stop letting SV + VC dictate the direction of tech in our industry, because their solutions are unsustainable and borderline useless for the vast majority of use cases.

I'll never forget the insurance company I worked at that orchestrated every single repo with a k8s deployment, whose cloud spend was easily in the high six figures a month to handle a workload of 100k MAU where the concurrent peak never went above 5,000 users, something the company knew from 40 years of records. It literally had a 20-person team whose entire existence was managing the company's k8s setup. The only reason the company could sustain this was that it's an insurance company (insurance companies are highly profitable, don't let them convince you otherwise; so profitable that the government has to regulate how much profit they're legally allowed to make).

Absolute insanity, unsustainable, and a tremendous waste of limited human resources.

Glad you like it for your node app tho, happy for you.


K8s is just a standardized api for running "programs" on hardware, which is a really difficult problem it solves fairly well.

Is it complex? Yes, but so is the problem it's trying to solve. Is its complexity still nicer and easier to use than the previous generation of multimachine deployment systems? Also yes.


I wrote a scheduler for VMs a long time ago. k8s is basically just the same thing but for containers.

It really confuses me how someone can argue for cloud providers over a decent open solution without realising their argument is simply that they don't want to be managing the thing.

And that's fine, most teams shouldn't be neck deep in managing a platform. But that doesn't make the solution bad.


K8s isn't even hard! My team of three manages everything on K8s and we spend ~0 minutes per week on it. Write a script to generate some YAML files, stick it in a CI pipeline, and it's basically fire-and-forget.

You're going to want most of what K8s has anyway: blue-green deployments, some way to specify how many replicas you want, health checks, etc.

The initial setup cost is annoying if you've never done it before, but in terms of maintenance it's very very easy.
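
The "script to generate some YAML files" can be tiny. A sketch in Python ( the app name, image, port, and /healthz path are made up ) that emits a Deployment with replicas and a health check; JSON is a valid subset of YAML, so json.dumps keeps the script dependency-free:

```python
import json

def deployment(name: str, image: str, replicas: int, port: int) -> dict:
    """Build a k8s Deployment manifest with replicas and a readiness probe."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                        # Rolling updates wait on this probe before
                        # shifting traffic to new pods.
                        "readinessProbe": {
                            "httpGet": {"path": "/healthz", "port": port},
                        },
                    }],
                },
            },
        },
    }

# JSON is valid YAML, so kubectl apply accepts this output directly.
print(json.dumps(deployment("myapp", "myapp:1.2.3", 3, 8080), indent=2))
```

Pipe that into `kubectl apply -f -` from CI and you have the fire-and-forget loop described above.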


Are you using a service like GKE, or running your own cluster? I've set up clusters "from scratch" in dev environments to get familiar with it.

We do both: managed Kubernetes when it's available (AWS, Nebius, others), but some hardware vendors just give us raw machines, so we self-host K3s on their nodes. We're an open-source LLM inference company so we're basically always scrambling for GPUs wherever we can get them, which means we need to be fairly scrappy with what we support while still having a semi-sane interface for ourselves internally. Kubernetes makes that pretty easy: onboarding a new vendor takes ~minutes, and then everything Just Works and we can interact with the pool of compute the same way we do every other pool since the K8s API is standard, with all of our built-in prod monitoring tools immediately set up and running.

That being said I love exe.dev and have been a happy customer since launch. It's a different use case but they do an amazing job at it. Very, very easy personal cloud dev box. But K8s is very very good too, just for production workloads rather than personal ones!


This makes a lot of sense and is like the perfect use case for programming towards a "standard api".

I run it at home and at work, and while I do hate installing it, once that part is done I've never run into these problems that people claim require a 20-person(!) team to babysit it. Maybe my scale is too small or whatever, but it's hard not to think that maybe they are just "holding it wrong"...


Just as a quick aside, I tried Coolify, Dokploy, Dockge, and Komodo, and if you're trying to do a Heroku-style PaaS, Dokploy is really good. Hands down the best UX for delivering apps & databases. It's too bad about the licensing. (e.g. OIDC + audit logs behind a paid enterprise license.)

Coolify is full of features, but the UX suffers and they had a nasty breaking bug at one point (related to Traefik if you want to search it.) Dockge is just a simple interface into your running Docker containers, and Komodo is a bit harder to understand and to come up with a viable deployment model for, and has no built-in support for things like databases.


If you're open, I'd love to get your thoughts on https://miren.dev. We've been doing similar things, but leaning into the small-team aspects of these systems, along with giving folks an optional cloud tie-in to help with auth, etc.

How is it monetized? I read something about open core and paying for additional services I think. What are those and where can I find them?

I use Cosmos Cloud on a free 24GB Oracle VM. Nice UI, solid system.

Cosmos Cloud looks neat! At a first glance from looking at the web page, it looks more focused on delivering a "personal cloud" or "1-click deploy apps."

Dokploy is more Heroku-styled: while you can deploy third-party apps (it's just Docker after all), it seems really geared towards and intended for you to be deploying your own apps that you developed, alongside a "managed" database (meaning, the DB is exposed in the UI, includes backup functionality, and can even be temporarily exposed publicly on the internet for debugging.)

Coolify feels a bit like a mix of the two deployment models, while Dockge is "bring your own deployment" and Komodo offers to replace Terraform/Ansible/docker-compose through its own declarative GitOps-style file-based config but lacks features like managed databases or built-in subdomain provisioning.


Isn't Dokku a worthy mention anymore?

For better or worse, folks _really_ like a free UI. Dokku doesn't offer that (Dokku Pro is paid). With AI increasingly making that sort of thing easier to build - and Dokku being very easy to integrate via MCP but also good for building tools on top of - I'm not actually sure how to proceed with Dokku Pro.

Whether it's a worthy mention or not, I'm not sure. I'd like to think it's worthy :)

Disclaimer: I am the maintainer.


I took over tech for a POS company some years ago. They were a .net shop with about 80 developers, less than 200 concurrent connections, six-figure cloud spend, and 0 nines uptime with a super traditional setup.

Point being, it's not the tools that cause the problem.


Read this as a Piece of sh... company. Then I saw a response from someone saying they're a POS developer and was like, oh, I think he means point of sale.

Or that guy is just a really bad programmer.


I worked at a company that developed a niche POS as part of a larger system. It was, by far, the worst part of the code base. Just imagine a bunch of late 90's era Java 1.2 code, complete with a Swing UI, tons of concurrency issues, singleton objects and synchronized blocks all over the place, custom binary protocols...

Both work (:

But the point was it was in a comparable situation without the microservices / k8s / whatever pet tech you want to hate on.


Just curious, are you still looking for developers? Asking as someone who is a developer that works with POS systems.

I no longer work in that industry.

> I'd argue the k8s APIs and interfaces are better than trying to do this on AWS

I think Amazon ECS is within striking distance, at least. It does less than K8S, but if it fits your needs, I find it an easier deployment target than K8S. There's just a lot less going on.


I ran renderapp in ECS before I ran it in k8s.

The deployment files / structure were mostly equivalent with the main differences being I can't shell into ECS and I lose kubectl in favour of looking at the AWS GUI ( which for me is a loss, for others maybe not ).

The main difference is k8s has a lot of optionality, and folks get analysis paralysis with all the potential there. You hit this quickly in k8s when, say, you actually need to install the addon to get CloudWatch logs.

This is also where k8s has sharp edges. Since Amazon takes care of the rest of the infrastructure for you in ECS, you don't really need to worry about contention starving node resources and killing your logging daemon, which you technically could do in k8s.

However, you'll note that this is a vendor choice. EKS Auto Mode does away with most of the addons you need to run yourself, simplifying k8s, moving it significantly closer to a vendor supported solution.


> I can't shell into ECS

Is there a specific reason why you can't shell into ECS? IIRC, I was able to do so by following the guide [0].

[0] https://aws.amazon.com/blogs/containers/new-using-amazon-ecs...


No, I was simply wrong. Thanks for pointing that out.

If you use Kubernetes, shelling into an instance from the k9s CLI is pressing "s" with the instance highlighted in the TUI. It's great. I haven't found a shrink-wrapped tool for ECS that's as good/easy as k9s is for Kubernetes.

That or Fargate if you're just running a few containers.

Totally, it's all about the primitives. I'm curious whether exe.dev is gonna build on the base, or just leave it up to folks to add all their own bespoke stuff to do containers, logs, etc.

The last 20 years have given us a lot of great primitives for folks to plug in. I think lots of people don't want to wrangle those primitives, they just want to use them.


> a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user. b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip. c) You want cloud semantics without being on a cloud provider.

This is well put and it's very similar to the arguments made when comparing programming languages. At the end of the day you can accomplish the same tasks no matter which interface you choose.

Personally I've never found kubernetes that difficult to use[1]. It has some weird, unpredictable bits, but so do sysvinit and docker; that just ends up being whatever you're used to.

[1] except for having to install your own network mesh plugin. That part sucked.


Sure, but you can do that in a diff after the event, rather than live.

Nobody really tries to get technical people to do the work.

Like cool, it's a great idea and would potentially produce positive results if done well, but the roles pay half what the engineering roles do, and the interviews are stacked towards compliance frameworks.

There's very little ability to fix a large public company when HR is involved.


Maybe it should be treated like on-call duty and have the load spread between existing engineers on some kind of schedule, maybe with some extra comp as incentive because it's boring and will take more effort/time in the "easy case" compared to pager duty.


I think 12-24 month rotations would work really well, but given how the profession is currently setup, that would be difficult to do.


Speaking as a technical (data) person currently working in internal audit for a not quite public company, it's not entirely uncommon.

I do agree that the pay isn't great, but it's the fact that it's considered a cost centre that's been the issue for me.


Everything except for sales tends to be seen as a cost centre. It's ridiculous.


In more practical terms:

1. Users - your users table may not benefit from being ordered by a created_at ( or uuidv7 ) index, because whether or not you need to query that data is tied to the user's activity rather than when they first onboarded.

2. Orders - the majority of your queries are on recent orders, or are historical reporting-type queries, which should benefit from a created_at ( or uuidv7 ) index.

Obviously the argument is then that you're leaking data in the key, but my personal take is this is overstated. You might not want to tell people how old a User is, but you're pretty much always going to tell them how old an Order is.
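
The leak is concrete, for what it's worth: a UUIDv7's first 48 bits are a Unix timestamp in milliseconds, so anyone holding the ID can recover the creation time. A sketch ( the example value is hand-built here, since stdlib uuid7 generation is only recently available ):

```python
import datetime
import uuid

def uuid7_timestamp(u: uuid.UUID) -> datetime.datetime:
    # UUIDv7 stores Unix milliseconds in the top 48 of 128 bits.
    ms = u.int >> 80
    return datetime.datetime.fromtimestamp(ms / 1000, tz=datetime.timezone.utc)

# Hand-build a v7 UUID for 2023-01-01T00:00:00Z (1672531200000 ms):
# timestamp in bits 80..127, version 7 in bits 76..79, RFC variant in 62..63.
ms = 1672531200000
u = uuid.UUID(int=(ms << 80) | (7 << 76) | (2 << 62))
print(uuid7_timestamp(u))  # 2023-01-01 00:00:00+00:00
```

So the question really is the one above: does the row's age matter? For Orders, usually not a secret; for Users, maybe.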


To be fair, many human-written tests I've read do similar.

Especially when folks are trying to push %-based test metrics and have types ( and thus the tests assert types where the types can't really be wrong ).

I use AI to write tests. Many of the e2e ones fell into the pointless niche, but I was able to scope my API tests well enough to get a very high hit rate.

The value of said API tests isn't unlimited. If I had to hand-roll them, I'm not sure I would have written as many, but they test a multitude of 400s, 401s, 402s, 403s, and 404s, and the tests themselves have absolutely caught issues such as a validator not mounting correctly, or the wrong error status code due to check ordering.
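
The "check ordering" class of bug is easy to show with a toy handler ( entirely hypothetical, not the actual app ): if validation runs before auth, an anonymous bad request gets a 400 where the API contract says 401, and only a status-code test catches it.

```python
def handle(request: dict) -> int:
    """Toy handler: auth must be evaluated before payload validation."""
    if "token" not in request:
        return 401  # unauthenticated wins over invalid payload
    if "body" not in request:
        return 400  # authenticated but malformed request
    return 200

# The kind of assertions the AI-generated API tests make:
assert handle({}) == 401                          # not 400: auth checked first
assert handle({"token": "t"}) == 400
assert handle({"token": "t", "body": {}}) == 200
```

Swap the two `if` branches and the first assertion fails, which is exactly the wrong-status-code bug described above.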


I haven't looked at the pfSense UI, but you can happily hand out a prefix to a device, which can then hand out its own prefixes. I do it with my k8s clusters, which means the nodes themselves have enough IP addresses to launch their own routable k8s clusters.
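
A rough illustration of the arithmetic with Python's ipaddress module ( the prefix is a documentation address, not a real allocation ): a delegated /56 carves into 256 /64s, plenty for one routable /64 per node.

```python
import ipaddress

# A delegated /56, split into /64s - one per node, each big enough
# for that node to route its own pods or nested clusters.
prefix = ipaddress.ip_network("2001:db8:0:100::/56")
node_prefixes = list(prefix.subnets(new_prefix=64))

print(len(node_prefixes))   # 256 /64s available
print(node_prefixes[0])     # 2001:db8:0:100::/64
```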


https://renderapp.io/

A platform for digital asset management, review and workflow. Current features focus primarily on review of images aimed at automotive configurators.

The problem is generic; however, our USP is that we have a couple of enterprise customers that upload packs of 60k+ assets per round, and thus we aim to help them discover what demonstrably changed.

A bit like GitHub, only working with images, videos, and other digital assets rather than text files.


Honestly, I think that's people reacting to the market more than it's the market reacting to people.

If your average zoomer had the ability to get a job for life that paid comparably well at a company that would look after them, I don't think loyalty would be an issue.

The problem is that today, sticking with a company typically means below-market reward, which is particularly acute given the ongoing cost of living crisis affecting the West.


I suspect they were more referring to curmudgeons not patching.

I was engaged after one of the world's biggest data leaks. The security org was hyper-worried about the cloud environment, which was in its infancy, despite the fact their data leak was from an on-prem mainframe-style system, and they hadn't really improved their posture in any significant way despite spending £40m.

As an aside, I use NATS for some workloads where I've obviously spent low effort validating whether it's a great idea, and I'm pretty horrified by the report. (=

