Here we are, in a place and time where if you put the "—" character anywhere in your text, you will be burned at the stake like the OP for witchcraft.
For those hunting witches it doesn't matter whether you put in real effort and only used an LLM to fix grammar or do some research, while the thoughts and experience were your own. Maybe you are simply not that good at writing; still, they will take up pitchforks and torches, drag you out, and call you names.
When you plan to work 3-5 years at a single company, you don't care if it crashes and burns a month after you leave; you just move on to burn down the next one.
Conversely, we see the same dynamic from the engineers' side: they build stuff to pad their CVs and don't care whether the company can still support what they built after they leave.
I've genuinely lost count of the number of little vibe-coded things I've built and then failed to use, because it turns out I have limited bandwidth for fully trying out the quirky ideas I keep popping out through coding agents.
If you have a single app you want to run, then yes, it's silly to reach for K8s.
But if you have a beefy server or two that you want to utilize fully, packing in as many apps as possible without clashing dependencies, you want K8s, Docker, or other containers, with K8s letting you go further.
I think automatic scaling is useful for utilizing a server fully: apps that don't need resources automatically scale down, and apps that do can scale up.
I bet you can do this some other way, but it's a built-in feature of K8s.
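For reference, that built-in feature is the HorizontalPodAutoscaler. A minimal manifest might look roughly like this (the deployment name and thresholds are illustrative, not from the thread):

```yaml
# Scale the "app-a" Deployment between 1 and 5 replicas,
# targeting ~70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-a
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

When app A is idle it shrinks toward `minReplicas`, freeing CPU for apps B and C on the same nodes.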
There is very little need for auto-scaling when you run on pre-purchased VMs/servers. You've paid for all the compute, so you can run as many replicas as fit and still handle the projected amount of traffic.
There are no benefits to scaling down in this case. And scaling up won't help handle more load if you've already allocated all resources to running replicas. You need more machines, not more replicas on the existing machine(s).
It all comes down to simple, boring capacity planning and static resource allocation. Fewer moving parts results in fewer failure modes, hence more robust infra and less ops and maintenance work.
Your response sounds like you're talking about a single product / single application.
Say you have apps A, B, and C (N teams, N products), each developed by a different team, that you want to run on that one server. When app A doesn't have much traffic, apps B and C can use more of the compute. You also get deployment management aligned across all teams/products.
I thought both should be equally good at solving problems, but it turns out Cursor, with the same model selected, was somehow able to solve tasks that Copilot would get stuck on or loop over.
They have some tricks on managing file access that others don’t.
Cynics on HN easily dismiss AI service wrappers (and many of them are in fact overblown and not worth their own code). But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy. The biggest issue is that model providers also see what the community likes and often move on with their own offerings that are tailored to their own models, potentially at the training stage. So even if you have the best harness for something today, unless you are also a frontier LLM provider, there's zero guarantee you will still be relevant in the future. More like the opposite.
It's not like someone paid $60 billion for a product the way you pay for bananas at the store. They invested a much smaller amount and essentially bought an option to acquire. And even if you don't believe the company's assets are worth the current valuation, an acquisition can still make sense if you believe that valuation will go up further. And if they actually do acquire, it will probably still not be in cash. They'll just be swapping stocks. That is essentially how all startup funding works. There is nothing strange about this. It merely reached new dimensions thanks to AI.
> (...) writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy.
This. They are after the harness-engineering experience of the Cursor people; I'd assume they want to absorb all of that into Grok's offerings.
The value and the room for innovation on the harness side seems to be underestimated.
Oddly, the harness also affects model training, since even GLM/Z.ai, for example, (I suspect) train their model on the actual Claude Code harness. So the choices made by harness engineers affect the model. Kimi/Moonshot and OpenAI each make their own harness. Alibaba uses Gemini.
Something being harder to do doesn't by itself make it valuable. Sure, a big moat is important for value, but "difficult to do" is just one dimension of the picture.
It can use local/OSS models, but it doesn't make that simple (it's easiest with Ollama), and it's not clear what else you 'lose' by making that choice.
If you had a really good (big) local model, maybe it's an option, but in my experience the more common smaller (<32B) models run into similar problems: looping, losing context, and so on.
It's a nice TUI, but the ecosystem is what makes it good.
"But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy."
It is surprisingly easy to do it once someone else has done the work. Increasingly that's the nature of AI-based software engineering: point it at an existing tool and ask it to carefully duplicate features until it has parity. As you pointed out, frontier LLM companies happen to be well positioned to sell the resulting products.
>They have some tricks on managing file access that others don’t.
I thought it was a Windows thing. My Windows work computer is so heavily managed and monitored I assumed that was why Copilot stops being able to get terminal output or find the file I'm looking at. It's the same problem in IntelliJ and VSCode, with different models trying to find things in different ways.
Now that I think of it though, I've only used Copilot at work. At home I use Debian but I've never tried using Copilot. Claude, OpenCode, Gemini, and IntelliJ's AI Chat pointed at local Ollama models never have issues finding files or reading files and terminal output.
Their annualized revenue run rate is on track to surpass $6 billion by the end of 2026, so it's not ridiculous for them to be valued at $60 billion at some point. Also worth noting that if they do get access to SpaceX compute, they could start pretraining their own model. Composer is good, but it's built on top of Kimi 2.5.
I actually now think AI prompt writing in the IDE is complete overkill.
IDEs are made for a human to interact with code. Forcing tools that weren't built for this into this role is us trying to fit a square peg into a round hole.
Call me old, but don't put AI in my IDE. My IDE was made for a human, not an AI. For the established players it makes sense, sure, since they already have space on our machines. But for the new ones, IMO, the terminal or dedicated LLM interfaces are where it's at.
If I'm writing code, sure, suggest the next line. If the machine is writing code, let it, and just supervise properly, with an interface that plays to the strengths of each.
They're using the code intelligence from the IDE to run the AI, while Claude Code only does greps.
AI coding is much more than just the model: all the tools humans use in an IDE are also useful for the AI. Claude Code, on the other hand, just works with grep.
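For concreteness, here is a rough sketch (in Python; everything here is illustrative, not Claude Code's actual tooling) of the kind of grep-style search primitive a terminal agent relies on, as opposed to an IDE's semantic code intelligence:

```python
import pathlib
import re


def grep_tool(pattern: str, root: str = ".", glob: str = "**/*.py"):
    """Return (file, line number, line) for lines matching `pattern`.

    A minimal stand-in for the text-search tool a terminal agent
    exposes to the model: pure regex over file contents, with no
    knowledge of symbols, types, or references.
    """
    regex = re.compile(pattern)
    hits = []
    for path in sorted(pathlib.Path(root).glob(glob)):
        if not path.is_file():
            continue
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if regex.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

An IDE backed by a language server can answer "find all callers of `foo`" precisely; this tool can only find lines where the string `foo` happens to appear, which is the trade-off the comment is pointing at.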
Recent events make it quite clear that this time it is going to be different.
It was like you described earlier. Last year and this year it has basically been accumulating across multiple countries.
Swiss people are very upset with what is going on with their military spending in US. I do believe they will now be serious about reconsidering all other purchases from the US.
> Swiss people are very upset with what is going on with their military spending in US
Can confirm. As a Swiss person, I am flabbergasted at how the federal government keeps pushing for the new fighter jets to be F-35s, despite not only the US' current erratic behaviour in general but also how it has changed the terms of the purchase deal. Blows my mind, honestly.
I do feel a kind of sadness right now. It is a zombie, and the current owners are just squeezing out whatever is left of it.
I don't care about GH; I always felt centralized repositories like that are wrong.
Q/A was supposed to be centralized because we need people to find the questions and answers in a single place.
GH and the others should just refer to repositories, not keep them; they should be a search engine for decentralized repositories.