Hacker News | james2doyle's comments

There seems to be a "neuter" fork out there that disables a bunch of stuff: https://github.com/GarethCott/warp#whats-different-from-upst...

There seem to have been a number of people leaving them over that bait-and-switch. The claimed 20% turned out to be something closer to 2% or even 1%.

I noticed that too. I’ve been using cmux (based on ghostty, has a more basic UI) and have been happy with it: https://cmux.com/

I feel similar. I have been using cmux over the last few weeks: https://cmux.com/

Seems to strike a good balance for the way I want to use my terminal.


Love cmux, but lately it's been having performance issues: it stops responding for a few seconds when I switch between terminals in the same workspace. I got tired of it and just use ghostty now. I miss the notifications from cmux though :(

You’re paying the subsidized cost. Those margins will shrink once the real bill comes due. I really think everyone will look back at this time as the golden era of cheap AI. We are already seeing the costs (and restrictions/limits) creep up with the Western models.

I think the opposite. AI will get cheaper as models become more efficient and we solve the datacenter/energy problem. I bet that 10 years from now, AI that is way better than what we have today will be close to free.

Just like how cloud costs got cheaper and we solved the datacenter/energy problem over the past 10 years.

For the most part, we did, actually. We had plenty of energy and compute until AI came along.

Energy will get fully solved eventually. To think otherwise is to bet against humanity's ability to innovate, which I don't think is ever a wise bet.


ZIRP and Moore's Law helped the cloud build out with a promise of profits and ever-increasing performance. The future is likely different.

"Power will be cheap" is not a hope you can hang your hat on. We've been increasing compute per watt, but again, that's on Moore. I don't think it makes sense to bank on a new energy surplus.


Cloud did get cheaper. What are you saying?

I just ran a quick GPT check: EC2 prices have gone down by more than 80% after accounting for performance and inflation over the last 20 years.
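Taking the 80%-over-20-years figure above at face value, it works out to a modest-looking compound annual decline:

```python
# Sanity check: an 80% total price drop over 20 years, expressed as a
# compound annual rate (figures are the comment's claim, not verified)
total_decline = 0.80
years = 20

remaining_fraction = 1 - total_decline                 # 0.20 of the price left
annual_factor = remaining_fraction ** (1 / years)      # per-year multiplier
annual_decline_pct = (1 - annual_factor) * 100

print(f"~{annual_decline_pct:.1f}% cheaper per year")  # roughly 7.7%/yr
```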


You can practically host a website that serves millions of users a day for nearly free using Cloudflare. Imagine doing that in the year 2000.

It amazes me how productive it's possible to be using AI, but I also have this nagging feeling that we are being reeled into being so reliant on it that when the price starts going up, we will simply eat the cost.

The math is pretty simple, and it's easy to justify still paying the price even if it goes up 10 fold; compared to hiring more resources, it's still cheap.
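A sketch of that math with purely illustrative numbers (the subscription and hiring costs below are assumptions for the sake of the comparison, not real pricing):

```python
# Hypothetical figures, chosen only to illustrate the argument above
monthly_ai_cost = 200            # assumed coding-assistant subscription
price_multiplier = 10            # the "10 fold" increase discussed
monthly_hire_cost = 15_000       # assumed fully loaded cost of one hire

ai_after_increase = monthly_ai_cost * price_multiplier
ratio = monthly_hire_cost / ai_after_increase

print(f"AI at 10x: ${ai_after_increase}/mo vs hire: ${monthly_hire_cost}/mo "
      f"({ratio:.1f}x cheaper)")
```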

So I guess having multiple players and competition in the market is the key?


Going forward, models will start specializing. Anthropic will build a BioMed model for large drug companies. A math/compsci model for frontier theoretical research. A physics model for nuclear research. They can communicate with each other for synergy effects, e.g. in areas where math meets biomed. This will be cost-reducing as well. We plebs don't need advanced models for our plumbing-software work. The following example, applied to AI capabilities, will make it clear.

Does everyone need a graphing calculator? Does everyone need a scientific calculator? Does everyone need a normal calculator? Does everyone need GeoGebra or Desmos?


I don’t see why this would happen when the modern models already use MoE, which gives them most of the benefits of having specialized models.
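For context, a mixture-of-experts layer routes each input through a small gating network that activates only a few specialist experts, which is how one model captures much of the benefit of separate specialized models. A minimal sketch (illustrative only, not any real model's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: a router scores all experts, but only the top-k run
n_experts, d_model, top_k = 8, 16, 2
gate_w = rng.normal(size=(d_model, n_experts))            # router weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # one matrix per expert

def moe_forward(x):
    scores = x @ gate_w                    # one score per expert
    top = np.argsort(scores)[-top_k:]      # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts
    # only the selected "specialists" do any work for this input
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=d_model))
print(y.shape)
```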

As long as the Chinese exist and offer alternatives, I think we're going to be okay in terms of price, as long as you don't lock in to any American model.

Compute has been getting exponentially cheaper nonstop for decades. It's much more likely that current capabilities will be effectively free within 5-10 years.

> I really think everyone will look back at this time as the golden era of cheap AI.

Chinese models like Deepseek v4 are as good and 10 times cheaper. You can even run Deepseek locally. So no, cheap AI won't be over. Just the US investors won't be able to profit off of the artificial bubble that exists now but won't in the future.


> You’re paying the subsidized cost.

100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!


The open models of similar scales (ex. the new 1T deepseek model) are a fraction of the cost per token, so I don’t see how that can be the case. Inference is profitable, it’s the training that makes it unprofitable.

In the next round of ChatGPT advertisements, if they don't use AI generated images, then that means they don't believe in their own product, right?

I think "better" is doing a lot of heavy lifting in this argument. Better how?

Is an AI generated photo of your app/site going to be more accurate than a screenshot? Or is an AI generated image of your product going to convey the quality of it more than a photo would?

I think Sora also showed that the novelty of generating just "content" is pretty fleeting.


You should check out "model collapse". It seems that an abundance of content that is more and more AI generated these days may not be a viable option. There is also a vast amount of data that is increasingly going private or behind paywalls.

People love harping on this one, but model collapse hasn't turned out to be an issue in practice.

“It’s been a whole year or two and nothing bad has happened, checkmate doomers!”

It’s pretty shocking how much web content and forum posts are either partially or completely LLM-generated these days. I’m pretty sure feeding this stuff back into models is widely understood not to be a good thing.


What do you imagine distillation being then?

It feels like if it does happen, it will take a lot longer to show up. Also, I doubt they would ship a model that churns out corrupted stuff like this.

It won't mean we see the model collapse in public; more that we struggle to get to the next quality increase.


There have been symptoms of it, such as the colloquially named "piss filter" and the anime mole-nose problem, but so far they have been symptoms rather than a fatal expression of a disease. That they are symptoms, however, shows they could be terminal if exploited properly and profusely. So far we haven't seen anyone capable of the "profusely" part.

Besides, models get distilled for fun and profit all the time, which on its own does not support the theory of model collapse.

It doesn't seem like anything has changed to preclude it as a possible outcome yet.

I don't really understand why model collapse would happen.

I understand that if I have an AI model and feed it its own responses, it will degrade in performance. But that's not what's happening in the wild - there are extra filtering steps in between. Users upvote and downvote posts, people post the "best" AI generated content (that they prefer), the more human-sounding AI gets more engagement, etc. All of these things filter AI output, so it's not the same thing as:

AI out -> AI in

It is:

AI out -> human filter -> AI in

And at that point the human filter starts acting like a fitness function for a genetic algorithm. Can anyone explain how this still leads to model collapse? Does the signal in the synthetic data just overpower the human filter?
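That fitness-function framing can be made concrete with a toy simulation, under the simplifying assumptions that quality is a single scalar and that humans reliably keep only the top slice of outputs:

```python
import random

random.seed(42)

# Toy model of "AI out -> human filter -> AI in": each output has a
# latent quality score, humans keep only the best, and the next model
# "trains on" (moves toward) the survivors' average quality.
def generate(model_quality, n=100):
    return [random.gauss(model_quality, 1.0) for _ in range(n)]

def human_filter(items, keep_frac=0.1):
    # the filter acts like a fitness function: keep the top slice
    return sorted(items, reverse=True)[: max(1, int(len(items) * keep_frac))]

quality = 0.0
for generation in range(5):
    survivors = human_filter(generate(quality))
    quality = sum(survivors) / len(survivors)  # retrain on survivors

print(round(quality, 2))  # selection pressure pushes quality up, not down
```

In this toy setup quality ratchets upward each generation; whether the real-world filter is anywhere near that reliable, given the volume problem, is the actual point of contention.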


> Users upvote and downvote posts, people post the "best" AI generated content (that they prefer), the more human sounding AI gets more engagement etc. All of these things filter AI output

At the same time, though, AI generated content can be produced much, much faster than human generated content, so eventually AI slop drowns out everything else. You only have to check the popular social media platforms to see this in action: AI generated posts are widely promoted and pushed on users, the same way most web searches return results with AI generated pages ranked highly.

Humans can't keep up and companies are actively working to bypass the human filter and intentionally promote AI generated content.


The past is not a good predictor of future performance.

> You should check out "model collapse". It seems that an abundance of content that is more and more AI generated these days may not be a viable option.

Doom-saying about "model collapse" is kind of funny when OpenAI and Anthropic are mad at Chinese model makers for "distilling" their models, i.e. using their outputs to train their own models.


Totally different use cases. If you have nothing, getting 90% of a SOTA model is very valuable. If you have a SOTA model, it's just a worse model.

Isn't there a difference between: distilling specific AI input/output vs scraping whatever random AI output (with unknown input)?

I find it really confusing that the worker AI models on here: https://developers.cloudflare.com/workers-ai/models/ do not have full overlap with the ones on here: https://developers.cloudflare.com/ai/models/

Yes, you can see the same "hosted" ones on there, but when you look at the models endpoint, there are far fewer options under the "workers-ai/*" namespace. Is that intentional?


To clarify: I don’t see "workers-ai/@cf/google/gemma-4-26b-a4b-it" in the /models endpoint on gateway.ai.cloudflare.com, but it does seem to exist as a hosted model. Same with "workers-ai/@cf/nvidia/nemotron-3-120b-a12b", which I would expect to see.


Hey James.

Thanks for the feedback, and good catch. Looks like that endpoint is pulling from a slightly out-of-date data source. The docs/dashboard are currently the best resources for the full catalog, but we'll update that API to match.

