Hacker News | past | comments | ask | show | jobs | submit | ipaddr's comments

Tried the same prompt and ended up nowhere close on the free plan.

Is there a known lag before the Pro plan's abilities migrate to the free plan?

As far as consumer access goes, GPT 5.5 Pro is not available on any plan outside the ChatGPT Pro ($100 or $200) tier or the API.

Yes, but don't we expect GPT 5.5 Pro will eventually reach the free tier? Maybe I'm missing something because I only use the free tier. But the free tier has gotten way better over the last few years. I'm pretty sure, based on descriptions on this site from paid subscribers, that the free tier now is better than the paid tier of, say, two years ago. That's the lag I'm wondering about.

Free ChatGPT is like a fast car with a barely responsive steering wheel. The guardrails on that thing are insane, even for math. It won't let you think. It will try to fix mistakes you haven't even made yet, based on intent that was ascribed to you for no reason. It veers off in some crazy directions thinking that's what you meant, and trying to address even a little bit of that creates almost a combinatorial explosion of even more wrong things. That's why I stick to Claude. The latter is chill and only addresses what you typed. It isn't verbose, and it actually asks what you're getting at with your post. That said, ChatGPT is more technical and can easily solve math problems that stump Claude.

So this doesn't happen in the paid plans of ChatGPT? But why?

Paid plans give you access to much larger, more intelligent models which have thinking enabled (inference time compute). In the example here you can see GPT Pro taking 20-80 minutes to respond with the proof.

All this is far more expensive to serve so it’s locked away behind paid plans.


> thinking enabled (inference time compute)

What do you mean by compute?


I would Google or use ChatGPT to learn more about this; the free version should be totally sufficient.

I do not think this is true. You will continue to get smaller, cheaper-to-host models in the free tier that are distilled from current and former frontier models. They will continue to improve, but I’d be very surprised if, e.g., 5.4-mini (I think this is the free tier model) beat o3 on many benchmarks, or real world use cases.

I won’t even leave ChatGPT on “Auto” under any circumstances; it’s vastly worse on hallucinations, sycophancy, basically everything.

Anyway, your needs may be met perfectly fine on the free tier product, but you’re using a very different product than the Pro tier gets.


You should pay for it if you find value in it.

They pay for it with their personal data.

Tangential but I learned today that GPT-5.5 in ChatGPT (Plus) has a smaller context window than the one in the API. (Or at least it thinks it does.)

I'd guess / hope the Pro one has the full context window.


Notably, 5.5 on the API charges a higher price for context beyond what ChatGPT gets, and 5.5 Pro on the API does not differentiate based on context size (it’s eye-bleedingly expensive already :)

Do not use the free plan. It is not good.

Does the free plan even have access to thinking models?

Technically yes, gpt-5.4-mini is available on the free plan

Was this a surprise?

His neighbour isn't spending $60,000 on all of those together.

Count the Fords on the street.

Now count the Amazon deliveries in a year on that same street. And next year, and the year after, and... however long one keeps a Ford these days.

It's quite a scary thought exercise.


The average person spends $2,800 a year with Prime or $1,100 without. 75% of Amazon shoppers have Prime, so the blended average is about $2,400 a year. Amazon collects roughly 35% on each sale where it ships and packages for you.

So Amazon makes about $800 per person in revenue.

Ford makes $303 per person in revenue.

AWS makes about the same.

AI spend across all platforms is about $450 per person.

Their costs to produce aren't equal.
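A quick back-of-envelope in JavaScript, using the (unverified) figures from the comment above:

```javascript
// Figures from the comment: $2,800/yr average spend with Prime,
// $1,100/yr without, 75% of shoppers on Prime, ~35% Amazon take rate.
const primeSpend = 2800;
const nonPrimeSpend = 1100;
const primeShare = 0.75;
const takeRate = 0.35;

// Blended average spend per shopper per year.
const avgSpend = primeShare * primeSpend + (1 - primeShare) * nonPrimeSpend;

// Amazon's cut of that spend.
const amazonCut = avgSpend * takeRate;

console.log(avgSpend);  // 2375
console.log(amazonCut); // ~831
```

Which is where the roughly-$800-per-person figure comes from.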


You get to talk to an AI agent

The comment is about smaller models

Right, but what are you going to do with small models? If your time is worth anything at all, you'd pay for the $100 Claude Code/Codex Pro subscription rather than fumbling around with models quantized enough to fit on your Mac.

If you're building agentic processes (harnesses) for business processes, local models are a great way to do that while keeping your data, and any personal data, private.

If you're vibe coding a codex/claude subscription makes more sense as a more polished experience.

I don't vibe code, but I use self hosted models with codex for code review and snippet generation.


If small models keep improving for specific purposes and larger models have diminishing returns, then what?

E.g. I can see a world where you have a local model that is specialised just for producing code.


$100 isn't going to buy you much access to Claude Code once they start charging a profitable fee for using it.

It's all part of getting along. If feedback that ideas are bad isn't welcome, why would you want to give it?

Being nice to everyone, including your boss, eliminates a lot of problems.

Taking credit for someone else's work gives you additional power over that person, and leaves them with additional responsibility when things go wrong.

Your boss will take credit for the department's work, his boss will do the same, her boss will do the same, and the VP will do the same. Their job is to get everyone under them to meet some goal set from above. Everyone is taking credit for everyone else's work all the way up the chain. We do this as parents too: "my child made the honor roll," you might tell a friend, knowing it reflects on you.


Yes, but you're recasting my complaints as good things.

Many elements of an abusive relationship can be good, but it's the degrees that make them bad. Would you tell someone they weren't in an abusive relationship because the things they told you about could be good? Or would you accept their feelings on the matter?


Move from a tested library, where everyone in the world is made aware when a vulnerability is discovered, to untested one-shot LLM output where a discovered vulnerability will never come to light.

What's the reason to switch to something less stable, short or long term? Because it's older, and newer code is always better?


That's like saying rpad is well tested. Axios isn't exactly rocket science; it is trivial to replace. So much so that even a SOTA LLM does it reliably. I'd say good riddance.
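As a sketch of how small that replacement can be (assuming Node 18+ or a browser with native fetch; `getJson` is just an illustrative name, not an axios API):

```javascript
// Minimal stand-in for axios.get(url).then(r => r.data), using native fetch.
// Note: fetch only rejects on network errors, so non-2xx statuses have to
// be turned into thrown errors ourselves (axios does this for you).
async function getJson(url, options = {}) {
  const res = await fetch(url, options);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}
```

Interceptors, retries, and upload progress are where axios still earns its keep; a plain JSON GET like this doesn't need any of that.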

Do the endpoints still work?

https://api-web.nhle.com/v1

Keeping up-to-date endpoints for sports scores is the most difficult challenge.


I took the lazy way out and use a dependency for that. I’ll look into it and see if I can either push a fix upstream or reimplement myself.

If the end result is that most books will be written by AI, you need the possibility of that qualification. If it's only capable of certain types of books, then we will need endless amounts of that.

COBOL developers are sought after but still paid less than a grad doing CRUD. Is that the future?

Wow, the $20 Claude plan sounds awful. I use Claude at work, which has metered billing, and I have to be careful not to hit my four-figure cap.

For me, $20 a month is more than I want to spend, so I just use the free tiers. If I use AI in an app or site, I use older models, mostly GPT-3.5. The challenge is more fun, and it means I can do more, like make 100x more API calls.


I use the $20 plan for my side projects. In the beginning I was hitting limits very fast, but after creating proper .md files and running /clear, it seems to work fine for my use. I am really curious how people are using the $100-$200 plans. Maybe I am not utilizing it to its full capacity?
