Hacker News | hyperbovine's comments

Come now, he still managed to time the final walk scene to within 100 ms of perfection. It's probably luck, but still, you have to admire the feat.

You do realize he could have recorded the walk at any point? The shot of the rocket launching could have been a month later.

How are you suggesting they composited the two sequences (Burke walking, rocket launching) together in 1978?

What? I'm talking about the final scene where he says "...that" and the thing immediately lights up. Absent a green screen, that's damn impressive.

It's buggier and less functional.

Quite a statement, considering which ones were mentioned!

It's actually a slightly oblong wheel vs. a round one.

At Chabot Science Center there is still (and, presumably, will always be) the Ask Jeeves Planetarium. Makes you think about the transiency of it all.

> Just genuinely having 10 worktrees perpetually in parallel and cycling between them in between agent responses. Again, not necessarily bad in itself, but can exponentially consume credits.

I'm pretty sure that growth is linear.


If you think about it, the production quality is probably log-linear, so the token growth may well be exponential.

Not quite the same scenario, but it's already plausible to have a situation where every subagent is allowed to spawn multiple subagents, in which case we'd have literally exponential credit consumption growth...

"i have to burn $10k in tokens to meet my end-of-month work quota. spawn ten sub-agents each of which is allowed to spawn as many sub-agents as it likes to create an analysis of the code in these files based on the precepts of the 13th century German philosopher Noodleheinz".
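The growth in that scenario is easy to quantify: if every agent may spawn the same number of subagents down to some recursion depth, the total agent count is a geometric series in the depth. A minimal sketch with made-up numbers (the branching factor and depth are illustrative, not anything a real product enforces):

```python
def total_agents(branching: int, depth: int) -> int:
    """Count agents in a spawn tree where each agent spawns
    `branching` subagents, down to `depth` levels below the root."""
    # Geometric series: 1 + b + b^2 + ... + b^depth
    return sum(branching ** level for level in range(depth + 1))

# Ten subagents per agent and just three levels of recursion
# already puts over a thousand agents in the tree.
print(total_agents(10, 3))  # 1111
```

Since each of those agents consumes its own tokens, credit burn tracks the same geometric curve.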

I think that you send the entire conversation with every request.

As long as you stay under the 1-hour caching TTL for your open threads, I guess your marginal cost is linear.
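If every request resends the whole history, cumulative full-price input tokens grow roughly quadratically in the number of turns; with the history served from cache, only the new turn is billed at full rate, so the marginal cost per turn stays linear. A rough sketch of that billing difference (token counts are invented, and it ignores the reduced cache-read rate real providers still charge):

```python
def cumulative_input_tokens(turn_tokens: list[int], cached: bool) -> int:
    """Total full-price input tokens over a conversation where every
    request resends the entire history (hypothetical billing model)."""
    total = 0
    history = 0
    for t in turn_tokens:
        if cached:
            # Prior turns hit the cache; only the new turn is full price.
            total += t
        else:
            # The entire history is re-billed on every request.
            total += history + t
        history += t
    return total

turns = [1000] * 10  # ten equal-sized turns
print(cumulative_input_tokens(turns, cached=False))  # 55000 (quadratic growth)
print(cumulative_input_tokens(turns, cached=True))   # 10000 (linear growth)
```

The gap widens fast: at 100 turns the uncached total is ~50x the cached one.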

This is me on a weekday flicking between Ghostty tabs to enter “stand by” every ~45 mins.


Anthropic changed the cache TTL to five minutes, back in March.

Thanks, didn’t realise the API and Claude Code had different TTLs.

Wait, Minneapolis is definitely very cold for about half the year.


If so, that would be big; they haven’t managed a successful pretraining run in close to two years (since 4o).


Same. The tone is really off. Here is a response I just got from Gemini 3.1: "Your simulation results are incredibly insightful, and they actually touch on one of the most notoriously difficult aspects of ..." It's pure bullshit: my simulation results are in fact broken, and GPT spotted that immediately.


The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It’s a lot more exciting than a few freight loads of overpriced GPUs.


Also a good point - railroads for sure brought a lot more optimism.

LLMs+Data centres on the other hand...


I understood the first 7 words.


