
> which is easy, and more like changing the colour of a carbon-fibre Formula 1 part, which requires re-calculating the weight, strength and aerodynamics.

Seems a bit overstated for a CPU fan, but I might be wrong.


This is really cool. It has retro vibes from the era when the Internet was still free from big-five domination.

> At least crypto does not take away more jobs than it creates

Except when there's a huge black-swan event, or when the bubble pops. Such things can result in significant layoffs, even though the mechanism is completely different.


I feel like Anthropic is forcing their new model (Opus 4.7) to do much less guesswork when making architectural choices; instead it prefers to defer decisions back to the user. This is likely done to mine sessions for reinforcement-learning signals, which are then used to make their future models even smarter.

https://www.anthropic.com/engineering/april-23-postmortem

> On March 4, we changed Claude Code's default reasoning effort from high to medium to reduce the very long latency—enough to make the UI appear frozen—some users were seeing in high mode. This was the wrong tradeoff. We reverted this change on April 7 after users told us they'd prefer to default to higher intelligence and opt into lower effort for simple tasks. This impacted Sonnet 4.6 and Opus 4.6.

> On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10. This affected Sonnet 4.6 and Opus 4.6.

> On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality and was reverted on April 20. This impacted Sonnet 4.6, Opus 4.6, and Opus 4.7.


How does this address the specific point I made?

Yeah, people really took it to the extreme and made a cult out of it for no reason. Mass delusion at its finest.


Finding specimens is not that hard or inaccessible if you're determined. Virtually every place on earth has its own geomorphological history. Start by looking at geological maps to learn what kinds of rocks and minerals occur in your surroundings, then look for old or active mines, quarries, or any activity that excavates soil. Specimens can sometimes be found in the spoil deposits from these activities.


Interesting perspective. While the analogy may be somewhat intuitive, distributed computing exhibits a wider and more diverse set of challenges, imo.

Examples: synchronization in naturally asynchronous environments, consensus, failure-tolerant design, etc.
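
To make the consensus point concrete, here's a toy majority-quorum sketch (the three-replica setup and names are mine, not any real protocol): writes and reads must each reach a majority, so any read quorum intersects any write quorum and a minority partition can't diverge.

    # Toy majority-quorum register across three replicas (illustrative only).
    REPLICAS = {"a": (0, None), "b": (0, None), "c": (0, None)}  # name -> (version, value)

    def write(value, reachable):
        """A write must be acknowledged by a majority, or fail outright."""
        if len(reachable) <= len(REPLICAS) // 2:
            raise RuntimeError("no quorum: refusing to write rather than diverge")
        version = max(v for v, _ in REPLICAS.values()) + 1
        for name in reachable:
            REPLICAS[name] = (version, value)

    def read(reachable):
        """A read contacts a majority and takes the highest version seen."""
        if len(reachable) <= len(REPLICAS) // 2:
            raise RuntimeError("no quorum for read")
        _, value = max((REPLICAS[n] for n in reachable), key=lambda vv: vv[0])
        return value

    write("v1", reachable={"a", "b"})   # "c" is partitioned away
    print(read(reachable={"b", "c"}))   # prints "v1": quorums always intersect
    write("v2", reachable={"c"})        # minority partition -> RuntimeError

Of course, a real writer can't peek at every replica's version the way this toy does; doing that correctly without a global view is exactly where consensus protocols like Paxos and Raft come in.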


Agreed that full consensus is overkill.

But I think the coordination problem is subtler than version control implies. In the (plan, design, code) pipeline they aren't collaborating on the same artifact. They're producing different artifacts that are all expressions of the same intent in different spaces: a plan in natural language, a design in a structured spec, code in a formal language.

These artifacts are different projections at different Chomsky levels, but all of the same thing: user intent.

The coordination challenge is keeping these consistent with each other as each stage transforms the prior projection into the new one. That's where the gates earn their place: they verify that each transformation preserves the intent from the previous stage.
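
A minimal sketch of what I mean by a gate (hypothetical names and checks, not anyone's actual framework): each stage's gate verifies that the new artifact still covers the intent carried by the previous one.

    # Hypothetical plan -> design gate; the artifact fields are invented.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        requirements: set     # intent, expressed in natural-language terms

    @dataclass
    class Design:
        covered: set          # requirements the structured spec addresses
        modules: list

    def design_gate(plan, design):
        """Fail the pipeline if the projection from plan to design lost intent."""
        missing = plan.requirements - design.covered
        if missing:
            raise ValueError(f"design dropped intent: {missing}")

    plan = Plan(requirements={"auth", "audit-log"})
    design = Design(covered={"auth"}, modules=["login"])
    design_gate(plan, design)   # raises: 'audit-log' was lost in projection

The design-to-code gate would do the same thing one level down, e.g. checking that every module named in the design actually shows up in the codebase.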


Frontier models being in the hands of a handful of companies doesn't help either. Let's hope the open-weight movement changes that soon.


Gemma 4 has made a lot of progress in this area. The model is phenomenal. Its size is workable. This is the worst it will ever be.


Now we just need the RAM market to get back to normal. Or at least fine OpenAI for speculating on raw wafers. There's an article on the front page [0] with a passage that gives me hope that consumer access to VRAM may improve:

> On the infrastructure side: OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, roughly 40% of global output. These were of course non-binding. Micron, reading the demand signal, shut down its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.

[0] https://adlrocha.substack.com/p/adlrocha-how-the-ai-loser-ma...


Micron's stock is still up 470% YoY.


Realistically, any "huge" frontier model that takes a rack of H100s to run inference on is probably going to have downtime no matter who runs it.

Downtime is always going to scale poorly for loads that require a lot of hardware thrown at them, even with lots of good failover -- probably worse for the small vendors, because they don't have the contracts that supply them with hardware first, so availability is already at a premium for them.

So I guess I'm saying: yes, I hope frontier-level models get out into the open soon, but I suspect the same or a similar level of exclusivity will exist as long as they take that much compute to operate.


If it goes as well as the 'open' / federated social network alternatives of the 2010s, I wouldn't count on it.


Social networks are 100% network effect. AI models are not really affected by that at all.

Which doesn't mean the open models will definitely succeed; it just means they have more of a shot than the open social networks ever did.


> AI models are not really affected by that at all.

I don't know about that. More usage means more support, which means more docs, open-source projects, wrappers, harnesses built around them, etc.

There's way less demand to build tooling around open-weight models if they remain a hobbyist niche.


The big thing here is more training, and that comes in two flavors:

1. Using AI as part of the training process itself helps.

2. All the prompts going to OpenAI/Claude are a gold mine.


I disagree with you. Today's scenario is certainly much more interesting than the post-dot-com-boom years; maybe not as interesting as the very early days of computing, but certainly ripe for innovation, and a test bed for innumerable breakthroughs yet to come. We are currently living in the post-"big data" age and advancing fast toward the quantum-computing era, with a renaissance of AI and machine-learning technologies.

Cryptography is ripe for disruption, and the past years have seen the introduction and deployment of novel concepts and semi-old ideas that have finally found application with cryptocurrencies and distributed systems. There are several projects at the forefront of technology with defined goals in mind, definitely solving real-world problems, like privacy in cloud computing, for example.

Software stacks have matured into fully fledged products, and there is plenty of choice for every use case; one just needs to delve into the enormous amount of information available and do their homework. Operating systems have also advanced a lot, and I love, for one, how easy it is to operate Ubuntu nowadays and the level of freedom it offers to users.

Maybe you need to think a bit outside the box? Respectfully, have a great one!

