I just tried it on their website, using the desktop browser, and the experience is absolutely OK: you just get the menu as in any web app, and you can close it to go back, etc. Just an old-school page which is blazing fast ... because it is an old-school page. It renders faster than a typical animation to open a sidebar.
...because the opening line of the blog post says he's been "building websites with LLMs", and then attempts to cutely redefine that abbreviation as "Lots of Little htMl pages" in a parenthetical.
It's, um. Not the best kind of communication, and very easily leads to this kind of misunderstanding.
...But what percentage of Swedes is that, versus the vast majority of working-class Americans?
Remember, outside of its few biggest and wealthiest cities, the US just does not have decent, reliable public transport, and most places don't have any.
And how many Americans live in places without any public transport?
As a European I spent some time in LA and Las Vegas, and while it wasn't optimal, I could get everywhere without a car. I could even do a day trip to Bakersfield by bus.
Your anecdata about this one time you took a trip to California doesn't help.
You can just look at the % of urban residents who use transit, which is lower in the US than in any other Western country. Clearly transit isn't built out or available sufficiently for the majority of people.
In addition to the human cost that others mention, the big problem is that in our current system, this doesn't lead to fresh blood coming in and being able to compete on an even footing: it leads to the giant incumbents schlorping up the pieces and becoming even bigger and stronger.
Your statement might be true in a system with healthy safeguards and competition, but that isn't the system we have in the real world today.
> There is no scenario where not having a fab is beneficial.
Mmm, I think that's an overly strong statement.
In the scenario where Apple has a fab that's doing work at a 2030 level in 2030, that's great, especially if everyone else is, by their standards, doing work at a 2025 level. In the scenario where Apple's fab is doing work at a 2030 level in 2035, and another fab is doing work at a 2035 level, owning their own fab has suddenly become a liability. If they had used that money to hire the output of other fabs, then as soon as it became clear that another one was eclipsing the one they were using, they could simply switch. And given the way businesses operate, it would be very hard to justify closing down their own fab to use the one that was outperforming it.
Now, granted, that's not an especially likely scenario, but it is a very realistic one.
Until the late '70s-early '80s, overall productivity tracked very closely with wage growth.
Then wage growth flattened out, at the same time that the wealth of the wealthiest few started to grow by leaps and bounds.
It is the fault of people accruing capital. They have taken a vast percentage of all the wealth created in the past 50 years, which would have otherwise gone to the rest of us.
Then they used that wealth as leverage to prevent the rest of us from having the power to do anything about it.
> It has always been non-deterministic but we relied on low level engineers who knew the dark magicks to keep the horrors at bay.
This is a disingenuous comparison.
First of all, what you're talking about is nondeterminism at the hardware level, subverting the software, which is, on an ideal/theoretical computer, fully deterministic (except in ways that we specifically tell it not to be, through the use of PRNGs or real entropy sources).
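To make that parenthetical concrete, here's a minimal Python sketch (my own illustration, not from the comment) of the two escape hatches mentioned: a seeded PRNG, which is fully reproducible, versus a real entropy source, which isn't.

```python
import os
import random

# Seeded PRNG: same seed, same sequence -- the "deterministic unless we
# specifically ask otherwise" behavior described above.
def draw(seed: int, n: int = 5) -> list[int]:
    rng = random.Random(seed)  # stdlib Mersenne Twister, fully reproducible
    return [rng.randint(0, 99) for _ in range(n)]

assert draw(42) == draw(42)  # same seed -> identical output, every run

# Real entropy source: os.urandom pulls from the OS's entropy pool, so
# repeated calls differ (with overwhelming probability).
print(os.urandom(8).hex())
```

The asserted line is the whole point: nothing about the program's output changes between runs unless you deliberately reach for an entropy source.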
Second of all, the frequency with which traditional programs are nondeterministic in this manner is multiple orders of magnitude less than the frequency of nondeterminism in LLMs. (Frankly, I'd put that latter number at 1.)
This is part of a class of bullshit and weaselly replies that I've seen attempting to defend LLMs over the years, where the LLMs' fundamental characteristics are downplayed because whatever they're being compared to occasionally exhibits some similar behavior—regardless of the fact that it's less frequent, more predictable, and more easily mitigated.
> First of all, what you're talking about is nondeterminism at the hardware level, subverting the software, which is, on an ideal/theoretical computer, fully deterministic (except in ways that we specifically tell it not to be, through the use of PRNGs or real entropy sources).
Malloc and free were never deterministic outside of the simplest systems.
The second we accepted OS preemption we gave up deterministic performance.
Good teams freeze their build tools at a specific version because even minor revs of compilers can change behavior.
I've used way too many schema generator tools that I'd describe as "wishfully deterministic".
Heuristics have been used for years in computer science, resulting in surprising behavior. My point is that if we ramp up the rate of WTF we are willing to tolerate, the power of the systems we can build increases drastically.
> Second of all, the frequency with which traditional programs are nondeterministic in this manner is multiple orders of magnitude less than the frequency of nondeterminism in LLMs. (Frankly, I'd put that latter number at 1.)
A RAG lookup system that takes in questions from the user, looks up answers in a doc, and returns results can be built with reliability damn near approaching 99.99%.
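A toy sketch of the retrieval step in such a system (hypothetical data and function names; keyword matching standing in for the embedding search and LLM answer-phrasing a real system would use) might look like:

```python
# Illustrative only: a "look up answers in a doc" step for a RAG-style
# question-answering system, with keyword overlap as the retrieval score.
DOCS = {
    "reset password": "Go to Settings > Account > Reset Password.",
    "billing cycle": "Invoices are issued on the 1st of each month.",
}

def lookup(question: str) -> str:
    q = question.lower()
    # Score each doc by how many of its key terms appear in the question.
    best_key = max(DOCS, key=lambda k: sum(w in q for w in k.split()))
    if not any(w in q for w in best_key.split()):
        return "Sorry, I couldn't find an answer."  # refuse rather than guess
    return DOCS[best_key]

print(lookup("How do I reset my password?"))
# -> Go to Settings > Account > Reset Password.
```

The refusal branch is where much of the claimed reliability comes from: a system constrained to return a stored answer or decline can't hallucinate one.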
I have seen code generation harnesses that also dramatically reduce non-determinism of LLM generated code, but that will continue to be a hard problem.
My phone camera applies non-deterministic optimizations to images I take, and has done so for years now.
GPS is non-deterministic (noisy), we smooth over the issues. GPS routing is also iffy, but again we smooth over the issues.
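As a concrete (made-up) example of that smoothing: an exponential moving average is about the simplest filter that papers over noisy readings (real GPS stacks use Kalman-style filters, but the principle is the same).

```python
# Illustrative only: smoothing a noisy 1-D position track with an
# exponential moving average. alpha controls how much each new sample
# is trusted over the running estimate.
def ema(samples: list[float], alpha: float = 0.3) -> list[float]:
    out = [samples[0]]
    for x in samples[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

noisy = [10.0, 10.9, 9.2, 10.4, 9.8]
smoothed = ema(noisy)
# The smoothed track jumps around far less than the raw readings.
```

The raw samples swing by 1.7 units; the smoothed ones by roughly a third of a unit, which is exactly the "smooth over the issues" move.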
The question is whether useful products can be made with a technology. You can shove enough guardrails on an LLM interface to make it useful. That much is clear. I derive massive value from LLMs and other transformer-based systems literally every day: from the modern speech transcription systems, which are damn near magic compared to what we had a few years back, to image recognition, to natural language interfaces for search over company documents.
If we completely discard coding agents, LLMs are still an insanely impactful technology.
Those guardrails add cost and latency. For some scenarios that is fine; for others it isn't. Chatbot support agents implemented by the lowest bidder don't make any attempt at guardrails. Better systems are built better.
I agree that current LLMs all suffer from the problem that control messages are intermixed with data; that's a pattern the industry has known is bad for literally decades (since the '70s or '80s?). It seems like an intractable flaw in these systems.
But that doesn't make the system unusable any more than the thousand other protocols suffering from the same flaw are unusable.
The single best example for this discussion is superscalar out-of-order execution, which can't be used in aerospace, medical devices, industrial control systems, or anywhere you need to guarantee that code finishes within a certain time, because technically it isn't deterministic.
Neither is stochastic gradient descent, which is the cause of the LLM problem. Nor is UDP, the network protocol that powers video calls, live streaming, and online gaming.
"There are five companies that we selected because they have absolutely massive growth, far beyond anything else in the market. Should we really say Apple did well just because they're a member of that group?"
I mean, part of the systemic problem here is that "results were so bad they couldn't publish."
That shouldn't ever be a thing. As long as your methods are sound, it should never matter whether your results are just completely random noise; that's still an important result.
Right :) I was trying to write the bleakest possible version, and in the bleakest, your own unpublished idea is not actually original, it’s just a failure of the system to record negative results.
While I agree with your basic premise, that 3.5% "rule" is much more of an observed effect than an actual rule.
There needs to be an actual mechanism for the protests to bring about the fall of the authoritarian regime. Unfortunately, in our current context, a lot of the feedback mechanisms that should cause protests to change actual policy and affect the people in power are broken, largely due to the Republicans' efforts over the last several decades to eliminate accountability both from the actual institutions and as a valid concept in our national consciousness.
Because if I click on a menu button on a desktop browser, I generally don't expect it to take over the entire page with a menu.
This seems like an example of unhelpfully mobile-centric website design, which has become increasingly prevalent in recent years.