Very few other people have Trump's ability to channel frustration in a nonspecific-but-charismatic way that connects the various extreme factions of the American right.
None of those factions will be gone, but their infighting will weaken their cause more than at any point since 2016.
Some of this can be seen in how even his own popularity falls whenever he actually holds power, since there are no effective ideas there, only misplaced blame, and that doesn't sustain support for four years. Without him there at all during an out-of-power period, the "blame the Jews"/"blame the brown people"/"blame the women"/"blame the baby-killers"/"blame the anti-Semites"/"blame the sexual deviants" factions will likely fail to find another person they can all rally around.
The extreme factions of the right are a very small portion of the electorate. They generally don't decide elections beyond the primaries and generally turn out in favor of the right regardless.
Dems lean more on moderates/independents. Trump won because he persuaded that group, particularly the young men.
25-33% of the electorate is no small fraction. There's a group of people who have been consistently supportive of this government's policies since 2016. Take any policy survey, and the fraction that supports the right-wing side of action always amounts to a consistent 25-33% of the votes.
While largely correct, looking at his popularity misses the forest for the trees.
Trump is very much a symptom, not a cause. He is simply the kind of personality most fit for the media environment.
The media environment on the right has essentially eschewed journalistic standards for political and economic velocity.
Fringe theories get introduced during podcasts, which then get brought up by guests on Fox. Members of the government point out that the news media is talking about fringe theory X, which then gets repeated by the news media. Eventually the government opens up an investigation or creates a task force to address the issue.
It is not that people don’t come up with objections or counter narratives on the right, it’s just that they don’t get platformed.
Verification is the expensive part of journalism. If you eschew verification, you can be more efficient. Today the right is simply the more "efficient" political consensus manufacturing machine.
This is the foundation upon which the rest of the events occur. This is why there will always be space for another character to appear.
Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?
How long can you keep adding novel things into the start of every session's context and get good performance, before it loses track of which parts of that context are relevant to what tasks?
IMO for working on large codebases sticking to "what the out of the box training does" is going to scale better for larger amounts of business logic than creating ever-more not-in-model-training context that has to be bootstrapped on every task. Every "here's an example to think about" is taking away from space that could be used by "here is the specific code I want modified."
The sort of framework you mention in a different reply - "No, it was created by our team of engineers over the last three years based on years of previous PhD research." - is likely a special case, where you gain a lot of expressiveness for the up-front cost; but this is very much not the common situation for in-house framework development, and it could get even rarer over time with current trends.
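To make the question concrete, here's roughly what I mean by "stuff that gets put into the context," as a minimal sketch (the file layout and helper are hypothetical, not any particular vendor's actual API):

```python
# Minimal sketch of "skill = context injection with indirection".
# Everything here is hypothetical -- no real vendor API or file layout assumed.
from pathlib import Path

SKILLS_DIR = Path("skills")  # hypothetical: one SKILL.md per skill

def build_prompt(task: str, skill_names: list[str]) -> str:
    """Prepend the named skill files to the task prompt.

    The 'indirection' is just the routing step that decides which
    files get loaded for which task; the model itself is unchanged.
    """
    skill_text = "\n\n".join(
        (SKILLS_DIR / name / "SKILL.md").read_text() for name in skill_names
    )
    return f"{skill_text}\n\n# Task\n{task}"

# Every skill loaded this way competes for the same context window
# as the actual code you want modified.
prompt = build_prompt("refactor the billing module", ["our-db-conventions"])
```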
> Isn't the "skill" just stuff that gets put into the context? Usually with a level of indirection like "look at this file in this situation"?
Today, yes. I assume in the future it will be integrated differently, maybe we'll have JIT fine-tuning. This is where the innovation for the foundation model providers will come in -- figuring out how to quickly add new knowledge to the model.
Or maybe we'll have lots of small fine tuned models. But the point is, we have ways today to "teach" models about new things. Those ways will get better. Just like we have ways to teach humans new things, and we get better at that too.
A human seeing a new programming language still has to apply previous knowledge of other programming languages to the problem before they can really understand it. We're making LLMs do the same thing.
Decades is a long time for hardware, but a matter of years seems reasonable. The commercial models are "good enough" for a lot of things now, so if that performance makes its way into the on-device space at "home appliance"-level cost (<$5k at the start, basically), I'd expect a lot of stuff to start popping up there. In offices too.
Like the PC in the 80s starting to eat up "get a mainframe" or "rent time on a mainframe" uses.
What you're doing is seeing changes limited to one of the R, G, B channels, so instead of judging integral colors, you're judging 3 different ones. The article explains how errors propagate, and those RGB pixels will all shift errors because of the material science.
This is what humans have traditionally done with greenfield systems. No choices have been made yet, so they're all cheap decisions.
The difficulty has always arisen when the lines of code pile up AND users start requesting other things AND it is important not to break the "unintended behavior" parts of the system that arose from those initial decisions.
It would take either a sea-change in how agents work (think absorbing the whole codebase in the context window and understanding it at the level required to anticipate any surprising edge case consequences of a change, instead of doing think-search-read-think-search-read loops) or several more orders of magnitude of speed (to exhaustively chase down the huge number of combinations of logic paths+state that systems end up playing with) to get around that problem.
So yeah, hobby projects are a million times easier, as is bootstrapping larger projects. But for business work, deterministic behavior and consistent specs are important.
Unless you're training your own model, wouldn't you have to send this dialect in your context all the time? Since the model is trained on all the human language text of the internet, not on your specialized one? At which point you need to use human language to define it anyway? So perhaps you could express certain things with less ambiguity once you define that, but it seems like your token usage will have to carry around that spec.
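Back-of-the-envelope, with made-up numbers (the spec size, request volume, and per-task payload below are all assumptions, purely illustrative):

```python
# Illustrative arithmetic only -- every number below is an assumption.
spec_tokens = 3_000       # assumed size of the dialect definition
task_tokens = 1_500       # assumed average per-request payload
requests_per_day = 500    # assumed usage

daily_overhead = spec_tokens * requests_per_day          # 1,500,000 tokens/day
spec_share = spec_tokens / (spec_tokens + task_tokens)   # ~67% of each request

print(f"{daily_overhead:,} spec tokens/day; "
      f"{spec_share:.0%} of every request is just the spec")
```

Prompt caching can blunt the dollar cost, but the spec still occupies space in the context window on every call.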
> There are a few emotional trigger points that LLMs seem to cause in programmers and this is a common one -- the need for deep, first-principles understanding that LLMs make obsolete.
Is it also an "emotional trigger point" that causes people to treat their hunches as facts?
There's also a world where "all companies have access to the software factory so sales and entrepreneurship in software disappears entirely."
But in that scenario it's hard to see where the unwinding stops. What are these other companies doing and which parts of it actually need humans if the "agents" are that good? Marketing? No. Talking to customers? No. Support? No. Financial planning and admin? No. Manufacturing? Some, for now. Shipping physical goods? For now. What else...
Anytime you upgraded from a 4-year-old computer to a new one back then - from 16 MHz to 90 MHz, or 75 MHz to 333 MHz, or 333 MHz to 1 GHz, or whatever - it was immediate, it was visceral.
SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.
The software, in those days, was similarly making much bigger leaps every few years. 256 colors to millions, resolution, capabilities (real time spellcheck! a miracle at the time.) A chat app isn't a great comparison. Games are the most extreme example - Sim City to Sim City 2000; Doom to Quake; Unreal Tournament to Battlefield 1942 - but consider also a 1995 web browser vs a 1999 one.
For me, at 52, I recall the SSD transformation to be near miraculous. I never once felt that way about a CPU upgrade until getting an M1. I went from a Cyrix 5x86 133 (which was effectively a fast 486) to a Pentium II 266 and it just wasn't that impressive.
The drag down of swapping became almost a non-issue with the SSD changeover.
I suppose going from a //e to a IIgs was that kind of leap but that was more about the whole computer than a cpu.
Now I have to say, swapping to an SSD on my Windows machines at work was far less impressive than going to SSD with my Macs. I sort of wrote that off as all the antivirus crap that was running. It was very disappointing compared to the transformation on the Mac. On my Macs it was like I suddenly heard the hallelujah chorus when I powered on.
I went 386 DX 33 to a Pentium 75, which wasn't a wild amount of time. I'd argue that's way bigger than when I got an SSD (but I agree SSD was a huge improvement).
I went from a 1 MHz Apple //e in 1986 to a Mac LC II with a 16 MHz 68030 in 1992. That was the last time I felt a step change in day-to-day work. Of course things like games, video, and audio encoding got faster.
The next time I felt a step change was the M series of Macs.
Software was already far down the bloat path by the time the Core 2 Duo came out, so the upgrade didn't make all that much of a difference in feel given how much latency was caused by software performing random reads off a disk. That's why SSDs made such a huge difference.
Back in the MS-DOS days, the amount of data needed to be read off a disk while the OS booted was negligible, so a second or two on a fast 486 felt amazing compared to the incredibly slow grind of watching code execute on an 8086 or slow 80286. Software was still in the space of having to run tolerably on an 8086, so the added resources of a newer faster machine actually did improve the feel of the system.
That's my point, the software was getting bloated at least as fast as the CPUs were getting faster, so you had to upgrade to a new CPU every few years to run the latest software. With SSDs, there was a huge overlap in CPU speeds that may or may not have an SSD, so upgrading to one meant a huge performance boost, within the same set of runnable software.
Also, going from SimCity to SimCity 2000 was pre-bloat. Over the course of five years, the new version was significantly better than the original, but they both targeted the same 486 processor generation, which was brand new when the original SimCity was released, but rather old by the time SimCity 2000 was released. Another five years later, SimCity 3000 added minimal functionality, but required not just a Pentium processor, but a fast one.
I guess what I'm getting at is that a faster CPU means programs released after it will run better, but faster storage means that all programs, old and new, will run better.
Screamtracker was sampling. Great for its day, and much more accessible for the teenager I was than buying and controlling synths, but it was not exactly the same. More a competitor to the early Akai MPCs.
And we were mostly ripping those samples from records on cassettes and CDs, or other mods.
Well now that you mention that, my very first steps actually were with Soundmonitor on a C64, one of the OG trackers probably (even though not called tracker yet IIRC). I kind of forgot about that, as that was still very amateurish (I mean what I made with it, not the software).
There is definitely bloat. A few months ago I was messing about with making a QWERTY piano in a web page, and it was utterly unplayable due to the bloat-induced latency in between the fingers and the ears.
I wouldn't call that bloat; certainly we've been complaining about software bloat as long as I've been into computers, but at that time, software was simply pushing the capabilities of the hardware, and often running into walls.
These days, we value developer productivity over performance optimization, so we have stuff like Electron apps. The reason behind it is that CPUs (and RAM quantity, for the most part) are so far ahead of regular desktop applications that it doesn't matter. In the 80s and 90s, the hardware could barely keep up with decently-optimized software that wanted to do anything interesting.
> SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.
For me they were.
I still remember the first PC I put together for someone with a SSD.
I had a quite beefy machine at the time and it would take 30 seconds or more to boot Windows, and around 45s to fully load Photoshop.
Built this machine for someone with entirely low-end (think "i3", not "Celeron") components, but it was more than enough for what they wanted it for. It would hit the desktop in around 10 seconds, and Photoshop was ready to go in about 2 seconds.
(Or thereabouts--I did time it, but I'm remembering numbers from like a decade and a half ago.)
For a _lot_ of operations, the SSD made an order of magnitude difference. Blew my mind at the time.
SSDs came out after CPUs stopped doubling (single-threaded) performance every 12-18 months or so.
So it was the only way to get that visceral improvement in user experience like CPU and platform upgrades were in the mid 90's to very early 00's.
Just slapping a new SSD into a 3-year-old machine gave a different generation of computer nerds a similar experience.
Nothing could really match the night and day difference of an entire machine being double to triple the performance in a single upgrade though. Not even the upgrade from spinning disks to SSD. You'd go from a game being unplayable on your old PC to it being smooth as butter overnight. Not these 20% incremental improvements. Sure, load times didn't get too much better - but those started to matter more when the CPU upgrades were no longer a defining experience.
Sure, but what about once Photoshop was open? Aka where you spend most of your day after you start up your stuff?
Would you take the SSD and a 500 MHz processor, or a 2 GHz dual-core with a 7200 or 10,000 RPM HDD? "Some operations are faster" vs "every single thing is wildly faster" of the every-few-years quadrupling+ of CPU perf, memory amounts, disk space, etc.
(45sec to load Photoshop also isn't tracking with my memory, though 30s-1min boot certainly is, but I'm not invested enough to go try to dig up my G4 PowerBook and test it out... :) )
Nah, I agree with him. Spinning disks were always a huge bottleneck (remember how long MS Word took to open?) and SSDs basically fixed that overnight. The CPU advancements were big, but software had a chance to "catch up" (i.e. get less efficient) because it was a gradual change. That didn't really happen with SSDs because the change was so sudden and big.
I'd say software never really "caught up" to the general slowness that we had to endure in the HDD era either. Even my 14 year old desktop starts Word in a few seconds compared to upwards of 60s in the 90s.
The closest I've seen is the shitty low-end Samsung Android tablet we got for our kids. It's soooo slow and laggy. I suspect it's the storage. And that was actually an upgrade over the Amazon Fire tablet we used to have, which was so slow it was literally unusable. Again I suspect slow storage is the culprit.
I was a PC gamer in the late 90s. It was very expensive. Nowadays you can build a nice rig and you can be sure to play all the latest games for 5 years.