Yeah, punk was a bit of a rejection of the polish of the big bands of the time. In a sense, the "horrible" was sort of the point, plus the shock value. But did that really mean they were horrible? Probably everyone kind of sucks at first. But it's hard not to improve your skills once you've played a certain number of shows, and a sustainable cash flow is what lets you keep playing them.
There was an interview with Joey (maybe in the greatest hits liner notes?) where he said at the beginning they were trying to cover their favorite songs from the 50s and 60s but they couldn't figure them out. So they wrote their own that were easy (my word) enough to play.
Imo they were terrible musicians but a world class band.
An excellent analogy. Everyone is an expert at taking photos. But this does not make them a photographer. Even that claim of expertise isn't fully true: the phone camera is woefully inadequate in many ways. But the main difference between a photographer and a layman like myself is the ability to produce output strongly linked to clear artistic intent.
Writing code is not the hard part and never has been. The hard part is having a clear understanding of how to solve a specific complex problem and being able to express that intent in code. Getting a decently exposed image was never the hard part.
Finally, there are no scaling issues with cameras. You just make them better until it stops making economic sense. This is not true with code. To make LLMs better, good human-made code is needed for training. Better LLMs lead to less human-made code being available. This means there’s not exponential growth in quality but an S-curve with a balance point. I’d say we are already there: innovation is shifting from the models to the ways of harnessing the models.
I used to manage NT-based infra back in the day, and have been on a Mac for 15 years now because of stuff like this. A few years ago I bought a Windows box for my daughter. Out of the box the clock was wrong and it would just hang on auto-update. No message, no logs anywhere, just hangs. A few years later my son comes of age and gets his own box. And it’s the same story: no automatic adjustment of the clock. I’m running a bog-standard UniFi network leading to fiber, nothing complicated, and everything else works, including all my wife’s Windows laptops. But this basic, standards-based, library-supported Windows function doesn’t.
Windows NTP client uses UDP port 123 as both the destination and source port, rather than letting the OS assign an ephemeral source port.
Many ISPs (e.g. AT&T Fiber) block UDP traffic with source port 123 to mitigate NTP amplification attacks.
Most people won't notice that problem since low-end consumer routers tend to mangle the source port when they perform outbound NAT. The ISP-provided router will generally do this itself until you enable "DMZ+" or "IP Passthrough" or some similarly-named mode, as home networking experts will typically do so they can manage NAT and firewalling on their own devices.
If a Windows laptop can sync and the wired Windows desktops can't, your wi-fi AP might be doing the necessary source port mangling.
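For illustration, here's a minimal SNTP sketch in Python (hypothetical helper names, not how w32time is actually implemented) showing the difference between requesting an ephemeral source port and pinning source port 123 the way Windows does:

```python
import socket
import struct

def make_sntp_request() -> bytes:
    # First byte: LI=0, VN=4, Mode=3 (client); the rest of the
    # 48-byte NTP header is zeroed for a simple client query.
    return struct.pack("!B", (4 << 3) | 3) + b"\x00" * 47

def open_ntp_socket(source_port: int = 0) -> socket.socket:
    # source_port=0 asks the OS for an ephemeral port, which is what
    # typical Unix NTP clients do. Windows' w32time instead binds
    # source port 123 -- the value some ISPs filter to blunt NTP
    # amplification attacks.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", source_port))
    return s

# Actually sending the query requires network access:
# sock = open_ntp_socket()  # ephemeral source port, survives ISP filters
# sock.sendto(make_sntp_request(), ("pool.ntp.org", 123))
```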
If you add a NAT rule to your router to change the source port for NTP traffic, you should get time sync working.
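On an iptables-based router (interface name `eth0` is an assumption; adjust to your WAN interface), one way to do that rewrite might look like this. This is a sketch, not a tested recipe for any particular router firmware:

```shell
# Rewrite the source port of outbound NTP queries to an ephemeral port,
# so ISP filters matching UDP source port 123 no longer apply.
iptables -t nat -A POSTROUTING -o eth0 -p udp --dport 123 \
    -j MASQUERADE --to-ports 49152-65535
```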
Windows uses NTP by default with sane settings -- and it logs by default. So whatever issue you're experiencing is not a Microsoft problem, but a *you problem*. And the fact you state that there are no logs, which is false, kinda proves it.
that's such a cop out. Whatever store GP is buying computers from is messing things up, but how come Microsoft lets things get so bugged up in the first place? If I get an iPhone, it'll just work.
Agreed. I have several windows gaming PCs for my kids. One of them occasionally decides it’s in California and has to be corrected. Why? I have no idea.
Every single Mac, iPad, and iPhone gets this right with zero configuration.
My theory, having seen what happens due to incorrect date/time settings on Windows (e.g. rebooting a laptop after the battery has been drained for extended durations):
1. The time, and critically date, is wrong (not syncing with the NTP servers, potentially due to ISP filtering, as the sibling comment implies)...
2. Which is causing SSL errors, because the wrong date makes the certificates' validity windows appear nonsensical (expired, or not yet valid)...
3. Which causes connection failures to pretty much any HTTPS endpoint...
4. Which is preventing updates, because no sane OS would download updates over an insecure connection.
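The SSL step above comes down to a simple range check: TLS stacks compare the system clock against the certificate's notBefore/notAfter window. A toy sketch (illustrative only, not real TLS validation code):

```python
from datetime import datetime

def cert_time_valid(not_before: datetime, not_after: datetime,
                    now: datetime) -> bool:
    # TLS libraries reject any certificate whose validity window
    # does not contain the current system time.
    return not_before <= now <= not_after

# A certificate valid 2024 through 2026:
nb, na = datetime(2024, 1, 1), datetime(2026, 1, 1)

print(cert_time_valid(nb, na, datetime(2025, 6, 1)))  # True: clock correct
print(cert_time_valid(nb, na, datetime(2010, 1, 1)))  # False: clock reset to an old default
```

With the clock reset to some ancient firmware default, every current certificate looks not-yet-valid, so every HTTPS connection fails, including the one that would fix the clock's downstream symptoms.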
The weird thing is the way it’s trash. It breaks weird things no dev should ever have to touch. At one point Excel left horizontal lines on screen when scrolling. Bullets and numbering just straight up refuse to restart numbering. It _worked_; why did you break it? Who gained what out of you breaking it?
where I work, we're not allowed to merge them. we test every change, and we review everything to make sure there are no regressions in all the obvious features. scrolling through our webpage will never break in production, because we use people with a full set of eyes to check before merge.
I believe you but I've literally not worked at a single place that puts that much scrutiny on PRs and I've been working as a professional programmer for 20 years.
That’s a broken analogy. An intern and an LLM have completely different failure modes. An intern has some understanding of their limits; the LLM just doesn’t. The thing that looks remarkably human will make mistakes in ways no human would. That’s where the danger lies: we see the human-like thing be better at things that are difficult for humans and assume it to be better across the board. That is not the case.
I don’t think the objections are necessarily about lack of productivity, although my personal experience is not one of massive productivity increases. Producing code much faster is likely just to push the bottleneck somewhere else. Software value cycles are long and complicated. What if you run into an issue in 5 years that the LLM fails to diagnose or fix due to complex system interactions? How often would that happen? Would it be feasible to just generate the whole thing anew, matching functionality precisely? Are you making the right architecture choices from the perspective of what the preferred modus operandi of an LLM will be in 5 years? We don’t know. The more experienced folks tend to be conservative, as they have experienced how badly things can age. Maybe this time it’ll be different?
I’m not sure you are familiar with the way some traditional communities treat, say, single mothers.
The system is that if you make unfortunate choices (such as moving away from your support network and carrying your life savings somewhere you can forget them) and do not have anything (such as skills, or low morals) to compensate, life is going to be hard. That has always been the case, and it always will be. What the consequential decisions are differs. But the basic premise of “f around and find out” holds.
Are you aware that this is your personal opinion? You make it sound like a universally agreed fact. I have a different opinion, and operate from a wider definition of “community” and “system”.
lol yeah sure. I take it you have zero interest in this discussion or to understand my arguments so I wonder why you’re even engaging in it.
There is no “changing definitions” happening here. I could link to definitions of both community and system and point to the one I’m using and the one you seem to be using, in an attempt to reach shared understanding, but the way you phrase things really doesn’t encourage further attempts to communicate.
It’s not a primary source; it’s a scan of a 2016 reprint that I can’t find much information on. And I saw a version that purports to be the 1937 edition which does have the “small slimy creature” line.
> I believe in coding primarily as a means to an end
Yes. Absolutely. To what end, though? Is your end deterministic like a cryptographic protocol or loose like pagination of a web page? Is your end feature delivery or 30 years of rock solid service delivery at minimal cost?
AI is a dangerous tool. It exposes fundamental questions by automating away the mundane. We have had the luxury of not thinking deep and hard about intent, value creation/capture, and system architecture. AI is putting us face to face with our ineptitude: maybe it wasn’t the tech stack or the programmers or the whatnots? Maybe the idea was shait, maybe I had no understanding of the value added by my product? Maybe …?
You get the best gear the pros have (musical instrument, bicycle, camera, etc.) and still the results are not great. Gotta ask why. We are experiencing this at literally industrial scale.