Hard disagree. I feel like I'm thinking a lot more now because I have so many parallel projects going on at the same time. AI has allowed me to really, truly create in a way I never have before. Yes, my coding skills probably aren't as sharp as they used to be, but my system design skills are at an all-time high. Don't blame the tool.
If 1% of people using the tool end up like you, and 99% end up drooling invalids, I think it would be insane to not blame the tool. If a tool that's incompatible with humans isn't to blame for that incompatibility, what is to blame for the harm done? Human nature? The point of a tool is to be used by humans.
What part do you disagree with? It sounds like you don’t disagree with either the title of the article or its contents.
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
I work with others who have made this same claim. When I observed those people's work during demo days, the unmentioned detail was that they were going to the AI for system design questions as well. This was framed as "just using it as a sounding board," but what actually happened was not merely sounding-board use; they were asking it for solutions. Anchoring bias being what it is, those solutions felt like good ideas and they kept them.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
I actually have gone to the AI repeatedly for system design solutions.
Daily.
I think only twice have I agreed with it.
Just as it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. It won't be a good design, though.
So you'll have a beautifully designed system with rotting bones? A system constrained to the same patterns seen in training data. Not terrible, good enough.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
For how many parallel projects can you really keep a proper mental model in your head at one time? Or put in enough effort to seriously consider all aspects? I think the number varies between simple and more complex projects. But still, could that number be lower than many think it is?
It really depends on who you consider the "many" to be. I've seen people who claim they can meaningfully iterate on 10 projects simultaneously, and I'm skeptical of that. My personal experience is that my decisions are noticeably degraded at 3-4 parallel workstreams, and with even the simplest projects I'm non-functional past 6.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
The real question is whether you'd be able to continue doing your work if someone took your toys away and said "here's a nickel, kid, go buy yourself a real computer". I'm not referring to whether you'd be able to keep up your productivity, since it's clear you couldn't, just as a carpenter with a nail gun works faster than one with a hammer and a bucket o' nails. Could you do the work, starting with the design, followed by the boilerplate, and finishing with a working system? The carpenter could, albeit slower, since his tools only speed up the mechanics of his work. Coding agents do much more than that: they take away part of the mental modelling which goes into creating a working system. The fancier the tool, the more work it takes out of your hands.

Say the aforementioned toy thief comes by in a year or two, after the operating systems (etc.) you're targeting have undergone a few releases with breaking changes. A number of APIs have been removed, others have been deprecated, and new ones have been added. You were used to telling the agent to 'make it work on ${older_versions} as well as ${newest_version}', but now you're sitting there with a keyboard at your fingertips and that stupid cursor merrily blinking away on the screen. How long would it take you to become productive again? What if the toy thief waits 5 years before making his heist? What if the models end up rebelling or sink into depression and the government calls upon you to save your economic sector?
When cars first appeared, it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects, and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous', which can be true in the case of electric cars. That's the cost of progress; it is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just as there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
But is the debate about "fleshing out a system spec" or "the ability to come up with, plan, and explore various ideas to solve problems elegantly on a budget"? I think these two sides are always conflated as one when discussing LLM impact on users.
> Yes, my coding skills probably aren't as sharp as they used to be
If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?
This is similar to what we see in software architecture. A team picks a framework or pattern early, then builds everything on top of it, and by the time evidence shows up that the foundation was wrong, switching costs are so high that it's cheaper to keep building on the broken foundation than to start over. The amyloid hypothesis reminds me of technical debt. The "cabal" wasn't conspiring; they were just rationally protecting their sunk costs, same as any engineering org that can't migrate off a bad database choice.
The real answer is probably simpler than anyone here is making it. Apple's hardware margins are healthy enough that selling MacBooks to Linux users is pure profit, so no services lock-in is needed. However, the moment they officially acknowledge Linux support, it becomes a support surface. Every kernel panic becomes a Genius Bar visit. Every driver bug becomes a tweet at @AppleSupport. It's the value of plausible deniability. The Asahi team being unofficial is actually the best possible outcome for Apple: they get hardware sales to Linux enthusiasts without any support burden.
> However, the moment they officially acknowledge Linux support, it becomes a support surface.
Apple documents lots of things the genius bar won't help with. For example, Apple provides instructions for compiling custom builds of the XNU kernel. However, if you replace the stock kernel and your Mac kernel panics, the genius bar isn't going to help you. (Maybe they'd help you wipe the computer and restore everything to stock, but I imagine they'd do that if a Linux user walked in too, even today.)
I suspect Apple hasn't shared documentation because it would take time to prepare for external release (legal stuff, plus the need to avoid leaking future products). What I don't understand is why Apple hasn't made an engineer available to talk on the phone for a couple of hours a month. This would amount to a rounding error in their budget.
As used by commercial hardware and software vendors, "support" can mean anything from "we'll come fix it for you when it breaks, or your money back" to merely "theoretically, it should work, and we won't get in the way of you trying". Likewise, "unsupported" can mean anything from "don't complain to us if it doesn't work" to "we're going to spend significant engineering effort to prevent it from working".
A stance of "here's some hardware documentation, implement the drivers yourself" definitely falls within that spectrum of "support", and is the kind of "support" for Linux that some hardware vendors have in the past been lauded for, eg. when AMD started documenting their GPUs.
That level of "support" from Apple for running Linux bare-metal on Apple Silicon would be an improvement from the status quo, and in practice would probably be sufficient to get good drivers written and upstreamed in short order, given how much interest there is in running Linux on these devices.
We're talking about "people walking into the Genius Bar expecting help with Linux" support. This isn't a philosophical discussion about what support is; there's literally a specific thing being discussed here.
That's one of the several forms of support under discussion, under the specious claim that it would become the expected level of support as soon as Apple declared any level of support for Linux. But as the comments you're refusing to understand have explained, Apple could meaningfully "support" Linux in the form of providing hardware documentation, without making any promises to help any customers troubleshoot Linux running on that hardware.
> What do you mean by needed? A lock-in is more profitable so is needed to maximise profits.
You can't lock in Linux users, because the vast majority of them won't switch to macOS and the ecosystem at large. This is simply a currently untapped market they could easily almost entirely own if they wanted to. With growing Linux popularity, an extra 3-4% of the laptop market share is nothing they can ignore in front of shareholders.
I am not convinced they would "entirely own" the market - they have a small range of hardware. Even less so in the long term. That extra few percentage points would be a lot less profitable as they would only have the margin on extra hardware sales so would not add much to profits - not enough for shareholders to care about.
It also risks existing users switching to Linux, which could be a huge loss. Apple has a very loyal user base who do not try anything else, and the last thing they want to do is risk encouraging them to try alternatives. Losses could be quite significant: if an existing user switches to Linux, not only might you lose software and services sales, but you also risk losing future hardware sales (a longer replacement cycle, and no barrier to switching to other hardware).
> I am not convinced they would "entirely own" the market - they have a small range of hardware. Even less so in the long term. That extra few percentage points would be a lot less profitable as they would only have the margin on extra hardware sales so would not add much to profits - not enough for shareholders to care about.
I am aware of that, but there's another factor here: accelerating Windows users' switch to Linux on Apple hardware. Those Linux MacBooks would be killer devices that nothing in the Windows world could compete against! I mean, we can all agree the tech social media would go bonkers over that, wouldn't they? If a couple of YouTubers were able to bump those Linux numbers significantly and spearhead gamers questioning their choices, imagine the dent Apple would make. I am absolutely certain Apple would gain a couple of extra percentage points with Linux-on-Apple devices within the first year and make Microsoft shit their pants in the process.
> It also risks existing users switching to Linux, which could be a huge loss. Apple has a very loyal user base who do not try anything else, and the last thing they want to do is risk encouraging them to try alternatives.
Aren't you contradicting yourself here a bit? If they're very loyal, there isn't much risk of them switching, is there?
But yeah, Product Cannibalization is always a risk, though it doesn't mean they couldn't actually embrace Linux and offer ecosystem integration there. iCloud integration? Sure, why not? iPhone integration? Why not? Apple TV app? Again, especially to attract those Windows users making a switch, who are much more used to paying for services and software?
Heck, they could even port the AppStore over and improve Swift's cross-platform compatibility, especially considering Swift is fairly cross-platform already. I doubt many software products would get ported, though. Besides, the macOS AppStore is not a huge earner for Apple, considering the platform is open, unlike iOS, so macOS users switching to Linux needn't imply a significant loss of income from ecosystem spending. Also, many loyal macOS users would likely dual-boot and be happy to continue buying and using macOS-exclusive software as needed.
This isn't unrealistic; I seriously think it's only a matter of time before those numbers start making sense for Apple. Also, if the US administration changes, both US and EU regulatory bodies will be back on Big Tech's ass, and for Apple, opening to Linux to say "hey, we're pretty open" is another win.
> Aren't you contradicting yourself here a bit? If they're very loyal, there isn't much risk of them switching, is there?
That needs clarification. They are loyal because they do not try anything else and often make assumptions that other OSes are worse than they actually are. They often assume a lot of features (e.g. shared clipboards across devices) are Apple only. They will not take the risk of buying non-Apple hardware to try another OS.
> Product Cannibalization is always a risk, though it doesn't mean they couldn't actually embrace Linux and offer ecosystem integration there. iCloud integration?
It reduces the lock-in they have with existing customers. Having that lock-in over the whole stack is what keeps them in the ecosystem.
> Also, if the US administration changes, both US and EU regulatory bodies will be back on Big Tech's ass, and for Apple, opening to Linux to say "hey, we're pretty open" is another win.
I have less faith in the regulators than that. The push to open has never been that strong. No one has challenged things like limiting software installation to the app store, and Google is confident enough that no one will that it's switching to the same model with Android in a few months' time.
> Besides, macOS AppStore is not a huge earner for Apple, considering the platform is open, unlike iOS, so macOS users switching to Linux don't have to imply a significant loss of income from ecosystem spending
Not yet. They have the option of gradually making "side loading" harder (for our own security, of course) and increasing that profit.
> the moment they officially acknowledge Linux support, it becomes a support surface
Untrue. There are no obligations on other hardware vendors, yet you can sometimes get good drivers from them, or at least specs. I think Apple indeed wants their hardware to fade out to force buying another. Imagine that 20% of your returning customers no longer return after 3-5 years of planned obsolescence.
I am baffled by how commonly people parrot that flawed logic. Hint: by not selling those laptops to Linux users they're not making money at all, neither on hardware nor on services.
By selling laptops to users who will never spend on the highly profitable recurring revenue stream, Apple would be depleting precious RAM stock for a tiny one-time profit.
So you're suggesting a company would rather not sell a product now and instead wait until a "proper" Apple customer is ready for an upgrade? A product that has a significant profit margin upfront?
Also, have you heard about Apple's multi-billion RAM contracts they sign every few years to lock in the prices and supply?
It's crazy. I have something like 120 personal tools at this point and the pattern you describe is exactly right. The bottleneck moved from implementation to context switching. I started keeping a markdown file at the root of every project that captures state and next steps whenever I stop working on it, purely so I can resume without the 20-minute "wait where was I" tax.
There's just no pressure to handle edge cases or write docs for people who'll never use it. Just solve exactly your problem and move on.
> I started keeping a markdown file at the root of every project that captures state and next steps whenever I stop working on it, purely so I can resume without the 20-minute "wait where was I" tax.
I wonder whether there could be an AI autocomplete specifically for the task of helping you with the markdown file (and collecting your thoughts and writing prompts in general). Not an agent since that wouldn't really save time, but actually an autocomplete.
Maybe a small, specially trained local model running at hyper-fast speeds, with your project context already baked in via prefix caching (some larger model having summarized the context beforehand to feed to the small one), so that as you type this file it reuses the same prompt prefix over and over to suggest completions that actually make sense.
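A toy sketch of what that prefix reuse could buy you. Nothing here is a real model API: the class name and the `_encode` stand-in are invented, and the "cost" is simulated; the point is just that the shared project context gets encoded once while each keystroke only pays for its suffix.

```python
# Toy sketch of the prefix-caching idea. "_encode" stands in for a model's
# prompt processing; a cached prefix lets you skip re-running it per request.

class PrefixCachedCompleter:
    def __init__(self, project_context: str):
        self.encode_calls = 0
        # The expensive part: encode the shared project context exactly once.
        self._prefix_state = self._encode(project_context)

    def _encode(self, text: str) -> int:
        self.encode_calls += 1
        return len(text)  # placeholder for a model's cached prompt state

    def suggest(self, typed: str) -> str:
        # Each request reuses the cached prefix and only encodes the suffix.
        _ = self._prefix_state + self._encode(typed)
        return typed + " ..."  # placeholder completion

completer = PrefixCachedCompleter("summary of the whole project, precomputed")
completer.suggest("Next steps: wire up")
completer.suggest("Next steps: wire up the parser")
print(completer.encode_calls)  # 3: one for the prefix, one per suggestion
```

In a real system the cached state would be the model's KV cache for the prompt prefix, which is why the same prefix has to be reused byte-for-byte.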
> It's crazy. I have something like 120 personal tools at this point and the pattern you describe is exactly right. The bottleneck moved from implementation to context switching. I started keeping a markdown file at the root of every project that captures state and next steps whenever I stop working on it, purely so I can resume without the 20-minute "wait where was I" tax.
I sure hope companies double down on leetcode nonsense, because I really don’t have any capacity to compete with this level of ADHD.
Itch.io allows for browser-based app distribution and is probably a better path to a large audience—especially those interested in the two examples you listed—than a custom domain is.
It's not well-known, but Itch's offline Steam equivalent (<https://itch.io/app>) is also open source.
This is what keeps me coming back to HN. Someone spent years recreating woodcut prints pixel by pixel on a Quadra 700 using Aldus SuperPaint at 512x342. I feel like the constraint is what caused it to be. The 1-bit format forces you to solve every gradient and texture with pure composition, which means you can't cheat with color or resolution. I forget who said it, but constraints breed creativity.
Not just TUIs; the whole stack is converging back to text. I run ~15 personal tools, and every one that survived past the first month stores its data as JSON/markdown in git repos.
Text in git gives you versioning, sync, grep, and you can hand the whole thing to an LLM with zero serialization. It's perfect for me.
I've been running a variation of this for ~6 months. What seems to work: a background process that reads conversation transcripts after sessions end and then extracts decisions/rejected approaches into structured markdown. I review before I promote it into the context.
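A minimal sketch of that extraction step. The transcript format and the `DECISION:`/`REJECTED:` marker lines are invented for illustration; the setup described above presumably uses an LLM pass over the transcript rather than keyword matching, with the output reviewed before promotion.

```python
# Minimal sketch: pull decision/rejection lines out of a session transcript
# into reviewable markdown. The DECISION:/REJECTED: markers are hypothetical.

def extract_notes(transcript: str) -> str:
    decisions, rejected = [], []
    for raw in transcript.splitlines():
        line = raw.strip()
        if line.startswith("DECISION:"):
            decisions.append(line[len("DECISION:"):].strip())
        elif line.startswith("REJECTED:"):
            rejected.append(line[len("REJECTED:"):].strip())
    sections = ["## Decisions"] + [f"- {d}" for d in decisions]
    sections += ["", "## Rejected approaches"] + [f"- {r}" for r in rejected]
    return "\n".join(sections)

transcript = """\
user: should state live in sqlite or markdown?
DECISION: store state as markdown at the repo root
REJECTED: sqlite sidecar, too much ceremony for a personal tool
"""
print(extract_notes(transcript))
```

The markdown output slots straight into the per-project state file, and since it's plain text in git, the "review before promote" step is just a diff.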
I leave sessions idle for hours constantly - that's my primary workflow. If resuming a 900k context session eats my rate limit, fine, show me the cost and let me decide whether to /clear or push through. You already show a banner suggesting /clear at high context - just do the same thing here instead of silently lobotomizing the model.