There is a whole lot of crap out there. But I think the Internet HAS been a game changer in lifting people out of poverty and increasing standards of living. Communication is awesome. And although there is a lot of propaganda (which there has always been), there is now also a lot of truth and counterclaims. It’s no longer just the rich who have access to information (think of farmers guessing at what their crop was worth). I _hope_ AI will do similarly, but I have my doubts on that one.
The irony of the moment is that a billionaire in a Michelin-star restaurant and a homeless person on the street are scrolling through the same Instagram feed.
Technology doesn't do anything by itself. Its effect on the fabric of society depends on how it is applied, and on whether the benefits are distributed to everyone (to some extent) or not. It's possible that the technology is just used to displace people, who lose their livelihoods and income while the benefits flow to the capital-owning class. In fact, all the great productivity boosts that have taken place over the decades since WW2 have mostly benefited the capital class. That's why a worker still grinds away 8 hours a day and barely scrapes by while the "elites" splurge on yachts, mansions and space travel for leisure.
it’s just another tool. lots of people didn’t want to use compilers and got left behind. the world moved on.
i’m older than you and doing fine. it’s just another tech upheaval, we’ve been through plenty.
yes it gets tiring and at some point you find your way off the treadmill. but it’s really not that hard to stay on, especially if you have the experience.
Do what we in corporate/banking/other sociopathic envs did decades ago: find another source of fulfillment and happiness. For me it's adventures and sports and kids; it could be something else for the next Joe.
Or just code as you want as a hobby, unrestrained, for whatever you need or makes you happy.
Do you have management pressure to use these tools? I don’t have any data, but virtually every software engineer I talk to regularly, myself included, is feeling or has felt pressure to use these tools.
FWIW, I'm responsible for our engineering team, and I'm the one starting to put some gentle pressure on the developers right now. Velocity used to be one of the bigger issues we had: features would be in development for weeks, while customers, product management, and engineers iterated on them, until they were finally deemed stable enough and shipped. With AI, we can shorten that cycle considerably and get stuff out of the door in days or even hours instead. Doing so requires adapting your processes accordingly: giving up some control over the details, taking good care of tests, and doing proper code reviews.
Given all that, I just cannot ignore AI as a development tool. There is no good justification I can give the rest of the company for why we would not incorporate AI tools into our workflows, and this also means I cannot leave it up to individual developers on whether they want to use AI or not.
This pains me a lot: On the one hand, it feels irresponsible to the junior developers and their education to let them outsource thinking; on the other hand, we're not a charity fund but a company that needs to make money. Also, many of us (me included) got into this career for the joy of creating. Nobody anticipated this could stop being part of the deal, but here we are.
> There is no good justification I can give the rest of the company for why we would not incorporate AI tools
Is there definitive proof of long term productivity gains with no detriment to defects, future velocity, etc?
If not, I’d say you’re irresponsible at best to put this much trust in a tool that’s been around for a few months (at the current level). Absolutely encourage experimentation, but there’s a trillion-dollar marketing hype machine in overdrive right now. Your job is to remind people of that.
So you struggled to improve velocity without AI tools; are you worried that using the AI tools as a crutch will just lead to a death spiral of bad code being shipped increasingly faster? I've only ever seen the AI adoption approach work on fully functional teams.
The concern as well is that when you force AI onto developers, they eventually throw their hands up, say "well, they don't care about code quality anymore, so neither should I", and start shipping absolute vibeslop.
> I've only ever seen the AI adoption approach work on fully functional teams.
It's not that the team isn't functioning, it's that it's a pretty diverse team in terms of experience, which means things just used to take a while to finish.
> The concern as well is that when you force AI onto developers, they eventually throw their hands up, say "well, they don't care about code quality anymore, so neither should I", and start shipping absolute vibeslop.
This is IMHO avoidable by emphasising code reviews and automated tooling; my general policy is still that everyone is responsible for what they push, period. So absolute vibeslop isn't what I'm seeing, rather an efficiency miscalculation on which parts should be written by humans and which by the AI.
In my experience the bottleneck was never with writing code. If that's the case, how can a developer be expected to increase their output while still being responsible in the same way? Seems like a recipe for burnout.
The vast majority of workplaces have never cared about code quality (with the exception being the actual engineers that write the code). Everyone else has no clue what programmers do, other than, "they write arcane symbols, and our product works, and our business continues to function". They do not know that code can even _have_ quality. It does not help that they only ever have to interact with engineering when something is going _wrong_, which conditions them to associate engineers with stress and failure and angry customers. Nobody ever thinks of engineers when everything is going well. The LLM mandates stem from a combination of mistrust and resentment.
I know, from second hand experience, that long before coding LLMs became a thing, engineers would ship slop when it became clear that their superiors cared about deadlines uber alles (i.e. not shipping slop would be the same thing as quitting, but without the paycheck -- slop code is often a form of quiet quitting).
Most people would _prefer_ to be able to "program" their entire business from a spreadsheet. LLMs have enabled them to get involved, and they cannot understand why engineers reject this "help" (it is for the same reason that a pilot would reject a copilot that thinks he knows how to fly because he played a flight simulator or read Jonathan Livingston Seagull; flight simulators are used in training too, but they are not a substitute for actual piloting experience). This refusal and resistance feeds into the mistrust and resentment. We live in a world where managers and administrators do not understand what they are managing and administrating, nor do they think that this is part of their job description. In the worst cases, they believe their job is to extract compliance from their subordinates.
There is a _lot_ of alpha in being part of a company where those in authority understand how the internals of the business (including software and IT!) _actually_ function. (One engineer told me that clueless yet demanding managers are, for all intents and purposes, unwitting saboteurs, and that the best a company can do about this is get him a job interview at a competitor). In some sense, the economy is just a machine for transferring wealth from those who do not know something essential, to those who do know something essential. This can veer uncomfortably close to exploitation. If we want to avoid crossing that line, we need to cultivate an economy where a lack of understanding is not seen as an _opportunity for profit_, but rather _as an opportunity for illumination_.
Your team is creating code you don't really grok to "get stuff out the door". Guaranteed a month or year from now this is going to bite you in the ass, hard.
And it is. You are going to end up with a wreck of a product and not a single person you can call upon to fix it. It is your choice and you will pay for it.
A wreck of a product is still better than being out of business by not being able to release fast enough. Unfortunately, the market in general does not reward slow high quality.
Who says that is my view of the importance of quality? My second sentence starts with "unfortunately"...
I'm just recognizing that businesses have challenges to deal with besides quality. Being able to generate revenue is just as important as software quality. And seeing how easily consumers switch to a competing product if it has a few more features, you can't neglect time to market if you want to survive as a company.
Many customers are pretty shallow: "meh, the new version looks just like the old one, nothing has changed", even if under the hood the product has significantly improved.
Yes, market dynamics are a bit of a catch-22: customers look for the best deal, while companies look to reduce costs to still make a profit; customers always want the newest features, so companies release faster, before the product is done.
This is a starker tradeoff, but still the same logic that engineering leaders have used for years to eliminate time for exploration, learning, mentoring, role-switching, and every other activity that makes a better engineer but doesn’t move tickets off the queue. These developers are all going to work somewhere else in a few years, so why should we invest in growing their skills? This isn’t a charity, after all.
I’m sure you’re smarter than that, but a lot of leaders aren’t. And that’s based on the past, when they had an established playbook they could choose to follow, not the situation we’re in now where you have to make it up as you go.
I absolutely see your point there, but I don't have a better answer. It feels like the table stakes for feature development speed have risen all of a sudden, whether we like it or not.
Well, only if the increased speed doesn't result in a quality or staffing time bomb. Which none of us really knows at this point. You could always write code faster if you don't care if it works or is maintainable (and indeed many companies work that way for a while), and you could always put your developers in a pressure cooker until they leave from burnout.
So, you have a duty of care to make a safe workplace, at least in most countries.
Consider what a job with no joy means for the ongoing mental health of your staff, when the main interaction they have all day is with an AI model that they have to boss around, with little day-to-day practice of social norms.
Depression, frustration, nonchalance, isolation, and corner cutting are going to be the likely responses.
So at the same time as you introduce new tooling, introduce the quality controls you would expect for someone utterly checked out of the process, and the human-resources policies or preventive measures to avoid your team speedrunning Godwin's law because they don't deal with people enough to remember that social niceties are important.
Examples off the top of my head of ways to do this are:
- Increased socialisation in the design process. Mandatory fun sucks, but a whiteboard party and some collaboration will bring creativity and shared ownership.
- Budget for AI-minimal or AI-free periods, where the intent is to do a chunk of work "the hard way", and have people share what they experienced or learnt.
- Make people test each other's work (manual testing) or collaborate; otherwise you will end up with a dysfunctional team that reaches for "yell in all caps to make sure the prompt sticks" as the way people talk to each other and deal with conflict.
The way to justify this to the management above you is the cost of staff turnover: advertise, interview, hire, pay market rates, equip, and train, followed six months later by secure offboarding, hardware return, and an exit interview. That means you get maybe four months of productivity out of each person and pay two months of salary in early-job mistakes, late-job not caring, or an HR debacle.
Do you or your next level up want to spend 30% more time doing this process? Or would you rather focus on generating revenue with a team that works well together and is on board for the long term?
The answer most of the time is "we want to make money, not spend it". So do the math on what staff replacement costs are and then argue for building in enough slack to the process that it costs about half of that to maintain it/train the staff/etc.
Your company is now making a "50% efficiency gain" in the HR funnel, year over year, all by simply... not turning the dial up to 10 on forced AI usage.
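To make the "do the math" step concrete, here's a rough back-of-envelope sketch; every number in it is a placeholder I invented, so swap in your own figures:

```python
# Back-of-envelope comparison of replacement cost vs. retention slack.
# All numbers are made-up placeholders; substitute your own.

monthly_cost = 10_000          # hypothetical fully loaded monthly cost of one engineer
ramp_up_months = 4             # months of roughly zero net productivity after hiring
wasted_months = 2              # early-job mistakes plus late-job "checked out" time
recruiting_overhead = 20_000   # advertising, interviews, equipment, HR/offboarding time

replacement_cost = (
    recruiting_overhead
    + monthly_cost * ramp_up_months
    + monthly_cost * wasted_months
)

# The suggestion above: spend roughly half of that on slack
# (training, exploration, a humane pace) to keep the person instead.
retention_budget = replacement_cost / 2

print(f"replacing one engineer ≈ ${replacement_cost:,.0f}")
print(f"retention slack budget ≈ ${retention_budget:,.0f} per person per cycle")
```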
I'm applying gentle pressure, not forcing everyone to use it. If necessary, I will fight for my team as much as I can, but that's not where we're headed and I would think about switching jobs if it ever is.
Having said that: The dichotomy expressed in the threads here is a bit too extreme for my taste. It's not like working with AI is pure Yes-clicking review dread; there is joy to be found in materialising your ideas out of thin air, instead of the Lego-like puzzle solving experience many developers are used to.
And as mentioned in TFA, there's risk in using it both too little and too much. This also applies to employees, of course: if I shielded junior developers from AI tools, they'd end up in their next job utterly unprepared for what may be required of them as the world keeps spinning.
> Framed like that, sounds a lot better doesn't it?
Sure does, but that's not the situation I'm in. I'm trying to figure out the local maximum of keeping my company afloat in a world where AI has kicked the PMF from under our feet to the other end of the playing field, and ensuring my team stays happy, curious, and engaged. And I'm not the only one in this spot, I suppose.
> It's not like working with AI is pure Yes-clicking review dread; there is joy to be found in materialising your ideas out of thin air
I think that's true for some developers, and not for others. My guess is that one subset of developers has more ideas than they have time/resources to implement, and they enjoy programming because they love seeing the finished product emerge. I think this subset is more likely to go into management, because it's a force multiplier for them. They're the ones getting joy out of seeing AI make their ideas into reality.
But there's another subset who enjoys programming not because they love to see a product emerge, but because they enjoy the process itself: the head-scratching, the getting past "why won't this work" to the moment when the build starts working again or the site comes back up or the UI snaps into place. It's the magic of finding, among all the possible wrong answers, the exact right combination of bits that solve the problem. This subset is not getting any joy from AI: they're seeing AI take away that whole process and turn it into the kind of work their managers and their project owners do. It's made even worse because their managers don't even understand why they're so unhappy. I think managers would do well to consider how they're going to keep these folks happy and engaged and productive, because they're the ones who are going to be fixing the production bugs introduced by their teammates' AI commits. If they've all gone off to retrain as electricians, we're going to have a problem as an industry.
You are feeling that pressure because the people who use them are more productive, and the next pressure you are going to get is to remove yourself from the loop completely.
I personally do not. But I don't work in the software industry. I write custom software in an industry that's as far away from tech as you can imagine. My management tells me what features they want, and doesn't care how it gets done. They only care that it works, and the priority is never to get a feature out fast. The priority is to never break their logistics software that's used 24/7. The deployment cycle is still fast, but bugs can be catastrophic, and it's on me to fix any bugs that crop up whenever something goes into production. Usually, when a bug filters up to me, it's within a few hours, because edge cases arise quickly. I know almost immediately what lines of code in which files are the most likely culprits. Because I wrote them, and I tested them manually, and I thought long and hard before hitting the button. If someone else (or something else) wrote them, I'd have to go hunting at the exact moment when time is critical and there's an open bug in a live deployment, and my phone is ringing and people are yelling.
The term "vibe coding" is new, but I've described what I do as "jazz coding" for a couple decades.
I just did a test project using K2.5 on opencode and, for me, it doesn’t even come close to Claude Code. I was constantly having to wrangle the model to prevent it from spewing out 1000 lines at once and it couldn’t hold the architecture in its head so it would start doing things in inconsistent ways in different parts of the project. What it created would be a real maintenance nightmare.
It’s much better than the previous open models but it’s not yet close.
You may be anthropomorphizing the model, here. Models don’t have “assumptions”; the problem is contrived and most likely there haven’t been many conversations on the internet about what to do when the car wash is really close to you (because it’s obvious to us). The training data for this problem is sparse.
I may be missing something, but this is the exact point I thought I was making as well. The training data for questions about walking or driving to car washes is very sparse; and training data for questions about walking or driving based on distance is overwhelmingly larger. So, the stat model has its output dominated by the length-of-trip analysis, while the fact that the destination is "car wash" only affects smaller parts of the answer.
I got your point because it seemed that you were precisely avoiding the anthropomorphizing and in fact were homing in on what's happening with the weights. The only way I can imagine these models handling trick questions lies beyond word prediction or reinforcement training, UNLESS the reinforcement training comes from a world simulation that is as complete as possible, including as much mechanics as possible, and the neural networks are allowed to train on that.
Like, for instance, chess engines with AI: they can train themselves simply by playing many, many games. The "world simulation" in their case is the classic chess engine architecture, but it uses the positional weights produced by the neural network. So says Gemini, anyway:
"ai chess engine architecture"
"Modern AI chess engines (e.g., Lc0, Stockfish) use
a hybrid architecture combining deep neural networks for positional evaluation with advanced search algorithms like Monte-Carlo Tree Search (MCTS) or alpha-beta pruning. They feature three core components: a neural network (often CNN-based) that analyzes board patterns (matrices) to evaluate positions, a search engine that explores move possibilities, and a Universal Chess Interface (UCI) for communication."
So with no model of the world to play with, I'm thinking the chatbot LLMs can only go with probabilities, or whatever matches the prompt best in the crazy-dimensional thing that goes on inside the neural networks. If it had access to a simple world of cars and car washes, it could run a simulation and rank its answers appropriately, and also could possibly infer, through either simulation or training on those simulations, that if you are washing a car, the operation will fail if the car is not present. I really like this car wash trick question lol
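To make that concrete, here's a toy sketch of the kind of "simple world of cars and car washes" I have in mind. It's purely my own illustration (the `Plan` and `simulate` names are invented), not a claim about how any real LLM works:

```python
# Toy "world simulation" sketch (my own illustration, not how any real LLM works):
# instead of predicting the statistically likely answer from text, we simulate
# each candidate plan in a tiny model of the world and reject plans whose goal fails.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    car_comes_along: bool   # does the car end up at the car wash?
    effort: float           # rough cost of the plan (e.g. minutes)

def simulate(plan: Plan, goal: str) -> bool:
    """Run the plan in a minimal world model and check whether the goal holds."""
    if goal == "car is washed":
        return plan.car_comes_along  # you can't wash a car that isn't there
    return False

plans = [
    Plan("walk to the car wash (it's only 100m away)", car_comes_along=False, effort=2),
    Plan("drive to the car wash", car_comes_along=True, effort=3),
]

# Keep only plans that actually achieve the goal, then pick the cheapest.
viable = [p for p in plans if simulate(p, "car is washed")]
best = min(viable, key=lambda p: p.effort)
print(best.name)  # -> "drive to the car wash"
```

Even a world model this crude rules out the "just walk there" answer, which pure next-word statistics apparently don't.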
Reasoning automata can make assumptions. Lots of algorithms make "assumptions", often with backtracking if they don't work out. There is nothing human about making assumptions.
What you might be arguing against is that LLMs are not reasoning but merely predicting text. In that case they wouldn't make assumptions. If we were talking about GPT-2, I would agree on that point. But I'm skeptical that this is still true of the current generation of LLMs.
I'd argue that "assumptions", i.e. the statistical models it uses to predict text, is basically what makes LLMs useful. The problem here is that its assumptions are naive. It only takes the distance into account, as that's what usually determines the correct response to such a question.
I think that’s still anthropomorphization. The point I’m making is that these things aren’t “assumptions” as we characterize them, not from the model’s perspective. We use assumptions as an analogy but the analogy becomes leaky when we get to the edges (like this situation).
It is not anthropomorphism. It is literally a prediction model and saying that a model "assumes" something is common parlance. This isn't new to neural models, this is a general way that we discuss all sorts of models from physical to conceptual.
And in the case of an LLM, walking a noncommutative path down a probabilistic knowledge manifold, it's incorrect to oversimplify the model's capabilities as simply parroting a training dataset. It has an internal world model and is capable of simulation.
I’m an American and my vision, fully corrected, is right at the legal borderline to get a license without restrictions. I’ve never “failed” a vision exam at the DMV; one time the clerk even said, “good enough”. (Don’t worry, I never drive, I only keep my license up to date for serious emergencies).
A serious emergency isn't going to be helped by someone with very little driving experience. I don't follow your reasoning. If it was a serious emergency who would care if you had a license?
People think about things differently. It may be that for OP, "but I don't have a license" would cause a second thought and waste time in an emergency. They may be self-aware enough to head that off.
A police officer would. The penalty for an accident might be negligent driving.
The penalty for an accident without a license is, at minimum, driving without a license. You're also not likely to be covered by insurance without one either, even if you're not at fault.
Take a person who has marginally acceptable eyesight, who never drives, put them in an emergency situation where they need to drive and you've got a recipe for much higher odds of having an accident.
Given that getting a license is an option and it conveniently doubles as a photo ID, there's really no reason not to get one.
This is one of the strangest internet myths. Every single state in America will issue a photo ID which is fully equivalent to a drivers license for every purpose other than permitting you to drive.
Also, you don't need "Real ID" to fly no matter what they say. You don't even need a photo ID at all (although they'll force you to waste time if you don't have one. I found this out when I lost mine but still had to travel.)
Not really. People are angry because it is likely their first time hearing a contrarian narrative about solar energy, which likely challenges their own sunk-cost fallacy as solar panel owners.
I have roof top solar. I have never had to clean or maintain them in any way. Same with my friends who have roof top solar. The worst I’ve heard of is a microinverter failing, which was covered by warranty.
My gut response to your post was also aggression, not because you’re preaching uncomfortable truths, but because you’re repeating fossil fuel lobbyist talking points that I’m getting really tired of seeing all over social media.
How long have you had your system? The biggest risk points are years 10-12 and then 20-24 for inverter failure and replacement, which is fixable but just stretches out your payback period.
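Rough illustration of what I mean by the replacement stretching the payback period (all figures are invented placeholders, not quotes for any real system):

```python
# Toy payback calculation; all figures are made-up placeholders.

system_cost = 15_000           # hypothetical installed cost of a rooftop system
annual_savings = 1_500         # hypothetical yearly electricity savings
inverter_replacement = 2_000   # hypothetical out-of-warranty replacement around year 10-12

simple_payback = system_cost / annual_savings
payback_with_replacement = (system_cost + inverter_replacement) / annual_savings

print(f"simple payback: {simple_payback:.1f} years")
print(f"with one inverter replacement: {payback_with_replacement:.1f} years")
```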
I'm with you, I hate the people who preach fossil fuel talking points. I also don't like the shady solar salespeople who say solar is a no-brainer; they are just pushing product to install on your roof. It is a pretty good product, but not 100%.
> The attitude itself is of course something has been designed and implemented into engineering culture by precisely the leaders you contend are scape goats to society. POSIWID.
I don’t know if this particular statement is true or not, but the number of smart people I know who thinks they’re not affected by propaganda is wild. We’re all affected by propaganda.
It’s not even the American definition. We have many exceptions, particularly using speech to cause violence or physical harm in various ways. I’m also confused by American free speech absolutists because that’s not a thing here and essentially never has been.
Of course this is all hypothetical at the moment, as the current administration doesn’t seem to care much for the law.
The phrase “its logical conclusion” is doing a lot of heavy lifting here. Why on earth would that absurdity be the logical conclusion? To me it looks like a very illogical conclusion.