Hacker News | past | comments | ask | show | jobs | submit | dannersy's comments

Beautifully stated and I couldn't agree more. This is my experience.

I am paid far less than what I'm worth to work at a company that is as close to ethically neutral as I could find. The next step would be nonprofits, but there aren't many and a lot of times they can't justify my pay, understandably.

Good luck convincing people to not just take the money.


Honestly, it isn't particularly good at greenfield stuff either. It's a mess all around. The claims that AI is better at code than humans haven't held up in my direct experience, or for people I know at larger companies. Incident and regression rates are up exponentially and require more people to play whack-a-mole.

> The usage of LLMs is continuing to increase ~exponentially

I would like a source for that statement. Additionally, I want to know: by whom? Because it certainly isn't end users. Inflating token usage doesn't make it any more economically viable if your user base, B2B or not, hasn't grown with it. On the contrary, that is a worse scenario for providers.


> Additionally, I want to know: by whom?

1. As a consultant, pretty much every company I have worked with in the last two years is running some kind of in-house "AI Revolution": I'm talking about forming "AI Taskforce" teams, holding weekly internal "AI meetings", and pushing AI everywhere and onto everyone. Small companies, SMEs, and huge companies. From my observation it is mainly driven by C-level being obsessed with the idea that AI will replace/uplift people and that revenue will grow by either replacing people or launching features 10x quicker.

2. Have you seen software job boards recently? 9/10 (real) job listings have to do with AI. Either it is a full-on AI company (99% a thin wrapper over Anthropic/OpenAI APIs) or some other SME that needs some AI implementation done. It is truly a breath of fresh air to work for companies that have nothing to do with AI.

The biggest laugh/cry for me is those thin wrappers that go down overnight - think of all the "create your website" companies that are now completely useless since Anthropic cut out the middleman and created its own version of exactly that.


Yeah, my only hope is that this is unsustainable, admittedly for selfish reasons.

I know plenty of engineers being forced to use these tools whether they want to or not. A lot of them are okay with using AI liberally but don't particularly like generative AI and see it as pretty irresponsible (which feels truer by the week and is clear from first-hand experience). I don't know, there is a huge gradient of users, but I would argue that with previous revolutionary technologies, we didn't have to force people to use a good tool. I didn't have to be forced to use Google Search or Google Maps, tech that is now ubiquitous in Western society. It seems really suspect that suits have to mandate the use of something that is supposed to change the way we work and be a force multiplier.


From my limited experience across multiple companies, I see one very common pattern, as stated before: the process from feature idea to development is just bad. PMs do not know exactly what they want. C-level interjects in the middle and changes requirements. QAs are unsure what to test because the acceptance criteria are vague.

C-level strongly believes that AI will fix all these issues. They believe that AI will fix their broken processes.

I see a strong resemblance to "Agile Development" ~15 years ago. Extremely hyped, no one asked whether their org was even a fit for it or needed it, and most importantly, the only way to fix agile was to do more agile. Same with AI right now.


> I would like a source for that statement

The recent enterprise revenue numbers of Anthropic


So, as I said, a self-interested metric, from a party that also controls how many tokens it takes to get a desirable result from its models.

Users are willingly paying for larger volumes of tokens. You are layering your own unproven interpretation onto that. I would have arrived at the opposite interpretation given the available facts. Models are becoming more token-efficient for the same task (ChatGPT 5.3 versus 5.2, for example, halved the token count), and capabilities have shown a log relationship with token count since o1 preview was revealed in September 2024.

No, you have gone off on your own tangent. The person you're responding to is talking about money, and my point is that you're using a misleading metric. Even if the current user base is paying more for the "exponential token usage", it does not add up to the industry's cost of maintaining and building on this technology, especially since we are not taking into account what that token usage costs the provider. First you cited Anthropic as your source, but now you're talking about OpenAI's ChatGPT; OpenAI is floundering for a product and user base, and claims it will become profitable through subscriptions at numbers never seen before in a subscription business model.

If they were, they would show the evidence, because it would pull in more investment. I don't believe their claim that they make a profit on inference, especially not with reports like this coming out.

Once again, this whole article is predicated on us being at the finish line. You know who will care about how something got written? Very suddenly, it will be the org that has an issue the AI fails to fix, or that you don't understand well enough to fix within a span of time they deem reasonable. That has been the battle since the beginning of software, and the only thing you have to combat it is your understanding.

I am still baffled by engineers' and developers' use of AI. I see failure after failure on anything other than some novelty one-shotted tool, and even then folks are bending over backwards to find the value, because that solution is always a shallow representation of what it needs to be.

My projects are humming away, handmade. They have bugs, but when they show up, I more often than not know exactly what to do. My counterparts have AI refactoring wild amounts of code for issues that could be solved with a few lines of adjustment.

TL;DR I feel like I live in a different reality than those who write these blogs.


You do not write Rust, yet you make absurd blanket claims about Rust being overcomplicated or not solving your problems. The comparison of Rust to PHP is also ridiculous for so many reasons that it might not be worth discussing, because it seems bad faith from the start.

You're missing a very fundamental point here, and this is usually something I find with long-time C programmers. C is a great but old language, with decades of accumulated context, tooling, and library knowledge that make getting into writing C substantially more difficult than, say, Rust. We are still humans, and Rust isn't trying to fix C's memory handling from a purely technical point of view so much as from a programmer's one. Your comment about solving memory bugs is predicated on perfect human usage of C never writing the bug in the first place, which is precisely one of the many problems Rust is looking to solve.
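A minimal sketch of that point: the use-after-move below is a whole class of bug (analogous to use-after-free in C) that Rust refuses to compile rather than trusting the programmer to avoid. The function names are illustrative, not from any real codebase.

```rust
// A value passed by ownership is freed when the callee finishes with it.
fn consume(s: String) -> usize {
    s.len()
    // `s` is dropped (its heap buffer freed) here
}

fn main() {
    let s = String::from("hello");
    let n = consume(s); // ownership of `s` moves into `consume`
    // println!("{}", s); // compile error E0382: use of moved value `s`.
    // The equivalent C (free() then read) compiles fine and fails at runtime.
    assert_eq!(n, 5);
    println!("len = {}", n);
}
```

The point isn't that a careful C programmer couldn't avoid this; it's that the compiler, not the human, is the one required to be perfect.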


Is this another AI article? What is said about Rust here has been said over and over again, and this brings nothing new to the table. These pieces also always seem to be written from a place of ignorance. If you're writing "high-level Rust", the cost of clone or Arc or whatever is negligible. If you're writing an HTTP service, your clone will be so fast it makes literally zero difference in the scope of your IO-bound service.
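To make the clone point concrete, here's a minimal sketch (type and handler names are made up for illustration): cloning an Arc handle per request only bumps an atomic reference count, it never copies the underlying data, which is why it vanishes next to network IO.

```rust
use std::sync::Arc;

// Shared, read-only state of the kind an HTTP service hands to each handler.
struct Config {
    base_url: String,
}

// Each "request handler" gets its own cheap handle to the shared config.
fn handle_request(cfg: Arc<Config>) -> String {
    format!("GET {}/health", cfg.base_url)
}

fn main() {
    let cfg = Arc::new(Config {
        base_url: "https://api.example.com".to_string(),
    });
    // Three "requests", each with an Arc::clone: refcount bumps, no data copy.
    let responses: Vec<String> =
        (0..3).map(|_| handle_request(Arc::clone(&cfg))).collect();
    assert_eq!(responses.len(), 3);
    // All per-request handles have been dropped again.
    assert_eq!(Arc::strong_count(&cfg), 1);
    println!("{}", responses[0]);
}
```

A refcount increment is on the order of nanoseconds; a network round trip is on the order of milliseconds, so the clone disappears into the noise.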

Another observation is developer experience. Again, have you written _any_ Rust? I would argue that the reason Rust is such a joy to use is that the compile-time errors are amazing, the way docs are handled is amazing, and so on. I know the ecosystem for something like TypeScript is worlds better, but have you ever tried to _really_ understand what shenanigans are going on behind the scenes in some random hook from, say, React? Their docs are dismal and sparse and make assumptions about what they think you should know. Rust? You go to the generated docs, read the overview of what they think you should know, and if that isn't enough, you can click "source" to dive deeper. What more could I possibly want?

Perhaps I'm just triggered. The discussion around Rust always seems fad- or hype-driven and almost always has very little to do with what it's actually like to use. If I have to read about compile times again, I'm going to scream. Of all the languages I've used, it is one of the best when a mature project needs new features or refactors. This is mentioned in some articles, but mostly we hear about "velocity" from some one-dimensional perspective, usually from someone in the web space, where it's arguable whether that person should even bother trying to use Rust in the first place.

Apologies for the rant, but at this point Rust as a buzzword for article engagement has become tiring, because it seems clear to me that these people aren't _actually_ interested in Rust at all. Who gives a shit what someone on Reddit thinks of your use of clone??


What triggered me was the proposed Arc<dyn Trait> as a quick fix. I was high-level Rusting along the learning curve, as the article describes, until I stumbled upon dyn-compatible types and object safety.

It is too easy to trap yourself by sprinkling Sized and derive(Hash) on top of your neat little type hierarchy and then realizing that this tight abstraction is tainted and you can't use dyn anymore. As a follow-up trigger: this was when I learned derive macros :)
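A small sketch of the trap being described (the Shape/Square/Circle types are invented for illustration): a trait works behind `Arc<dyn Trait>` only while it stays dyn-compatible; adding something like a generic method quietly takes that away.

```rust
use std::sync::Arc;

// A dyn-compatible trait: no generic methods, no `Self: Sized` requirement.
trait Shape {
    fn area(&self) -> f64;
    // Uncommenting this generic method would make the trait NOT
    // dyn-compatible, and `Arc<dyn Shape>` below would stop compiling:
    // fn scaled<T: Into<f64>>(&self, by: T) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}
impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 }
}

fn main() {
    // Heterogeneous collection via trait objects, the "quick fix" in question.
    let shapes: Vec<Arc<dyn Shape>> =
        vec![Arc::new(Square(2.0)), Arc::new(Circle(1.0))];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    assert!((total - (4.0 + std::f64::consts::PI)).abs() < 1e-9);
    println!("total area = {:.3}", total);
}
```

The frustrating part is that the breakage shows up far from the cause: the trait definition compiles fine, and only the `Arc<dyn Shape>` use site errors out.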


This is great news, and thank you for your work.


Five percent of total users is a substantial number of consumers, and some would argue a non-trivial amount of market share to ignore when making a product.

It also goes without saying that the more adoption we see, the better these alternatives will get, as consumers and businesses come to view Linux as worth the investment.


Exactly, 5% isn't much, but it's enough to compel developers to make sure their game runs well through Proton, which is all that's necessary these days. Native ports aren't really worth it, especially if they aren't going to be well maintained in the long run (cough, Valve, cough).

