Hacker News | nodja's comments

> It's like people let their hate of AI and LLM bubble blind them, and their brains can't compartmentalize good from bad news anymore.

DLSS is also AI and people like it.

People don't like framegen because the manufacturers are not being honest about it and are using it for deceptive hype marketing. Anyone with a brain knows that it introduces latency and is only useful if you're already at 40+ FPS; we also know that companies will use it to pad benchmarks. NVIDIA themselves said that the 5070 had 4090 performance because it supports framegen.
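
Back-of-the-envelope, assuming (simplistically) that interpolation-based framegen has to hold back roughly one real frame before it can present anything:

    # Rough model: framegen buffers ~1 real frame, so the added input
    # latency is about one real frame time (plus generation overhead).
    for base_fps in (30, 40, 60, 120):
        frame_time_ms = 1000 / base_fps
        print(f"{base_fps} FPS base -> ~{frame_time_ms:.0f} ms extra latency")
    # 30 FPS base -> ~33 ms extra latency
    # 40 FPS base -> ~25 ms extra latency
    # 60 FPS base -> ~17 ms extra latency
    # 120 FPS base -> ~8 ms extra latency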


> we also know that companies will use it to pad benchmarks.

Unlike Nvidia, Intel explicitly doesn't use it to pad benchmarks.


XeSS is also like DLSS, it's not just frame gen.

> A year or more ago, I read that both Anthropic and OpenAI were losing money on every single request even for their paid subscribers, and I don't know if that has changed with more efficient hardware/software improvements/caching.

This is obviously not true; you can use real data and common sense.

Just look up a similarly sized open-weights model on openrouter and compare the prices. You'll note the similarly sized model is often much cheaper than what anthropic/openai charge.

Example: let's compare the claude 4 models with deepseek. Claude 4 is ~400B params, so it's best to compare with something like deepseek V3, which is 680B params.

Even if we compare the cheapest claude model to the most expensive deepseek provider, claude charges $1/M for input and $5/M for output, while deepseek providers charge $0.4/M and $1.2/M, around a fifth of the price. You can get it as cheap as $0.27/M input and $0.40/M output.
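
Spelling that arithmetic out (the figures are the per-million-token prices quoted above):

    # Per-million-token prices quoted above, in USD.
    claude = {"in": 1.00, "out": 5.00}    # cheapest claude model
    ds_max = {"in": 0.40, "out": 1.20}    # most expensive deepseek provider
    ds_min = {"in": 0.27, "out": 0.40}    # cheapest deepseek provider

    for name, p in (("deepseek max", ds_max), ("deepseek min", ds_min)):
        print(f"{name}: input {p['in']/claude['in']:.0%}, "
              f"output {p['out']/claude['out']:.0%} of claude's price")
    # deepseek max: input 40%, output 24% of claude's price
    # deepseek min: input 27%, output 8% of claude's price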

As you can see, even if we skew things overly in favor of claude, the story is clear: claude token prices are much higher than they could be. The difference in prices is because anthropic also needs to pay for training costs, while openrouter providers just need to worry about making serving profitable. Deepseek is also not as capable as claude, which puts further downward pressure on its prices.

There's still a chance that anthropic/openai models are losing money on inference, if for example they're somehow much larger than expected (the 400B param figure is not official, just speculation based on how the model performs). This also only takes API prices into account; subscriptions and free users will of course skew the real profitability numbers.

Price sources:

https://openrouter.ai/deepseek/deepseek-v3.2-speciale

https://claude.com/pricing#api


> This is obviously not true; you can use real data and common sense.

It isn't "common sense" at all. You're comparing several companies losing money, to one another, and suggesting that they're obviously making money because one is under-cutting another more aggressively.

LLM/AI ventures are all currently underwater, with massive VC or similar money flowing in, and they all need training data from users, so it is very reasonable to speculate that they're in loss-leader mode.


Doing some math in my head, buying the GPUs at retail price, it would take probably around half a year to make the money back, maybe more depending on how expensive electricity is in the area you're serving from. So I don't know where this "losing money" rhetoric is coming from. It's probably harder to source the actual GPUs than to make money off them.
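
A sketch of that napkin math; every number below is an assumption picked purely for illustration, not a real quote:

    # All figures are hypothetical assumptions.
    gpu_cost = 30_000           # USD, one datacenter GPU at retail
    tokens_per_sec = 2_500      # sustained batched throughput
    price_per_m = 1.00          # USD charged per million output tokens
    power_per_hour = 0.20       # USD electricity + cooling per GPU-hour

    revenue_per_hour = tokens_per_sec * 3600 / 1e6 * price_per_m  # $9.00
    margin_per_hour = revenue_per_hour - power_per_hour           # $8.80
    payback_days = gpu_cost / margin_per_hour / 24
    print(f"payback in ~{payback_days:.0f} days")  # ~142 days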


> So I don't know where this "losing money" rhetoric is coming from.

https://www.dbresearch.com/PROD/RI-PROD/PROD0000000000611818...


electricity


There are companies that only serve open-weight models and don't do any training, so they must be profitable? Check for example this list: https://openrouter.ai/meta-llama/llama-3.3-70b-instruct/prov...


To borrow a concept from cloud server rental, there's also the factor of overselling. Most open-source LLM operators probably oversell quite a bit - they don't scale up resources as fast as OpenAI/Anthropic when requests increase. I notice many openrouter providers are noticeably faster during off hours.

In other words, it's not just the model size, but also the concurrent load and how many GPUs you turn on at any time. I bet the big players' costs are quite a bit higher than the numbers on openrouter, even for comparable parameter counts.
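
A tiny sketch of why utilization dominates the cost per token (all numbers invented for illustration):

    gpu_hour_cost = 2.50           # USD, assumed all-in cost per GPU-hour
    peak_tokens_per_sec = 2_500    # assumed throughput at full batch

    for utilization in (1.0, 0.5, 0.2):
        tokens_per_hour = peak_tokens_per_sec * 3600 * utilization
        cost_per_m = gpu_hour_cost / (tokens_per_hour / 1e6)
        print(f"{utilization:.0%} utilized -> ${cost_per_m:.2f}/M tokens")
    # 100% utilized -> $0.28/M tokens
    # 50% utilized -> $0.56/M tokens
    # 20% utilized -> $1.39/M tokens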


There is https://huggingface.co/spaces/hf-audio/open_asr_leaderboard but it hasn't been updated in half a year.


> My <device> already comes with built in <software> why would I install anything else?

Top voted comment on hacker news btw.

Ok, that was probably unnecessarily snarky, and I hope you don't take offense. But it seems the hacker spirit has been fading from this site; we used to replace stuff with inferior versions just to see if we could.


Unfortunately, a quick google seems to say no one's ported Doom to the Comma hardware just yet!


If geohot is involved, it wouldn't surprise me if it was the first proof of concept for each new hardware variant.


Speaking of hacker spirit... Comma has a lot of restrictions on what software you can load / what changes you can make to the software.


No it doesn't?

You can run stock, or any fork simply by providing the URL of the version you want to run.

Where exactly is the restriction?


https://github.com/commaai/openpilot/blob/master/docs/SAFETY... - they will ban your device from the training set if you run an unsafe fork.


That seems very reasonable, doesn't it? Otherwise the fork could pollute the training data through bad modifications, and tracing that would be a pain.


What? It's literally open source, you can ssh into the thing and change whatever you want. I am running a fork of a fork of the code right now. I change things all the time.


I've been programming since ~1999 and anecdatally don't remember programmers having a culture of paying for their dev tools. On linux everything's free, and on windows I've used a plethora of freeware IDEs/compilers/etc., from turbo pascal to dev c++ (that's the name of it); later on eclipse took the stage in its buggy mess, and right before vscode there was atom. The only people I know who used visual studio either got it for free for being a student/teacher, had their job pay for it, or most commonly: pirated it.

According to this[1] site, visual studio had a 35.6% market share, tied at #1 with notepad++.

[1] https://asterisk.dynevor.org/editor-dominance.html


> or most commonly: pirated it.

Yes, I'm aware. That's the problem elucidated in the article. Developers expect everything for free, even though the price of tools relative to what they get paid to deliver products using those tools is completely trivial. This reluctance to pay for anything harms developers themselves most of all. If developers normalized a culture of paying for things they use, more developers would be able to develop their own independent software and sustain themselves without being beholden to $awful_corp_environment to pay the bills. But because developers will do anything they can to avoid paying <1 hr salary for a tool that saves them many hours, there is a huge gap between corporate professionals, who make lots of money, and open-source developers, most of whom make almost nothing, with only a relatively limited subset of independent developers able to bridge the gap and make a living producing good, non-corporate-nightmare software.

I'm pretty pro-piracy for students and such. It is an extremely good thing for learning to be as available as possible, even to those in poverty, so that they can make something better of their situation and contribute more to society than if they were locked in to low-knowledge careers solely by virtue of the random chance of their upbringing. But people who make a living off software development never graduate from the mindset of piracy. Even for open-source software, the vast majority of users never contribute to funding those projects they rely on. If we think open-source software is good for the world, why are we so opposed to anyone being able to make a living creating it? The world's corporate capture by non-free software is a direct result of our own collective actions in refusing to pay anything for anything even when we can afford to.


Well, I've been programming since 1986 and I had to buy all the compilers I used. For the Mac: Lightspeed Pascal, Lightspeed C and Metrowerks. I wish I'd had the money to buy MPW. Linux then was just a glint in Linus's eye. We didn't have the internet with easy access to pirated software, and I didn't do BBSs, so I don't know about that area.

Once I went to uni in the 90's I started using Usenet, but even then I didn't download any pirated software. Microsoft was virtually giving away Visual Studio/Visual Basic to university students. Back then I also remember reading Linus's arguments with Tanenbaum over microkernels, and this funny language called Python, with its creator unveiling/supporting it on Usenet. Around that time more and more tools were being offered for free, and as a poor student I was delighted. We also got access to free Unix tools, since we were doing work on Unix systems. Oh yeah, I remember using this cool functional language called Miranda in one of my courses but was sad that it was a paid product. And then I heard about the debut of Haskell, which was sort of a free answer to Miranda.


I remember paying something like $400-500 for Glockenspiel C++ for MS-DOS, which was based on AT&T's cfront and compiled to C code. Not long after that, Turbo C++ and Zortech C++ made things a lot easier, since they didn't compile through C, like cfront did. Those were in the $150-200 range, iirc. I also remember paying for PC-YACC from Abraxas, something like $400.


Turbo Pascal was not freeware when it was new.


It did have a remarkably low price, though - less than a tenth the price of UCSD p-System or Topspeed. Even the competitors in the hobbyist space like the original IBM/Microsoft Pascal and DRI's Pascal/MT originally cost 4-5x as much.

Really, it was responsible for as big a step change in pricing of programming tools in the 1980s as GNU, BSD, and Linux were in the 90s.


True. I never paid for it myself, but I think I got it from my uni. I had some of my fondest coding memories with it.


Back in ye olden days, prior to teh interwebs, compilers were not free and it was an assumed price of entry to programming. Pirating has always been a thing, but I've paid for more than one compiler in my life and I wasn't exactly flush with cash.


As a heavy Borland user, I am quite sure none of their software was freeware.

Yes they had educational discounts, but that was it.


While many in the hobbyist developer market, like the hobbyist graphic design market, pirated their tools, the corporate market did pay for theirs.

The issue here is that developers aren't convincing their companies to pay for libraries now, partly because a lot of the tools are now free.


The painful process and paperwork needed to obtain a license for anything in a corporate environment is usually the biggest factor imho.



I think you need to either feed it all of ./docs or give your agent access to those files so it can read them as reference. The MEMORY.md file you posted mentions ./docs/CANONICAL_STYLE.md and ./docs/LLM_CORE_SUBSET.md, and they in turn indirectly mention other features and files inside the docs folder.
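
Even something as simple as concatenating the whole folder into the context might do it (a sketch assuming a plain-Python harness and the ./docs layout mentioned above):

    from pathlib import Path

    # Pull every markdown file under ./docs into one context blob, so the
    # agent sees CANONICAL_STYLE.md, LLM_CORE_SUBSET.md, and everything
    # they reference, not just MEMORY.md.
    docs = sorted(Path("docs").glob("**/*.md"))
    context = "\n\n".join(
        f"=== {p} ===\n{p.read_text(encoding='utf-8')}" for p in docs
    )
    print(f"{len(docs)} files, {len(context)} chars of context")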


Yeah, I think you're right about that.

The thing that really unlocked it was Claude being able to run a file listing against nanolang/examples and then start picking through the examples that were most relevant to figuring out the syntax: https://gisthost.github.io/?9696da6882cb6596be6a9d5196e8a7a5...


It's insane to me that in 2008 a bunch of pervs decentralized storage and made hentai@home to host hentai comics, yet here we are almost 20 years later and we haven't generalized this solution. Yes, I'm aware of the privacy issues h@h has (as a hoster you're exposing your real IP, and people reading comics are exposing their IP to you), but those can be solved with tunnels; the real value is the redundant storage.


The illegal side of hosting, sharing, and mirroring technology, as it were, is much more free to chase technical excellence at all costs.

There are lessons to be learned in that. For example, for that population, bandwidth efficiency and information leakage control invite solutions that are suboptimal for an organization that would build market share on licensing deals and growth maximization.

Without an overriding commercial growth directive you also align development incentives differently.


I was hopeful a few years ago, when I heard of chia coin, that it would allow distributed internet storage for a price.

Users upload their encrypted data to miners, along with a negotiated fee for a duration of storage, say 90d. The uploaders take specific hashes of the complete data, plus some randomized sub-hashes of internal chunks. Periodically an agent requests these chunks, hashes them, and releases a fraction of the payment if the hash is correct.

That's a basic sketch; more details would have to be settled. But "miners" would be free to delete data if payment was no longer available on the chain. Or, additionally, they could be paid by downloaders instead of uploaders for hoarding more obscure chunks that aren't widely available.
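
A minimal sketch of that challenge flow (names and chunk size made up; a real protocol would also need salted challenges or Merkle proofs so miners can't cheat by storing only the hashes):

    import hashlib, secrets

    CHUNK = 4096  # illustrative chunk size

    def chunk_hashes(data: bytes) -> list[bytes]:
        # The uploader computes and keeps these before handing data off.
        return [hashlib.sha256(data[i:i + CHUNK]).digest()
                for i in range(0, len(data), CHUNK)]

    def audit(miner_data: bytes, kept: list[bytes]) -> bool:
        # Ask the miner for a random chunk and check it against the hash
        # we kept; release the payment fraction only if it matches.
        i = secrets.randbelow(len(kept))
        chunk = miner_data[i * CHUNK:(i + 1) * CHUNK]  # miner's response
        return hashlib.sha256(chunk).digest() == kept[i]

    data = b"encrypted blob " * 4000
    kept = chunk_hashes(data)
    assert audit(data, kept)                      # honest miner passes
    assert not audit(b"\x00" * len(data), kept)   # cheater fails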


> a bunch of pervs

Not everyone who watches hentai is a perv


Yeah sure, they are just into 9000 year old dragons...


Just don't look up what the word "hentai" means ;)


At least hentai isn't necessarily lolisho (although a lot of it is...)


I've run the first of the sample images through 3 captioning models: an old ViT-based booru-style tagger, a more recent one, and qwen 3 omni. All models successfully identified visual features of the image with no false positives at significant thresholds (>0.3 confidence).

I don't know what nightshade is supposed to do, but the fact that it doesn't affect the synthetic labeling of data at all leads me to believe image model trainers will give close to zero consideration to it when training new models.
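
For reference, the threshold check is just filtering the tagger's per-tag scores; toy numbers below (the tags and confidences are invented, but booru-style taggers return a tag-to-score mapping in roughly this shape):

    # Hypothetical tagger output: tag -> confidence.
    predictions = {
        "1girl": 0.97,
        "outdoors": 0.61,
        "blue_sky": 0.42,
        "cat": 0.04,  # successful poisoning would push wrong tags above threshold
    }

    THRESHOLD = 0.3  # the "significant threshold" mentioned above

    kept = {t: c for t, c in predictions.items() if c >= THRESHOLD}
    print(kept)  # {'1girl': 0.97, 'outdoors': 0.61, 'blue_sky': 0.42}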


On one hand, it has made json more ubiquitous due to its frozen state. On the other hand, it forces everyone to move to something else and fragments progress. It would be much easier for people to move to a json 2.0 than to deal with hundreds of json + x standards. Everyone is just reinventing json with their own little twist, and it makes me sad that we haven't standardized on a single solution that doesn't go super crazy like xml.

I don't disagree with the choice, but seeing how things turned out I can't help but look at the greener grass on the other side.

