Hacker News | whizzter's comments

Masks didn't work as an immediate cure for the wearer, but they were never meant to be that; they were always a multiplier, and even a multiplier with only 30% efficiency translates to roughly a 4x reduction in spread across 4 levels of transmission.
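
Back-of-the-envelope, assuming each transmission step is independently cut by 30%:

    print(0.7 ** 4)  # ~0.24, i.e. roughly a 4x reduction after 4 levels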

And that reduction was there to give healthcare workers a chance to not be overwhelmed as they were for a large part of the initial pandemic.


Only if the LLM knows which inputs were connected to particular outputs; pre-digital-era or classified material might not be available, and neither are informal discussions with other experts.

Most importantly, negative but unused signals might not be available if the text never mentions them.


Challenge: provide a single example where the LLM can only provide the output and not the steps (in a text-only scenario).

An LLM can always output steps, but that doesn’t mean they are true; they are great at making up bullshit.

When the “how many ‘r’ in ‘strawberry’” question was all the rage, you could definitely get LLMs to explain the steps of counting, too. It was still wrong.
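
For reference, the ground truth is a one-liner (a trivial sketch of the counting itself, not of how a model tokenizes the word):

    print("strawberry".count("r"))  # 3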


Can you provide a single example, now with GPT 5.4 Thinking, where it makes things up in its steps? Let's try to reproduce it.

I’m pretty sure you can think of one yourself; I’m not going to play this game. Now it’s 5.4 Thinking, before that it was 5.3, before that 5.2, 5.1, 5, before that it was 4… At every stage there’s someone saying “oh, the previous model doesn’t matter, the current one is where it’s at”. And when it’s shown the current model can’t do something, there’s always some other excuse. It’s a profoundly bad-faith argument, the very definition of moving the goalposts.

I do have a number of examples to give you, but I no longer share those online so they aren’t caught and gamed. Now I share them strictly in person.


Caught and gamed? What do you mean?

He means that if the problem becomes known, the AI companies will hack in a workaround rather than solving the problem by making the model more intelligent. Given that they have been caught cheating in that way in the past, I can't blame the GP for not sharing his tests.

Ok so no example.

I bought a 16e last week with the same chipset; I just tested it and it handles real-time recording of 4K at 60fps with HEVC (the camera is "48MP", so the source is 8K camera material).

Pretty sure most of the encoding/decoding of video is handled with special circuits these days.

Now, add enough layers and it'll probably falter, but with dedicated encoding/decoding circuits combined with a modern GPU it will definitely be a usable experience: some lower-res quick pre-renders at worst, but probably real-time for most content-creator usage.
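
You can see those dedicated circuits exposed on the desktop side too; a rough sketch, assuming ffmpeg is installed (on Apple hardware the hardware HEVC encoder shows up as hevc_videotoolbox):

    import subprocess

    # List ffmpeg's registered encoders and check for the VideoToolbox-backed
    # hardware HEVC encoder (present when the OS exposes the media engine).
    encoders = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                              capture_output=True, text=True).stdout
    print("hevc_videotoolbox" in encoders)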


Same old story: too many support requests and bad actors making it hard to make money off open source.

This is one case where we really should support the original product: you can buy a perpetual licence for a pittance, and they're just 2 guys chugging along.

LibreSprite has 5,000 commits, 30 in the past year, whilst Aseprite has over 10,000 at this point.


The person you're replying to was making a clarification on the license, not arguing about the validity of changing the license or charging for it.

Libresprite is an important project because people can fork it and learn from it by extending it, and submit those patches upstream, regardless of how active it is.


I think aseprite is a perfectly fine project, but where possible, I like to use open source tools rather than proprietary tools.

I have paid for Aseprite, but on many machines I just install the old GPL version, usually available as a package. It is fine for most tasks, even if the latest version has many improvements.

A fork of the old version to have a slightly better version conveniently available in package repos would be nice. I don't think it has to catch up with Aseprite to be useful.


It's good to have open source software.

It's good to support honest and high quality proprietary software.

Aseprite offers the latter good, this offers the former good.


Yeah, that GitHub name made my spidey senses tingle; large-scale credential harvesting?

Also the use of the Google logo.

Edit: Oh, I think this actually is an official account. Very confusing.


16GB is plenty. An intern we had ran an M1 Mac with 8GB of memory, and running a browser concurrently with Figma made everything slow down to the point where he went around asking for advice.

Could it be all the corporate tracking software? I used to have an M1 Pro MacBook with 16GB RAM when it was first released, and somehow it still felt slow when compiling.

Then I tried again on my friend's personal M1 MacBook, and it was night and day.


We're a small shop so nothing of that sort; it's more that larger Figma projects and modern web apps are hogs.

Honestly, these days compiling feels like really lightweight work in terms of memory compared to so much else.


1: Education market
2: Avoiding cannibalizing their own products

Apple has always ignored cannibalism because they would rather cannibalize their own products than have someone else do it.

Definitely. My kids at different schools had iPads and Chromebooks; the kid with the iPad was using an external keyboard most of the time, IIRC.

2: I'd say it's about the same: same RAM/SSD sizes, and maybe an M1 chip is faster than an A18, but the limitations will come from running out of RAM/disk. We had an intern who basically couldn't have a browser running together with Figma, because the RAM shortage made everything slow down to a crawl.

We had a workshop 6 months ago, and while I've always been sceptical of OpenAI et al.'s silly AGI/ASI claims, the investments have shown the way to a lot of new technology and have let out a genie that won't be put back into the bottle.

Now, extrapolating in line with how Sun servers around the year 2000 cost a fortune and can be emulated by a $5 VPS today, Apple is seeing that they can maybe grab the local LLM workloads if they act now with their integrated chip development.

But to grab that, they need developers to rely less on CUDA via Python, or to have proper hardware support for those environments, and that won't happen without the hardware being there first and the machines being buildable with enough memory (refreshing to see Apple support 128GB, even if it'll probably bleed you dry).


I feel like the push by devs towards Metal compatibility has been 10x that towards AMD. I assume that's because the majority of us run MacBooks.

I think that might be partly because on regular PCs you can just go and buy an NVidia card instead of futzing around with software issues, and those on laptops probably hope that something like ZLUDA will solve it via software shims, or that MS-backed ML APIs will.

Basically, too many choices to "focus on" makes none of them a winner except the incumbent.


Who is "us" in this case? Majority of devs that took the stack overflow survey use Windows:

https://survey.stackoverflow.co/2025/technology/#1-computer-...


That's the broad developer community. 90%+ of the engineers at Big Tech and the technorati startups are on macOS, with 5% on Linux and the other 5% on Windows.

> 90%+ of the engineers at Big Tech and the technorati startups

The US ones? Is that why we have Deepseek and other non-US open-source LLMs catching up rapidly?

World view please. The developer community is not US only.


You’ll see a lot of MacBooks in Beijing’s Zhongguancun, where all the tech companies are, but they also have a lot of students there, so who knows. You need to go out to the suburbs where Lenovo has offices to stop seeing them. I know Apple is common in Western Europe, having lived there for two years (but that was 20 years ago; I lived in China for 9 years after that).

It wouldn’t surprise me if the Deepseek people were primarily using Macs. Maybe Alibaba might be using PCs? I’m not sure.


I would also expect that the Deepseek devs are using MacBooks. If not, they may be using Linux; Windows is possible of course, but not likely IMHO. I have no knowledge about that area though, so it would be interesting to hear any primary sources or anecdotes.

Deepseek is in Hangzhou, so I guess they are. GDP/capita in Zhejiang is pretty high, even more so for HZ. If you ever visit, it feels like a pretty nice place (especially if you can get a villa around xihu). I also visited ZJU once, and it was pretty Macbooky, but I don't have as much experience there as Beijing's Zhongguancun.

I live in Germany, not the US. I mentioned this in another comment, but aside from the fact that Deepseek mainly targets Linux, I expect that the Deepseek devs are using Mac or Linux.

Source?

Working in three countries, working in big tech and startups, talking to people.

Working there?

I think it's reasonable to say that the people responding to surveys on Stack Overflow aren't the same people who work on pushing the state of the art in local LLM deployment. (which doesn't prove that that crowd is Apple-centric, of course)

Perhaps. Though Windows has been the majority share even when Stack Overflow was at its peak, and before.

It's not the whole answer, but SO came from the .NET world and focused on it first so it had a disproportionately MS heavy audience for some time. GitHub had the same issue the other way around. Ruby was one of GitHub's top five languages for its first decade for similar reasons.

The majority of devs are in the global south, I presume.

Which majority?

I certainly only use Macs when a project assigns them, and then there are plenty of developers out there whose job has nothing to do with what Apple offers.

Also, while Metal is a very cool API, I'd rather play with Vulkan, CUDA and DirectX, as would the large majority of game developers.


Honestly though, gamedevs really are among the biggest Windows stalwarts due to SDKs and older 3D software.

The only groups of developers more tied to Windows that I can think of are probably embedded people, stuck with weird hardware SDKs, and Active Directory-dependent enterprise people.

Outside of that almost everyone hip seems to want a Mac.


80% of the desktop market has to have their applications developed by someone, at least until software replicators replace them.

Everyone hip, alright, or at least those that dream of earning a salary big enough to afford the Apple taxes.

Remember there are world regions where developers barely make 1 000 euros per month.


The only "push" towards Metal compatibility there's been has been complaints on github issues. Not only has none of the work been done, absolutely nobody in their right mind wants to work on Metal compatibility. Replacing proprietary with proprietary is absolutely nobody's weekend project. or paid project.

If coding by AI were truly solved, then it would be done with AI, right?

Torch MPS support on my local MacBook outperforms a CUDA T4 on Colab.
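
If anyone wants to reproduce that kind of comparison, a minimal sketch (assuming a recent PyTorch build with the MPS backend; the sizes are arbitrary):

    import torch

    device = "mps" if torch.backends.mps.is_available() else "cpu"
    x = torch.randn(4096, 4096, device=device)
    y = x @ x  # the matmul runs on the Apple GPU via Metal when MPS is available
    print(device, y.shape)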

Except CUDA feels really cozy, because like Microsoft, NVidia understands the Developers, Developers, Developers mantra.

People always overlook that CUDA is a polyglot ecosystem: the IDE and graphical debugging experience where one can even single-step GPU code, the library ecosystem.

And as of last year, NVidia has started to take Python seriously: with the cuTile-based JIT it is now possible to write CUDA kernels in pure Python, rather than having Python generate C++ code that other tools then ingest.

They are getting ahead of Modular, with Python.
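
For comparison, kernels written directly in Python have been possible for a while via Numba's CUDA JIT; a minimal sketch of that older route (not the new cuTile API):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_one(arr):
        i = cuda.grid(1)  # absolute thread index
        if i < arr.size:
            arr[i] += 1.0

    data = np.zeros(1024, dtype=np.float32)
    d_data = cuda.to_device(data)
    add_one[(data.size + 255) // 256, 256](d_data)  # launch: (blocks, threads per block)
    print(d_data.copy_to_host()[:4])  # expect [1. 1. 1. 1.]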

