erichocean's comments | Hacker News

Humans have, obviously. Just interbreeding with ancient species was enough to do it, even without separate evolution.

Autochrome [0] makes reviewing Clojure diffs from AI really easy.

[0] https://github.com/ladderlife/autochrome


I also interact a ton with C and C++, but it's easy today to have Claude write a Project Panama wrapper and then put a nice Clojure veneer on top of the Java.
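
A rough sketch of what the Java side of such a Panama wrapper might look like, assuming the java.lang.foreign API finalized in Java 22. NativeStrings and the choice of strlen are made-up examples, not anything from a real project:

  // Bind a C function via the FFM (Panama) API and expose it as a plain
  // static method that a Clojure veneer can call through ordinary interop.
  import java.lang.foreign.Arena;
  import java.lang.foreign.FunctionDescriptor;
  import java.lang.foreign.Linker;
  import java.lang.foreign.MemorySegment;
  import java.lang.foreign.ValueLayout;
  import java.lang.invoke.MethodHandle;

  class NativeStrings {
      private static final Linker LINKER = Linker.nativeLinker();
      // Bind C's strlen(const char *) -> size_t once, up front.
      private static final MethodHandle STRLEN = LINKER.downcallHandle(
              LINKER.defaultLookup().find("strlen").orElseThrow(),
              FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

      static long strlen(String s) {
          try (Arena arena = Arena.ofConfined()) {
              // Copy the Java string into native memory as a NUL-terminated C string.
              MemorySegment cString = arena.allocateFrom(s);
              return (long) STRLEN.invokeExact(cString);
          } catch (Throwable t) {
              throw new RuntimeException(t);
          }
      }
  }
From Clojure, the veneer is then just ordinary interop, e.g. (NativeStrings/strlen "hello"), wrapped in whatever idiomatic functions you want.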

> it just couldnt write the changes to file 9/10 times when using it with opencode

You might want to give this a try, it dramatically improves Edit tool accuracy without changing the model: https://blog.can.ac/2026/02/12/the-harness-problem/


Another limitation: only five active user accounts (with UIs) per machine.

> Models do not (broadly speaking) learn over time. They can be tuned by their operators, or periodically rebuilt with new inputs or feedback from users and experts. Models also do not remember things intrinsically: when a chatbot references something you said an hour ago, it is because the entire chat history is fed to the model at every turn. Longer-term “memory” is achieved by asking the chatbot to summarize a conversation, and dumping that shorter summary into the input of every run.
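
Concretely, the "summary dumping" described above is just an outer loop like this (a minimal sketch; complete() is a hypothetical stand-in for whatever chat-completion API is actually called, and the model itself stays stateless between requests):

  import java.util.ArrayList;
  import java.util.List;

  class SummaryMemoryChat {
      private String summary = "";                          // the long-term "memory"
      private final List<String> recentTurns = new ArrayList<>();

      String ask(String userMessage) {
          recentTurns.add("User: " + userMessage);
          // Every request re-sends the summary plus the recent turns.
          String prompt = "Summary of earlier conversation: " + summary + "\n"
                  + String.join("\n", recentTurns) + "\nAssistant:";
          String reply = complete(prompt);
          recentTurns.add("Assistant: " + reply);
          // Periodically fold old turns into the summary so the prompt stays short.
          if (recentTurns.size() > 20) {
              summary = complete("Briefly summarize this conversation:\n"
                      + summary + "\n" + String.join("\n", recentTurns));
              recentTurns.clear();
          }
          return reply;
      }

      private String complete(String prompt) {
          // Hypothetical: wire up a real model API client here.
          throw new UnsupportedOperationException("no model API configured");
      }
  }
Anything "remembered" beyond the context window comes from that outer loop, not from the model's weights.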

This is the part of the article that will age the fastest; it's already out of date in labs.


I'm struggling to see how that can even possibly be true, unless we're counting automation of the "dumping that shorter summary into the input of every run" thing.

I can imagine it being true with models so small that each user could afford to have their own, but not with the big shared models being used for all the major services. Is that what you mean?


> Is that what you mean?

I think the confusion is that, when I write "model", you read "LLM."

LLMs aren't the only kind of AI model, and they have the limitations Aphyr mentions, for the obvious reasons you're thinking of.

His mistake is thinking that's the only model that exhibits intelligence today, but it's not.


I see nothing to preclude a foundation model being augmented by a smaller model that serializes particulars about an individual's cumulative interaction with the model and then streams it into the execution thread of the foundation model.

Source?

In what way?

AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

Just like we have machines that can do "math", and they do so artificially.

Or "logic", and they do so artificially.

I assume we'll drop the "artificial" part in my lifetime, since there's nothing truly artificial about it (just like math and logic); it's really just mechanical.

No one cares that transistors can do math or logic, and it shouldn't bother people that transistors can predict next tokens either.


> AI is exactly the right term: the machines can do "intelligence", and they do so artificially.

AI in pop culture doesn't mean that at all. Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics. Now that LLMs have taken over the world, they can define AI as anything they want.


In 2018, i.e. “pre-LLM”, the label “AI” was already stamped on everything, so I highly doubt that most people thought their washing machines were sentient in any way. I remember this starkly, because my team at Ericsson (about 120k employees at the time) was responsible for one of the crucial steps in getting models into production, and basically every single project wanted that stamp.

The meaning has been slowly diluted, more and more, over the decades.


> Most people's impression of AI before the LLM craze came from some form of media based on Asimov's laws of robotics.

I'll let you in on a secret: "positronic brains" are just very fast parallel computers running LLMs.


> Just like we have machines that can do "math", and they do so artificially.

Nobody calls calculators "artificial mathematicians", though; we refer to them by a unique word that defines what they can and can't do in a far less fanciful and ambiguous way.


As I remember it, they were basically the same—but IOKit is C++ (with restrictions) because 3rd party developers didn't want to learn Objective-C.

But that's a hazy, 20-year-old memory.


Yes, you're right! I'm just a dolt who's never checked what a .kext on OS X actually is.

I had been under the impression that DriverKit drivers were quite a different beast, but they're really not. Here's the layout of an NS ".config" bundle:

  ./CG6FrameBuffer.config/English.lproj
  ./CG6FrameBuffer.config/English.lproj/Info.rtf
  ./CG6FrameBuffer.config/English.lproj/Localizable.strings
  ./CG6FrameBuffer.config/CG6FrameBuffer_reloc
  ./CG6FrameBuffer.config/Default.table
  ./CG6FrameBuffer.config/Display.modes
  ./CG6FrameBuffer.config/CG6FrameBuffer
The driver itself is a Mach-O MH_OBJECT image, flagged with MH_NOUNDEFS (except for the _reloc images, which are MH_PRELOAD; no clue how these two files relate/interact!).

Now, on OS X:

  ./AirPortAtheros40.kext/Contents
  ./AirPortAtheros40.kext/Contents/_CodeSignature
  ./AirPortAtheros40.kext/Contents/_CodeSignature/CodeResources
  ./AirPortAtheros40.kext/Contents/MacOS
  ./AirPortAtheros40.kext/Contents/MacOS/AirPortAtheros40
  ./AirPortAtheros40.kext/Contents/Info.plist
  ./AirPortAtheros40.kext/Contents/version.plist
OS X added a dedicated image type (MH_KEXT_BUNDLE), cleaned things up a bit, and standardized on plists instead of the "INI-esque" .table files, but yeah, basically the same.

You're focusing on the executable format, which is very much not the driver model.

From here:

https://news.ycombinator.com/item?id=10006411

"At some stage in the future we may be able to move IOKit over to a good programming language"


IOKit was almost done in Java; C++ was the engineering plan to stop that from happening.

Remember: there was a short window of time where everyone thought Java was the future and Java support was featured heavily in some of the early OS X announcements.

Also, DriverKit's Objective-C model was not the same as userspace's. As I recall, the compiler resolved all message sends at compile time. It was much less dynamic.


Mostly because they thought Objective-C wasn't going to land well with the Object Pascal / C++ communities, given those were the languages on Mac OS previously.

Note that Android Things did indeed use Java for writing drivers; and on Android, since Project Treble and the new userspace driver model in Android 8, drivers are a mix of C++, Rust, and some Java, all talking to the kernel via Android IPC.


There was also the Java-like syntax for ObjC but I don’t think that ever shipped.

> there was a short window of time where everyone thought Java was the future

Makes me think of how plists in macOS are XML, because back then XML was the future.


Yes, and for the same reason Java was originally introduced: Apple was afraid that the developer community, educated in Object Pascal / C++, wasn't keen on learning Objective-C.

When those fears proved unfounded and devs actually welcomed Objective-C, that's when they dropped Java and the whole Java/Objective-C runtime interop.


> Isn't it more-or-less the same as the Python REPL?

Not even close.


Also discussed in detail in Crafting Interpreters.[0]

[0] https://craftinginterpreters.com/

