Hacker News | zem's comments

I have been programming for most of my life, and am very engaged in the art and craft of it, but I have a very hard time answering superlative questions like "what is the weirdest bug you fixed" or "what project are you proudest of". mostly I enjoy projects while I'm doing them, but don't have the kind of memory that lets me compare old and new work and see which one I rated higher by whatever metric. also bugs in particular tend not to stick in my mind - I can ramble at length about fun architectural decisions or ad hoc DSLs, but bugs I mostly fix and move on; even if they were super interesting to debug at the time I tend not to remember them later on.

Would they be easier to answer without the superlative? E.g., "What's a project you're proud of?"

Thanks for the feedback!


yes, definitely! the "-est" triggers decision paralysis but it's easy to think of several projects I'm proud of.

Thank you!

if spinel gets to where it can compile 100% of mruby there could be some nice synergies there.

this is one reason i will not pay for extra usage - it is an incentive for them to be inefficient, or at least to not spend any effort on improving my token usage efficiency.

mmap(2) and all its underlying machinery are open source and well documented besides.

There are open-source and even open-weight models that operate in exactly this way (as it's based on years of public research), and even if there weren't, the way that LLMs generate responses to inputs is superbly documented.

Seems like every month someone writes up a brilliant article on how to build an LLM from scratch or similar that hits the HN page, usually with fancy animated blocks and everything.

It's not at all hard to find documentation on this topic. It could be made more prominent in the UI, but that's true of lots of things; hammering on "AI 101" topics would clutter the UI with material you should be able to assume users already know (like how LLMs eat up tokens in the first place), crowding out the actual decision points they might want to act on.


ugh, caching based on idle time is horrible for my usage anyway; since claude is both fairly slow and doesn't really have much of a daily quota, I often tell it to do something and then wander off and come back to check on it when I next think about it. I always vaguely assumed that my session would not "detect" the intervening time since it was all async. I guess from a global perspective time-based cache eviction makes sense.
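For anyone unfamiliar with the mechanism being complained about: here's a minimal toy sketch of idle-time (TTL) cache eviction in general. The class, keys, and timings are purely illustrative and have nothing to do with Anthropic's actual implementation; the point is just that a cache entry's clock resets on every access, so a long idle gap means the cached prefix is gone and must be paid for again.

```python
import time

class TTLCache:
    """Toy cache that evicts entries not accessed within `ttl` seconds."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}  # key -> (value, last_access_time)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, last_access = entry
        if time.monotonic() - last_access > self.ttl:
            # idle too long: evict; the caller must recompute at full cost
            del self._store[key]
            return None
        # a hit refreshes the idle timer, keeping the entry alive
        self._store[key] = (value, time.monotonic())
        return value
```

So a session that keeps talking stays cheap (each turn refreshes the timer), while "fire a task and wander off" crosses the TTL and triggers a full re-read of the context.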

personally yes, i've considered them the bad guys from day one. they have always publicly portrayed themselves as enabling mass surveillance so i'm not even sure why this sudden crisis of conscience, unless the trump administration has finally made it clear to even the thickest-headed of them that mass surveillance is not a good thing.

I'm pretty sure a lot of commercial egg farming involves keeping the hens in bad conditions

Out at a restaurant you can't know what eggs they use, but at home you can buy eggs from sources you trust that don't keep hens in bad conditions.

I've noticed that there are some people who feel that their claude instance could be working on churning out multiple apps, and therefore if it isn't they are in some sense falling behind. it's the illusion of productivity raised to the level of a minor addiction.

good reminder to me to stay in practice with manual coding for my side projects! claude is super convenient for them for now but if it goes it goes, I definitely don't want to get dependent on it. maybe the local models will improve in a year too.

"like ruby but you can ship a single native binary" is already huge.


