Hacker News | secondcoming's comments

"Claude, don't create any technical debt please"

I've been told that it's totally fine, because once the codebase turns into spaghetti you can simply tell the agent to refactor it and then everything will be ok

I know this is a tongue-in-cheek response, but this brings me great pain. The spaghetti begins quickly, and your unit/functional tests won't help you unless you hammered out your module API seams before you even began. Oh, your abstractions are leaking? Your modules know too much about each other? Multiply the spaghetti!

The multiple layers of vibe coding make the dozens of codebases even harder to maintain.

all your GPUs are belong to us

Is this still true? Newer versions of protobuf can generate accessors returning `std::string_view` rather than `const std::string&` (which forces a copy) for `string` and repeated `bytes` fields.

https://protobuf.dev/reference/cpp/string-view/
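
Roughly, the opt-in looks like this (a sketch only: `Payload`/`body` are hypothetical names, and the exact feature spelling may differ between protobuf releases):

    // Given a field declared (in Editions syntax) roughly as:
    //   bytes body = 1 [features.(pb.cpp).string_type = VIEW];
    // the generated accessor hands back a view instead of const std::string&.
    Payload p;                            // hypothetical generated message type
    p.set_body("some large blob");
    absl::string_view body = p.body();    // no std::string handed out at the call site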


It allows avoiding allocations, but it doesn't allow using the serialised data as backing memory for an in-language type. Protobuf varints have to be decoded and written out somewhere. They can't be lazily decoded efficiently either: the order of fields in the serialised message is unspecified, so you either need to iterate over the message again and again to find a field on demand, or build a map of offsets, which negates any wins zero-copy strives to achieve.
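
To make the first point concrete, here's a rough sketch of base-128 varint decoding (the wire encoding protobuf uses for most integers). The value only exists after the loop has run, so it can't simply alias the input buffer the way a string view can:

    #include <cstddef>
    #include <cstdint>

    // Each byte contributes 7 bits of the value; the high bit marks "more bytes follow".
    bool DecodeVarint(const uint8_t* data, size_t size,
                      uint64_t* value, size_t* bytes_read) {
      uint64_t result = 0;
      for (size_t i = 0; i < size && i < 10; ++i) {
        result |= static_cast<uint64_t>(data[i] & 0x7F) << (7 * i);
        if ((data[i] & 0x80) == 0) {  // high bit clear: last byte of this varint
          *value = result;
          *bytes_read = i + 1;
          return true;
        }
      }
      return false;  // truncated or overlong input
    }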

This is true, but the relative overhead depends heavily on the structure of one's schema. For example, fixed integer fields don't need to be decoded (including repeated fixed ints), and the main idea of "zero copy" here is avoiding copies of string and bytes fields. If your protobufs are mostly varints then yes, they all have to be decoded; if they contain a lot of string/bytes data then most of the decoding overhead may be memory copies of that data rather than varint decoding.

In some message schemas, even though this isn't truly zero copy, it may be close in terms of actual overhead and CPU time; in other schemas it doesn't help at all.
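
As a rough illustration of the difference (hypothetical helpers, little-endian host assumed for the fixed case): a fixed32 field already is its value on the wire, while a bytes field is where aliasing instead of copying pays off:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <string_view>

    // fixed32 on the wire is just 4 little-endian bytes; a plain load suffices.
    uint32_t ReadFixed32(const uint8_t* p) {
      uint32_t v;
      std::memcpy(&v, p, sizeof(v));
      return v;
    }

    // A zero-copy decoder can return a view into the input buffer for a
    // bytes/string field instead of copying `len` bytes into an owned std::string.
    std::string_view ViewBytesField(const uint8_t* p, size_t len) {
      return std::string_view(reinterpret_cast<const char*>(p), len);
    }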


The win could be only decoding the fields you actually care about, rather than all fields.

It's the same for any other high performance decoding of TLV formats (FIX in finance for instance).
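
A rough sketch of that idea against protobuf-style tag/length framing (a hypothetical helper, not any real library's API; groups and some error handling are glossed over): scan once, skip fields you don't want, and only materialise the one you do:

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <string_view>

    static bool ReadVarint(const uint8_t*& p, const uint8_t* end, uint64_t* out) {
      uint64_t v = 0;
      for (int shift = 0; p < end && shift < 64; shift += 7) {
        uint8_t b = *p++;
        v |= static_cast<uint64_t>(b & 0x7F) << shift;
        if (!(b & 0x80)) { *out = v; return true; }
      }
      return false;
    }

    // Return a view of the first occurrence of length-delimited field `wanted`,
    // skipping every other field without decoding it any further.
    std::optional<std::string_view> FindField(const uint8_t* p, const uint8_t* end,
                                              uint32_t wanted) {
      while (p < end) {
        uint64_t tag, skipped;
        if (!ReadVarint(p, end, &tag)) return std::nullopt;
        uint32_t field = static_cast<uint32_t>(tag >> 3);
        switch (tag & 7) {  // wire type
          case 0:  // varint: reading it is the skip
            if (!ReadVarint(p, end, &skipped)) return std::nullopt;
            break;
          case 1:  // fixed64
            if (end - p < 8) return std::nullopt;
            p += 8;
            break;
          case 5:  // fixed32
            if (end - p < 4) return std::nullopt;
            p += 4;
            break;
          case 2: {  // length-delimited (string/bytes/submessage)
            uint64_t len;
            if (!ReadVarint(p, end, &len) || len > static_cast<uint64_t>(end - p))
              return std::nullopt;
            if (field == wanted)
              return std::string_view(reinterpret_cast<const char*>(p),
                                      static_cast<size_t>(len));
            p += len;
            break;
          }
          default:
            return std::nullopt;  // groups / unknown wire types not handled here
        }
      }
      return std::nullopt;
    }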


Those field accessors take and return string_view, but they still copy. The official C++ library always owns the data internally and never aliases, except in one niche use case: the field type is Cord, the input is large and meets some other criteria, and the caller used kParseWithAliasing, which is undocumented.

To a very close approximation you can say that the official protobuf C++ library always copies and owns strings.


Well that is very disappointing news.

So even the decoder makes a copy, despite returning a string_view? What's the point, then?

I can understand encoders having to make copies, but not in a decoder.


Google really dropped the ball with protobuf by taking so long to make it zero-copy. There are third-party implementations popping up now, and a real risk of future wire-level incompatibilities across languages.

"zero copy" in this context just means that the contents of the input buffer are aliased to string fields in the decoded representation. This is a language-level feature and has nothing to do with the wire format.

If a user of S3 knows that directories aren't real why would they expect directory-related normalisation to happen?

Precisely because of that. On Linux, /bin/bash, //bin/bash and /bin//bash are exactly the same file, the same inode. They look somewhat off to people, but they're entirely harmless, so cleaning them up is an aesthetic choice, not something important.

On S3 they're different. Using the wrong paths causes weird issues, like not finding things you expect to find, or storing multiple versions of the same data out of sync.

Normalizing // to / means making S3 behave more like people expect.
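
For example, a client-side helper along these lines (purely hypothetical; S3 itself does no such normalisation) would make "a//b" and "a/b" refer to the same object key:

    #include <string>

    // Collapse runs of '/' in an object key so that "a//b" and "a/b" name the
    // same object, mimicking how POSIX paths behave.
    std::string NormalizeKey(const std::string& key) {
      std::string out;
      out.reserve(key.size());
      for (char c : key) {
        if (c == '/' && !out.empty() && out.back() == '/')
          continue;  // drop the duplicate separator
        out.push_back(c);
      }
      return out;
    }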


What even is a 'clobber build'?


Sorry for being unclear. I'm using Firefox build system lingo without explanation. It comes from the command `./mach clobber`, which is similar to, but not the same as, `make clean`. I use 'clobber build' to mean a build with no existing build state, and the qualifiers "cold" and "warm" to indicate whether the cache is empty or filled.


Ah ok, thanks


How long would it take for the heat to be depleted? Humans have only managed to drill something like 12km into the earth because it gets too hot to go further.


If it were possible to access all of the Earth's stored geothermal energy, probably a very, very, very long time.

But if we're open to applying a quantitative timescale threshold to the thought experiment, beyond which we can argue geothermal is renewable, that raises the same question for nuclear. If we could access all the fissile uranium and thorium on Earth, how long would it take to deplete its stored energy? Does that make nuclear energy renewable?


I find that sometimes changing the font in my IDE can give me an inexplicable boost


> Imagine being able to work on a branch stacked on a coworker's branch while you're both constantly modifying them

I think that's something I don't want to imagine


The advantage of VMs is that you can nuke them and be done with it if you need to.

I use my personal laptop for $WORK and everything work related is done via the VM.


I can do that too by just rm -rf'ing the agent's home directory.

