I've been told that it's totally fine, because once the codebase turns into spaghetti you can simply tell the agent to refactor it, and then everything will be OK.
I know this is a tongue-in-cheek response, but this brings me great pain. The spaghetti begins quickly, and your unit/functional tests won't help you unless you hammered out your module API seams before you even began. Oh, your abstractions are leaking? Your modules know too much about each other? Multiply the spaghetti!
Is this still true? New versions of protobuf allow codegen of `std::string_view` accessors for `string` and `bytes` fields rather than `const std::string&` (which implies the library owns a copied `std::string`).
It allows avoiding allocations, but it doesn't allow using the serialised data as backing memory for an in-language type. Protobuf varints have to be decoded and written out somewhere. Nor can they be lazily decoded efficiently: the order of fields in a serialised message is unspecified, so a decoder either needs to scan the message over and over to find a field on demand, or build a map of offsets, which negates any wins zero-copy strives to achieve.
This is true, but the relative overhead depends heavily on the structure of one's schema. For example, fixed-width integer fields don't need to be decoded (including repeated fixed ints), and the main idea of "zero copy" here is avoiding copies of string and bytes fields. If your protobufs are mostly varints, then yes, they all have to be decoded; if they contain a lot of string/bytes data, then most of the decode overhead could be memory copies of that data rather than varint decoding.
In some message schemas, even though this isn't truly zero copy, it may be close to it in terms of actual overhead and CPU time; in other schemas it doesn't help at all.
Those field accessors take and return string_view, but they still copy. The official C++ library always owns the data internally and never aliases, except in one niche case: the field type is Cord, the input is large and meets some other criteria, and the caller used kParseWithAliasing, which is undocumented.
To a very close approximation you can say that the official protobuf C++ library always copies and owns strings.
Google really dropped the ball with protobuf by taking so long to make it zero-copy. Third-party implementations are popping up now, and there's a real risk of future wire-level incompatibilities across languages.
"zero copy" in this context just means that the contents of the input buffer are aliased to string fields in the decoded representation. This is a language-level feature and has nothing to do with the wire format.
Precisely because of it. On Linux, /bin/bash, //bin/bash and /bin//bash are the exact same file, the same inode. They look somewhat off to people, but they're entirely harmless, so cleaning that up is an aesthetic choice, not something important.
On S3 they're different. Using the wrong path causes weird issues, like not finding things you expect to find, or storing multiple out-of-sync versions of the same data.
Normalizing // to / means making S3 behave more like people expect.
Sorry for being unclear; I'm using Firefox build-system lingo without explanation. It comes from the command `./mach clobber`, which is similar to but not the same as `make clean`. I use 'clobber build' to mean a build with no existing build state, and the qualifiers "cold" and "warm" to indicate whether the cache is empty or filled.
How long would it take for the heat to be depleted? Humans have only managed to drill something like 12km into the earth because it gets too hot to go further.
If it were possible to access all of the Earth's stored geothermal energy, probably a very, very, very long time.
But if we're open to applying a quantitative timescale threshold to the thought experiment, beyond which we can call geothermal renewable, that raises the same question for nuclear. If we could access all of the uranium and thorium on Earth, how long would it take to deplete their stored energy? Would that make nuclear energy renewable?