dathinab's comments | Hacker News

not just fiber: e.g. Netflix requires "only" a reliable ~15 Mb/s for a 4K stream, which means most people in most countries feel little difference between ~25 Mb/s and 1 Gb/s in their everyday usage. Sure, it's a huge difference if you download an 80 GiB AAA game or preload a 4K movie. But in my experience (which definitely doesn't apply to all countries) a lot of non-tech-affine people don't do that very often, and when they do (e.g. movies before travel) they tend to do it overnight, so it still works out just fine on not-so-fast internet.
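To put rough numbers on that (back-of-the-envelope, assuming ideal sustained throughput, which real connections rarely deliver):

```python
GIB_BITS = 8 * 2**30        # bits in one GiB
game_bits = 80 * GIB_BITS   # the 80 GiB AAA game from above

for mbps in (25, 250, 1000):
    seconds = game_bits / (mbps * 1e6)
    print(f"{mbps:>5} Mb/s -> {seconds / 3600:5.1f} h")
```

At 25 Mb/s the 80 GiB download is an overnight job (roughly 7.6 hours), while at 1 Gb/s it's closer to 11 minutes: a real difference, but one most people hit only occasionally.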

So for a lot of people, paying for more than 25-50 Mb/s (per person) only makes sense if it isn't too costly. Hence I rarely see people going for more than 250-500 Mb/s even if 1 Gb/s is available and they have the money. And for non-gamers with little money, I mostly see them on ~50 Mb/s (or paying for 50 Mb/s but getting much less due to old wires :( ).

(Also, IMHO, more important than 1 Gb/s is how much of the purchased bandwidth is reliably available at all times _with good latency_...)


and people on very limited bandwidth and/or speed don't watch Netflix (or do so at most at 1080p), and if they do watch Netflix they are fine with clogging up their internet: it isn't some random background download hindering what they want to do, it's the thing they are actively doing

> Agree on title. Too dramatic.

not just too dramatic

given that all the things they list are

non-essential optimizations,

and some fall under "micro-optimizations I wouldn't be sure Rust even wants",

and given how far the current async is from its old MVP state,

it's more like outright dishonest than overly dramatic

it's the kind of clickbait that says the author cares neither about respecting the reader nor about honest communication, which for someone wanting to do open source contributions is kinda ... not so clever

though in general I agree Rust should have more HIR/MIR optimizations, at least in release mode. E.g. it's very common that an async function is not pub and is directly awaited in all places (or can otherwise be proven to only be called once); in that case neither `Returned` nor `Panicked` is needed, as the future can't be polled again after either. Similarly, `Unresumed` isn't needed either, as you can directly run the code up to the first await (and with such a transform their points about "inlining" and "async fns without await still having a state machine" would also "just go away"TM, at least in some places). Similarly, the whole `.map_or(a, b)` family of functions is IMHO an anti-pattern: it introduces more functions with unclear argument ordering, removes the signaling `unwrap_`, and has no benefit beyond minimally shortening a `.map(b).unwrap_or(a)` plus some micro-optimization, which is ... not productive in an already complicated language. Guaranteed optimizations for the kind of pattern `.map(b).unwrap_or(a)` desugars to would be much better.


most unsafe-language-to-Rust transpilations produce not just pretty terrible Rust code but also use unsafe everywhere

which is necessary, as making things safe often requires refactoring that isn't localized to a single function/code block, and doing that while transpiling isn't the best idea. In general I would recommend a non-LLM-based transpilation (if possible), then use an LLM to do bit-by-bit, as-localized-as-possible, bottom-up refactoring to get rid of unsafe code, potentially at some runtime performance cost, followed by another top-down refactoring to make things nice and fast. And human supervision to spot parts where paradigms clash so hard that you have to make some larger changes already during the bottom-up step.

anyway, that means segfaults would likely stay segfaults in the initial transpiled version


Zig can do some things wrt. compile-time compute that sit somewhere between Rust const expressions and proc-macro usage. This isn't something Rust (or most languages) has. So even if we are generous and interpret line-by-line as expression-by-expression, this isn't fully doable.

but also, telling an LLM to do a line-by-line translation and giving it a file _is guaranteed to never truly be a line-by-line translation_ due to how LLMs work. That's fine: you don't tell it line-by-line to actually make it work line by line, but to "convince" it not to do any of the opposite things (like moving code around wholesale, or completely rewriting components based on it "guessing" what they are supposed to do, etc.). In other words, it makes the result more likely to be behavior-compatible (incl. logic bugs) even though it doesn't literally go line by line. And that then allows you to fuzz the behavior for discrepancies in the initial step, before doing any larger refactoring which may include bug fixes.

Though tbh I would prefer if any Zig -> terrible-Rust step were done with a deterministic, reproducible, debuggable program instead of an LLM. The LLM can then be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there seems like a horror story wrt. subtle hallucinations and similar.
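The fuzzing step mentioned above can be sketched roughly like this (a toy illustration, not anyone's actual tooling; `original` and `transpiled` are hypothetical stand-ins for invoking the two implementations):

```python
import random

# Hypothetical stand-ins: in practice these would run the original
# binary and the transpiled one on the same input.
def original(x: int) -> int:
    return (x * 31 + 7) % 256

def transpiled(x: int) -> int:
    return (x * 31 + 7) % 256  # behavior-compatible by construction here

def differential_fuzz(f, g, runs=10_000, seed=42):
    """Return the inputs on which f and g disagree."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(runs):
        x = rng.randrange(-2**31, 2**31)
        if f(x) != g(x):
            mismatches.append(x)
    return mismatches

print(differential_fuzz(original, transpiled))  # → [] when behavior matches
```

Any non-empty result points at an input where the transpilation diverged, which is exactly what you want to catch before refactoring starts changing behavior on purpose.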


> So ciphers have to almost perfectly mix information.

yesn't

most modern stream ciphers basically use XOR for encryption with one-time-use keys per chunk (e.g. AES-CTR, AES-GCM, AEGIS, ChaCha20, etc.)

no mixing of bits is needed there, just high-entropy, uniformly distributed one-time-use keys generated per block, i.e. you need a "good enough" PRNG

practically, the easiest way to get them is by doing something similar to a hash over the state (key, nonce, index) in some form, which is likely done by mixing up information; hence the yes in yesn't.
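As a rough sketch of that XOR-keystream shape (toy code, not a secure cipher; SHA-256 stands in for whatever keystream function a real cipher uses over (key, nonce, index)):

```python
import hashlib

def keystream_block(key: bytes, nonce: bytes, index: int) -> bytes:
    # One-time-use 32-byte block, derived hash-like from (key, nonce, index).
    return hashlib.sha256(key + nonce + index.to_bytes(8, "little")).digest()

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 32):
        chunk = data[i : i + 32]
        ks = keystream_block(key, nonce, i // 32)
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

key, nonce = b"k" * 32, b"n" * 12
ct = xor_stream(key, nonce, b"attack at dawn")
# Because encryption is plain XOR, applying the same operation decrypts:
assert xor_stream(key, nonce, ct) == b"attack at dawn"
```

Note that no bit of the plaintext is "mixed" anywhere; all the mixing happens inside the keystream derivation.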

but any PRNG with sufficient properties would do, and there are probably some that use clever math you wouldn't describe as "mixing information"

It's just that "shuffling bits" + "bad one-way function" is often "sufficiently" secure and faster than the alternatives.

And historically, many ciphers (e.g. the AES block cipher) come from a time when we didn't yet have great frameworks/know-how for assessing security properties and writing cryptography. Hence they did all kinds of mixing of information and chaining, which sometimes is quite ... arbitrary.

It might be easy to assume AES stuck around because it's "just great", but that is plain wrong. It stuck around because it spread everywhere (including standards/requirements) before we knew how best to do things, and due to that then ended up with hardware acceleration support on most chips. But no one would create it that way anymore (it is prone to side-channel attacks unless you have HW acceleration or use bit-slicing trickery, which makes it slow). Yet because nearly everything has AES hardware acceleration, it became a very fast building block. Hence most modern ciphers still use (part of) it, and even some hashes and other algorithms use it... It's another example of how a "good enough" and widespread technology often wins, not the best one.


Mmm. It's true that stream ciphers do not need to mix information (of the plaintext) and block ciphers do. I'm not sure I fully agree with your comment, but I'm also not quite sure what you intend to say, and it's late at night here. I'd suggest that anyone reading the above make sure they fully understand the different security properties of stream ciphers vs block ciphers before dismissing the latter.

Another big reason is that we use familiar terms and models to describe things, often leading to very different things being described in much less different ways. Also, the simplified models often used for high-level/very abstract explanations have the tendency that extrapolations based on them end up wrong in a subtle but fundamental way (* 1).

E.g. graphs, matrices, linear-algebra systems, digital photos (and much more) can all be losslessly transformed into each other; or put another way, these are all different ways to look at the same data. (This is also not just hypothetical, * 2.)
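A tiny illustration of that equivalence (edge list vs adjacency matrix, the same data in two shapes):

```python
def edges_to_matrix(edges, n):
    # Matrix view of a graph: m[i][j] == 1 iff edge (i, j) exists.
    m = [[0] * n for _ in range(n)]
    for i, j in edges:
        m[i][j] = 1
    return m

def matrix_to_edges(m):
    # And back again: nothing is lost in either direction.
    return sorted((i, j)
                  for i, row in enumerate(m)
                  for j, v in enumerate(row) if v)

edges = [(0, 1), (1, 2), (2, 0)]
assert matrix_to_edges(edges_to_matrix(edges, 3)) == edges
```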

As a side effect, saying things are similar "because they are just matrix algorithms" is meaningless, because most things are "just a matrix algorithm". (It's also meaningful for exactly that reason, as it means you can transform many problems into ones with well-understood solutions.)

And the high-level abstraction of the "encoder -> state -> decoder" structure is another of those "too generic/meaningless" things. As state can be anything, encoding is just "process input to generate state", and decoding is just "generate output from state", we can model most algorithms that way. E.g. the identity function becomes `encode(input): state = input; decode(state): output = state` (and indeed, as long as the state is large enough wrt. the input (or unbounded), you can train an encoder-decoder network to do exactly that, as meaningless as it seems).

Similarly, within cryptography itself you can treat everything as everything else: hash, PRNG, and stream cipher can each be easily (kinda, * 3) built from one another, and like most algorithms in existence they can be formulated as "consume data, then produce output", i.e. as an encoder-decoder pattern ;)

So IMHO it's mostly a combination of observer bias from how we like to model things (at a very high-level POV) and a construction bias from that plus what computers compute well (at a slightly less high-level POV, when looking at somewhat "arbitrary" choices).

I know some of the examples here might sound a bit ridiculous, but this is one of the most important insights in CS:

- a lot of algorithms can be transformed into (or modeled as) a lot of other algorithms; take advantage of it

- just because two things look alike, it neither means they are alike nor that being alike has any deeper meaning (e.g. two graphs might look alike, but only for the subset of sample data you happened to use to plot them)

---

(* 1): Anything related to quantum physics seems to be especially badly affected by this.

(* 2): your navigation system might navigate by mapping a navigation graph to a matrix and then computing on the matrix, where the steps involved might treat it as a linear-algebra system solved using some fast approximation.
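One concrete (toy) version of that graph-as-matrix idea: repeated "multiplication" of a weight matrix in the (min, +) semiring converges to all-pairs shortest path distances.

```python
INF = float("inf")

def min_plus(a, b):
    # Matrix "multiplication" in the (min, +) semiring:
    # entry (i, j) = cheapest way to get from i to j via some midpoint k.
    n = len(a)
    return [[min(a[i][k] + b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge-weight matrix of a small road graph (INF = no direct road).
d = [[0,   4,   INF],
     [INF, 0,   1],
     [2,   INF, 0]]

for _ in range(len(d) - 1):  # repeated squaring converges for n nodes
    d = min_plus(d, d)

print(d[0][2])  # → 5 (the path 0 -> 1 -> 2, cost 4 + 1)
```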

(* 3): At least some of the conversion directions are easy; some are a bit less intuitive. Also, this assumes you either have perfect properties on the building block used, or don't have to estimate how its imperfect properties map onto properties of the resulting construct... And naturally you can use some simple operations in the transformation (add, xor) as long as they are used in a straightforward way. E.g. PRNG(seed, offset) = HASH(encode(seed, offset)), with encode being a bijective pairing function (e.g. `bytes_128bit_le(seed) + bytes_64bit_le(offset)`). For encryption you chunk the data, xor every chunk with a one-time-use key, and generate that key as HASH(encode(key, nonce, chunk_idx)), or PRNG(encode(key, nonce), chunk_idx). That is roughly how AES-CTR/AES-GCM does encryption: XOR(AES(key, encode(nonce, offset)), chunk). (Yes, AES is used more like a hash than a block cipher in most modern AES-based ciphers...)
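The footnote's hash-to-PRNG construction can be sketched like this (toy code; SHA-256 stands in for HASH, and the fixed-width little-endian concatenation is the pairing function the footnote describes):

```python
import hashlib

def encode(seed: int, offset: int) -> bytes:
    # Fixed-width little-endian concatenation: distinct (seed, offset)
    # pairs can never collide, which is the pairing-function property.
    return seed.to_bytes(16, "little") + offset.to_bytes(8, "little")

def prng(seed: int, offset: int) -> bytes:
    # PRNG(seed, offset) = HASH(encode(seed, offset))
    return hashlib.sha256(encode(seed, offset)).digest()

# Deterministic: the same (seed, offset) yields the same 32-byte block...
assert prng(7, 0) == prng(7, 0)
# ...while different offsets yield independent-looking blocks.
assert prng(7, 0) != prng(7, 1)
```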


it isn't

but reading the article helps

her reason wasn't some "my tech shouldn't be used for bad things" moral high ground, but that she felt she couldn't do her work at her previous job anymore, and the next job happened to be in Singapore. The reasoning, in order, was:

- reduced funding / many projects getting side lined

- US moving away from decarbonization

- immigration policies discriminating against Chinese-born people (even if they have left Chinese citizenship behind / are US citizens)

- and her not wanting to be put in a position where she is pressured to work directly on batteries for weapon systems like drones (!= general-use systems being used in a military context)

so she chose Singapore because someone in Singapore presented her with a good job offer where she doesn't have to worry about these things

i.e. this isn't about the US being "evil" and Singapore being better, but about the US no longer being as good a place for a civilian-use battery production scientist


nor will anyone else. The point isn't about whether general-use batteries will be used by the military, or whether someone who licenses the technology builds military-focused versions.

The point is that she herself didn't want to _explicitly_ design components for specific forms of military usage,

and she no longer feels safe from being pressured by the US to do so.

But that isn't even the main reason for moving mentioned in the article; the reasons (in the order they appear) are: reduced funding, the US moving away from electrification, immigration policies, and then the previous point.

-----

as a side note, your comment sounded a bit like you think Singapore is China. In the unlikely case you did, maybe mixing it up with Hong Kong: it isn't China in any form, and never was


This could be explained by the 250-300M you refer to not matching the same distribution, because

1. this seems to be Google-ad-network-specific, not Google services per se

2. the analysis seems to only include users who in general generate ad revenue, e.g. users with ad-blockers everywhere are not included in the distribution

3. given the lower bound, I assume ad views with no clearly attributable user, and/or users with a very low and irregular number of views, are not included (e.g. some mostly "offline" people, or people mostly using an ad-blocker but still occasionally seeing an ad; also, it's Google Ads, so anyone using only FB, TikTok, etc. would not show up, I think)


The Google ad network's revenue is 10% of their first party ad revenue. It would be even harder to make the numbers work that way.
