Hacker News | chowells's comments

"Progress is at fault" only is a criticism of the criticism if it's actually progress. What if actually being forced to slow down and think about a thing is what actually makes you learn it, and anything designed to optimize a process removes its educational value? If that were true, would "EdTech" still be progress?

I imagine it would depend case by case on exactly what parts were optimized and how. Presumably it's possible to optimize for more time spent thinking as well as for thinking about more useful things.

I think I agree with what you're getting at, though I usually phrase it differently: indirection is not abstraction. A good abstraction makes it easier to understand what the code is doing by letting you focus on the important details and ignore the noise. It does this by giving you tools that match your problem space, whatever it may be. This will necessarily involve some amount of indirection when you switch semantic levels, but that's very different from constantly being told "look over there" when you're trying to figure out what the code is saying.


Agree, and I would add that a bad abstraction, the wrong abstraction for the problem, and/or an abstraction misused is far worse than no abstraction. That was bugging me in another thread earlier today: <https://news.ycombinator.com/item?id=47350533>


Is it even possible for money to not be power? Like, how do you separate purchasing power from influence power? Purchasing is a very easy route to influence.


It is *a* form of power, but it should not convert so easily into political power.


I want KYC. I want AML. I want reversible transactions. I also want all of those things to be well regulated by a responsive and reasonable regulatory body.

They may have cases where they break down, but their net social impact is positive.


We're talking about LinkedIn, not banking. KYC and AML with respect to banks is a privacy tradeoff that is required by law, after public debate from legally elected representatives. With LinkedIn, it's none of that.


As someone with no firsthand knowledge at all, I am inclined to believe your position is correct. But I also think the Economist is making an important point: Russia's continued prosecution of this war will shred their internal economy with consequences lasting for decades or centuries. What people often underestimate is just how much damage an economy can suffer before breaking down entirely.

But Putin doesn't care about that, so the war will continue until something changes militarily.


The EU is rich enough to support Ukraine for a very long time. During that time it is likely that Ukraine will develop better and better weapons. This requires the Russian army to improve as well.

It's not clear how the Russian army will improve when the economy declines.


The EU is rich enough, but will it stay "willing enough"? Unfortunately, many EU parties that are gaining popularity are also against spending money on Ukraine.


The EU may be “rich enough” to support Ukraine forever, but that doesn't mean it will or should.


The EU, or rather NATO, has the problem of what Russia will do once it is no longer at war with Ukraine. There is also the question of what the US would do if Russia attacked a NATO country.

So European NATO countries basically need to keep supporting Ukraine while they try to become militarily independent of the US.


The EU just needs to support Ukraine until Russia has dug itself into a hole that will take generations to recover from. There might be a point where the war hasn't ended but Russia is no longer seen as a threat by the EU.


One issue is that Europe got caught with its pants down. It's likely that Europe will keep improving its defense long after it is no longer necessary from an economic point of view. Supporting Ukraine in destroying whatever Russia manages to produce is a sound strategy in this context.

If Russia really becomes weaker and the war winds down a bit, then supporting Ukraine is likely to become cheaper as well. But as long as Russia manages to send tons of drones and missiles to Ukraine, Europe should be worried. So Ukraine will remain a testing ground for air defense for a while.

There is also the issue with the Baltic countries and to some extent Finland. Those countries are terrified that Russia will do something stupid.


The EU as a whole is rich enough; the problem is that it's the elites that are rich, not the ordinary citizens. However, the burden of support (via taxes and welfare cuts) will be placed on ordinary citizens. Hence the need to fan the war rhetoric. Still, there is no real support for forever war among the EU citizenry.

Even if there is enough support for economic/material support of Ukraine, sending your men to die on the eastern front is an altogether different matter. Even Poland is not willing to do that. I mean, 'I am afraid of dying in a war, so I had better go die in a war to prevent that.'


Basically all of the use cases in the article don't make sense with AES. That's not because it's AES. That's because its blocks are significantly larger than the data you want to protect. That's the point the article was making: in very specific circumstances, there is practical value in having the cipher output be small.


In that case just use CTR mode, no?


In the context of encrypting 32 or 64 bit IDs, where there is no nonce, that'd be equivalent to XOR encryption and much weaker than TFA's small block ciphers.
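To make the failure mode concrete, here's a toy Python sketch (SHA-256 is used only as a stand-in PRF, since the stdlib has no AES; the key name is made up): with no nonce, the CTR keystream block is the same for every ID, so the scheme degenerates to XOR by a constant and leaks the XOR of any two plaintext IDs.

```python
import hashlib

KEY = b"example-key"  # hypothetical secret; SHA-256 below is only a stand-in PRF for AES

def ctr_keystream_block(counter: int) -> bytes:
    # With no nonce, the counter block is the same for every 32-bit ID,
    # so this keystream is a constant.
    return hashlib.sha256(KEY + counter.to_bytes(16, "big")).digest()[:4]

def encrypt_id(plain_id: int) -> int:
    ks = ctr_keystream_block(0)  # fixed counter block: constant keystream
    return plain_id ^ int.from_bytes(ks, "big")

a, b = 12345, 67890
ca, cb = encrypt_id(a), encrypt_id(b)
# The XOR of two ciphertexts equals the XOR of the two plaintexts:
# the scheme is just XOR with a constant.
print(ca ^ cb == a ^ b)
```

Note that `encrypt_id` is its own inverse, exactly as you'd expect from XOR with a constant.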


If you really want to encrypt and decrypt 32-bit numbers without having any nonces available, the fastest way on non-microcontroller CPUs remains using the AES instructions.

You can exploit the fact that the core of AES consists of 32-bit invertible mixing functions. In order to extend AES to 128-bit, a byte permutation is used, which mixes the bytes of the 32-bit words.

The AES instructions are such that you can cancel the byte permutation. In this case, you can use the AES instructions to encrypt four 32-bit words separately, instead of one 128-bit block.

Similarly by canceling the standard byte permutation and replacing it with separate permutations on the 2 halves, you can make the AES instructions independently encrypt two 64-bit words.

These AES modifications remain faster than any software cipher.

How to cancel the internal permutation and replace it with external shuffle instructions was already described in the Intel white paper published in 2010, at the launch of Westmere, the first CPU with AES instructions.


Are you certain using AES is still faster? Let's say for a 32-bit block size and 64-bit key.

From https://en.wikipedia.org/wiki/Speck_(cipher), that Speck combination would use 22 rounds, and using the instruction timings for Zen 5 from https://instlatx64.github.io/InstLatx64/AuthenticAMD/Authent..., it looks like each round would take at most 3 cycles. (Dependency chain for each round is 3 instructions long, ror+add+xor). 22*3 = ~66 cycles.

Using AES with a pshufb to take out the ShiftRows step would be 2 cycles for the pshufb and 4 cycles for each aesenc, and at 10 rounds, you have ~60 cycles.

It's quite close, and to say which one wins, we'd need to actually benchmark it. One is not clearly much faster than the other.
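To make the 3-instruction dependency chain concrete, here's a hedged Python sketch of a single Speck32/64-style round (16-bit words, rotation amounts 7 and 2) together with its inverse. This is just the round function, not the full cipher or key schedule:

```python
MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def ror(x: int, r: int) -> int:
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x: int, r: int) -> int:
    return ((x << r) | (x >> (16 - r))) & MASK

def round_enc(x: int, y: int, k: int):
    # The dependency chain discussed above: rotate, add, xor
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def round_dec(x: int, y: int, k: int):
    # Inverse: undo each step in reverse order
    y = ror(x ^ y, 2)
    x = rol(((x ^ k) - y) & MASK, 7)
    return x, y
```

In compiled code each round is essentially the ror+add+xor sequence whose latency is being counted above.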


Standard AES-128 has a throughput of around 16 bytes per 8 clock cycles or even less in recent CPUs, because they can do 2 or 4 AES instructions per clock cycle (in the modes of operation that are not limited by latency).

AES-128 can be easily modified to independently encrypt four 32-bit words per execution, instead of one 128-bit block, by cancelling the byte permutation that extends the AES mixing function from 32-bit to 128-bit. This would increase the throughput at least twice, depending on whether PSHUFB is done concurrently or not.

You have given the latencies of the instructions, not their throughput. When you use AES in such a way that you are limited by latency, that is normally wrong. The cryptographic libraries have multi-buffer functions, which compute e.g. 8 AES values, so that they are not limited by latencies.

Regarding the parent article, if you want an unpredictable identifier for a record, you should not do this by encrypting some value with the intent of decrypting it in the future. Instead, you should use an unpredictable random number as the identifier. Such identifiers can be generated with AES in batches, at maximum throughput, and stored until they are needed for assignment to a record.

If you need in your record some information like time of creation or a monotonically increasing number, which you consider private, such information should be put in distinct fields that you do not expose externally, instead of attempting to encrypt it in a record identifier, which would need to be decrypted to access such information.


>You have given the latencies of the instructions, not their throughput. When you use AES in such a way that you are limited by latency, that is normally wrong.

I did that because TFA is talking about encrypting 32 bit IDs, which is 1/4th of an AES block. There aren't multiple blocks to do at once in this scenario, and throughput numbers do not apply because each instruction depends on the result of the one before.

You mention doing multiple IDs at once, but the overhead of pulling multiple IDs into a single batch from something akin to URLs in web requests is likely gonna be worse than any gains.

>Instead of this, you should use as identifier an unpredictable random number. Such identifiers can be generated with AES in batches, at maximum throughput, and stored until they are needed for assignment to a record.

Now you lose the ability to sort the records in a database, and I fail to see what AES gives you here over any other random number generator.


Maybe the reason they are so close is that the AES microcode is implementing exactly those operations.


There's nothing similar about AES and Speck, and the "microcode" for AES isn't like what you're thinking of. If you want to learn more about it, you can look up the specifications for AES and Intel's AES instruction set.


Would it, though? Either way you're operating in ECB mode with 2^32 or 2^64 values. Why is one more secure than the other?

EDIT: What I mean is you can do cypher = truncate(plain ^ AES(zero_extend(plain))).


>EDIT: What I mean is you can do cypher = truncate(plain ^ AES(zero_extend(plain))).

How would you decrypt that though? You truncated 3/4ths of the AES output needed to decrypt it.

I thought you were suggesting this:

  ciphertext = truncate(AES(key) ^ plaintext)
And in this case, since AES(key) does not depend on the plaintext, it would just be XOR by a constant.


The first examples in the parent article do not require decryption. They only require unpredictable random numbers that are unique.

If uniqueness is needed for a 32-bit number or a 64-bit number, then in AES-128 the byte permutation can be modified, to reduce the block size accordingly.

For the other examples with record identifiers, I am not sure whether the author meant for them to ever be decrypted. If decryption was intended, I disagree that this is the right solution. If an opaque record identifier is desired, it should be an unpredictable random number, which can be generated at maximum speed with AES. There is no need to ever decrypt such an identifier.

If other private information is needed, like a record counter, it should be put in separate fields, that are not provided to external entities, instead of encrypting it inside the identifier. Encrypting such private information would prevent its use in indexing anyway.


You're right, my bad. I guess if you have strict size requirements it does make sense to use small block sizes.


https://www.cs.ucdavis.edu/~rogaway/papers/thorp.pdf

(Not that this is the only solution but that it motivates the problem of why you can't just naively apply AES to the problem).


Some people just itch to use something custom, and then to have to think it through themselves. That can bring amazing results, sure, but it can also bring spectacular disasters, especially when we're talking about crypto.


The article is less about crypto and more about improving UUID (and IDs in general) with small block ciphers. It's a low impact mechanism to avoid leaking data that UUID by design does leak. It also doesn't require a good source of entropy.


The block size of a block cipher function like AES is important for its security level, but it is completely independent of the size of the data that you may want to encrypt.

Moreover, cryptography has many applications, but the three most important are data encryption, data integrity verification, and random number generation.

The optimal use of a cryptographic component, like a block cipher, depends on the intended application.

If you want e.g. 32-bit random numbers, the fastest method on either Intel/AMD x86-64 CPUs or Arm AArch64 CPUs is to use the 128-bit AES to encrypt a counter value and then truncate its output to 32 bits. The counter that is the input to AES may be initialized with an arbitrary value, e.g. 0 or the current time, and then you may increment only a 32-bit part of it, if you desire. Similarly, for other sizes of random numbers less than 128 bits, you just truncate the output to the desired size. You can also produce random numbers uniform over a range that is not a power of two, by combining either multiplication or division of the output value with rejection sampling, done either before or after the operation, to remove the bias.
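A minimal Python sketch of this counter-encrypt-truncate scheme, including rejection sampling for a non-power-of-two range. SHA-256 stands in for AES-128 here (the stdlib has no AES), so this only illustrates the structure, not the performance; the class and method names are made up.

```python
import hashlib

class Ctr32:
    """Sketch: encrypt a counter with a keyed PRF, truncate to 32 bits."""

    def __init__(self, key: bytes, start: int = 0):
        self.key = key
        self.counter = start  # arbitrary initial value, e.g. 0 or current time

    def next_u32(self) -> int:
        block = hashlib.sha256(self.key + self.counter.to_bytes(16, "big")).digest()
        self.counter += 1
        return int.from_bytes(block[:4], "big")  # truncate to 32 bits

    def next_below(self, n: int) -> int:
        # Rejection sampling to remove modulo bias for a non-power-of-two range
        limit = (1 << 32) - ((1 << 32) % n)
        while True:
            r = self.next_u32()
            if r < limit:
                return r % n
```

The same structure works for any output size below the block size: generate, truncate, and reject only when the range is not a power of two.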

Similarly, for message authentication, if you have some method that produces an 128-bit MAC, it can be truncated to whatever value you believe to be a good compromise between forgery resistance and message length.

For encryption, short data must be encrypted using either the CTR mode of operation or the OCB mode of operation (where only the last incomplete data block is encrypted using the CTR mode). With these modes of operation, the encrypted data can have any length, even a length that is not an integer number of bytes, without any length expansion of the encrypted message.

The advice given in the parent article is not bad, but it makes sense only on 32-bit microcontrollers, because since 2010 for x86-64 and since 2012 for AArch64, any decent CPU has AES instructions that are much faster than a software implementation of any other kind of block cipher.

Moreover, for random number generation or for data integrity verification or for authentication, there are alternative methods that do not use a block cipher but instead a wider invertible function, and which may be more efficient, especially in microcontrollers. For instance, for generating 128-bit unpredictable random numbers, you can use a counter with either an 128-bit block cipher function together with a secret key, or with a 256-bit invertible mixing function, where its 128-bit output value is obtained either by truncation or by summing the 2 halves. In the first case the unpredictability comes from the secret key, while in the second case it comes from the secret state of the counter, which cannot be recovered by observing the reduced-size output.

For applications where a high level of security is not necessary, e.g. for generating 32-bit random numbers, the already high speed of AES-128 (less than 0.5 clock cycles per output byte on recent CPUs) can be increased by reducing the number of AES rounds, e.g. from 10 to 4, with a proportional increase in throughput.


Please explain to me how you can hash n distinct strings into O(n) buckets in O(1) time. Please note that this process needs to work as n goes to infinity.

Hash tables are O(log n) structures when you don't hand-wave away the "compute a hash" part. The thing is, search trees are far worse than that in practice and you aren't hand-waving away the "compare two elements" part. That's where the real speed savings come from.


What I think you are saying is that computing the hash needs to process the entire string, and the length of that string roughly corresponds to log n, therefore it's O(log n). Not sure I am entirely convinced by that reasoning, but let's roll with it for now.

Because if you apply it to binary search, you need to compare the strings at every step, and by that logic, each of these operations is O(log n), which means your binary search is now O(log^2 n).

I guess the crux is that we are still comparing apples to oranges (or multiplication operations to comparison operations), and at the end what probably makes hashing faster is that we are not branching.

Still I don't think it makes sense to think of both hash tables and binary search as O(log n).
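A quick way to see the point empirically: build n distinct strings of length ~log2(n), then count the characters examined by binary search versus a single pass over the key, under the (assumed) cost model that a hash reads each character exactly once. All names here are illustrative.

```python
n = 1024
# n distinct strings, each of length log2(n) = 10
keys = sorted(format(i, "010b") for i in range(n))

def char_compares(a: str, b: str):
    """Lexicographic compare on equal-length strings; returns (chars examined, sign)."""
    c = 0
    for x, y in zip(a, b):
        c += 1
        if x != y:
            return c, (-1 if x < y else 1)
    return c, 0  # equal

def binary_search_cost(keys, target):
    lo, hi, cost = 0, len(keys) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        c, cmp = char_compares(target, keys[mid])
        cost += c
        if cmp == 0:
            return mid, cost
        if cmp < 0:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1, cost

def hash_cost(target: str) -> int:
    # Assumed model: a hash must read every character once, ~log2(n) work
    return len(target)

target = format(777, "010b")
idx, bs_cost = binary_search_cost(keys, target)
print(bs_cost, hash_cost(target))  # binary search touches many more characters
```

Near the target, consecutive probes share long prefixes, so each of the ~log n comparisons costs up to ~log n characters, while the hash pays its ~log n character reads exactly once.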


Good. Video games are being ruined by shareholders demanding returns over good games. The less public investment there is, the better.


Just which prominent publishers remain private, anyway? Besides Valve.


EA!


Epic.


Haha, yes, ruining games by that hated public video game studio, Nintendo


Yeah I don't even agree with the conceit that public investment is causing companies to make bad games. You don't make 'good returns' as a video game company with bad video games, I don't care how fancy your financial engineering is. And private investors care about returns just as much as public investors do.

If anything a lot of this is on gamers for continuing to buy shitty games that they complained even at the time were shitty games.

If they'd stopped buying games the lousy companies would have folded, their investors would have eaten a loss, and the game companies that remained would care more about the investors who'd shown more of a long-term focus.


The article addresses this, actually. Fetching any unsecured content is an attack vector. https://danq.me/2026/01/28/hsbc-dont-understand-email/#footn...


In this particular case, injecting content into the image to make someone read a false message doesn't seem possible. The pixel <img> tag has width and height set to one. This overrides whatever the image size is. No altered message will be readable.


This is true up until the point that someone finds a security issue with an image parser that’s present in a browser engine, and suddenly you have an RCE.


If you have access to an exploit and want to compromise someone with an image, you'd usually just send it to them directly via e-mail or SMS or AirDrop or whatever, or all of the above. And it'll even work if your image is linked in an email via HTTPS.

Trying to MITM an existing tracker pixel when they're connected to public WiFi sounds like practically the hardest way to do it.


The harder it is to do, the more the target's guard will be down.

In this case, sending your malicious image through a fake email might get flagged, or simply not opened, by someone who's been trained in infosec enough to be suspicious of these things. But a tracking pixel in an email that is verifiably from a trusted entity will be opened no problem. The type of thing that will look pretty slick if you read about it being used.


It's incredibly easy to get people to open emails. This isn't asking them to download an attached .zip or .exe file or follow a suspicious link, which is what people are trained against. This is just an embedded image.


So you're saying... it's type inference of type classes, just like in Haskell?


No, I don't think so, not unless there's some feature of Haskell type classes I'm completely unaware of.

If anything it's closer to SFINAE in C++, where it tries to implement methods but then doesn't consider it an error if it fails, and then infers type classes based on the outcome of the SFINAE process. Or the macro analogy another poster made isn't bad (with the caveat that it's a type-system-aware macro, which at least in Rust is strange).


I am not sure how Haskell works but I think what the previous poster meant is that the types get determined at compile time. Closures are akin to macros except you can't see the expanded code.

