Hacker News | Filligree's comments

But is it true or false?

I’ve used titles like that for thirty years.

I'm going to ask the question I ask everyone who claims they've written like that for years: can you show us a link, from prior to 2022, where you wrote like that?

No, of course not. It’s all corporate internal documentation.

I suppose my high school essays were not. Apologies, but those are lost.


Nobody owes you evidence for your witch hunts.

Sure, but look: we have seen these claims so many times that, if it were true, by now someone would have linked at least one archived blog post showing that this is indeed how humans used to write.

The lack of a single example is very telling.


Sure, and an LLM-written article will use that pattern eight times in two pages.

ZFS is out because the Linux developers refuse to cooperate by providing the hooks it would need to avoid duplicating the disk cache.

That’s the only real reason. There are some papercuts, but they don’t compare to the risks described in this article.


A licensed engineer who signs off on a bridge that collapses will not remain an engineer, and may be open to criminal prosecution. Their employer knows that, and therefore doesn’t ask them to make that choice. In the rare cases where they do, the engineer doesn’t end up blacklisted across the industry for saying no.

A software engineer is not so lucky.


China has never threatened war against my country; America has. Between the two, it’s clearly safer to lean towards the Chinese options if EU ones aren’t available.

That’s incredibly naïve.

More naive than blithely blowing off threats of war?

Meh, people have their own interests and values. And you can't force people to spend money, no matter how much you may disagree with them.

Bring on the Chinese, fuck the Americans.


Americans hating on the Chinese for doing to them what they did to the rest of the world for 50 years.

Just without the bombs part.


Nothing but Love for China.

How so?

A modern fridge also uses approximately five watts, on average. There are far better targets.

At the rate robots are improving, will that still be the case in ten years?

Why? Surely copying the same pixels out sixty times doesn't take that much power?

The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.

Put simply, this means each pixel can hold its state longer between refreshes, so the panel can safely drop its refresh rate to 1Hz on static content without losing the image.

Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all of it generating heat fighting the reactance and resistance of various conductors. Dropping the entire chain to 1Hz yields meaningful power savings.
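To put rough numbers on that: a back-of-envelope sketch, assuming an illustrative 4K panel at 24 bits per pixel (these figures are my assumptions, not LG's specs), of the raw link bandwidth needed just to retransmit every pixel every frame:

```python
# assumed 4K panel, 24 bits per pixel -- illustrative only
width, height, bits_per_pixel = 3840, 2160, 24

def link_bits_per_second(refresh_hz):
    """Raw bits/s to ship every pixel to the panel at a given refresh rate."""
    return width * height * bits_per_pixel * refresh_hz

print(link_bits_per_second(60) / 1e9)  # ~11.9 Gbit/s at 60 Hz
print(link_bits_per_second(1) / 1e9)   # ~0.2 Gbit/s at 1 Hz
```

The savings scale linearly with refresh rate, so a 60Hz-to-1Hz drop cuts that portion of the chain's activity by 60x.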

[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...


So it's a Sharp MIP scaled up? https://sharpdevices.com/memory-lcd/

Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full-color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.

So, no, there is a meaningful difference in the nature of the circuits.


Thanks. Great explanation.

Copying: Draw() is called 60 times a second.

It isn't for any reasonable UI stack. For instance, the xdamage X11 extension for this was released over 20 years ago. I doubt it was the first.

Xdamage isn't a thing if you're using a compositor, for what it's worth. It's more expensive to try to incrementally render than to just render the entire scene (for a GPU, anyway).

And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.


That's not true. I wrote a compositor based on xcompmgr, and damage was widely used there. It's true that damage tracking is basically pointless for the final pass on GL, but damage was still useful to figure out which windows required new blurs and updated glows.

At the software level yes, but it seems nobody has taken the time to do this at the hardware level as well. This is LG's stab at it.

Apple has been doing this since they started having 'always-on' displays.

So has Samsung, but we're talking mobile devices with OLED displays, which is an entirely different universe both hardware and software-wise.

What's your mental model of what happens when a dirty region is updated and now we need to get that buffer on the display?

It was, but xdamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.

The frame buffer (at least the portion of the GPU responsible for reading it and shipping the contents out over the port), the cable to the display, and the display panel itself were still reading, transmitting, and refreshing every pixel at 60Hz (or more).

This LG display tech claims to be able to turn that last portion down to a 1Hz rate from whatever it usually runs at.


You forget that all modern UI toolkits brag about who has the highest frame rate, instead of updating only what's changed and only when it changes.

Do you have the genetics for that? It takes a lot of raw strength, and not that much intelligence.

It does add complexity, and the optimal solution is probably not to use it. Consider what happens if a 4kB page has only a single unique word in it—you’d still need to load it to memory to read the string, it just isn’t accounted against your process (maybe).

I would have expected something like this:

- Scan the file serially.

- For each word, find and increment a hash table entry.

- Sort and print.

Technically this does require slightly more memory, but only a tiny amount: just a copy of each unique word, and if this is natural language then there aren't very many. Meanwhile, OOP's approach puts massive pressure on the page cache once you get to the "print" step, which is going to be the bulk of the runtime.

It’s not even a full copy of each unique word, actually, because you’re trading it off against the size of the string pointers. That’s… sixteen bytes minimum. A lot of words are smaller than that.
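A minimal sketch of that scan-count-sort pipeline (Python purely for illustration; `word_counts` and whitespace tokenization are my choices, not anything from the thread):

```python
from collections import Counter

def word_counts(path):
    """Scan the file serially, incrementing a hash table entry per word,
    then return (word, count) pairs sorted by descending frequency."""
    counts = Counter()
    with open(path) as f:
        for line in f:  # serial scan; the kernel handles read-ahead
            counts.update(line.split())
    return counts.most_common()
```

The hash table holds one copy of each unique word plus a counter, which is the "slightly more memory" being traded off above.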


That is a valid solution, but what IO block size should you use for the best performance? What if you end up reading half a word at the end of a chunk?

Handling that is, in my opinion, way more complex than letting the kernel figure it out via mmap. The kernel knows way more than you do about the underlying block devices, and you can use madvise with MADV_SEQUENTIAL to indicate that you will read the whole file sequentially. (That might free pages prematurely if you keep references into the data rather than copying the first occurrence of each word, though, so perhaps not ideal in this scenario.)
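For reference, one common way the chunk-boundary issue is handled when not using mmap: carry the trailing partial word into the next chunk. A hedged sketch (`words_from_chunks` is a hypothetical helper that splits on whitespace):

```python
def words_from_chunks(f, chunk_size=1 << 20):
    """Yield whitespace-delimited words from a binary stream, carrying any
    partial word at a chunk boundary over into the next chunk."""
    leftover = b""
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            if leftover:
                yield leftover  # final word at end of stream
            return
        parts = (leftover + chunk).split()
        if not chunk[-1:].isspace():
            # chunk ended mid-word: hold the last token back for the next round
            leftover = parts.pop() if parts else b""
        else:
            leftover = b""
        yield from parts
```

Whether that extra bookkeeping is simpler than mmap is exactly the judgment call being debated here.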


