Hacker News | lucb1e's comments

I think that must be the point they're trying to make, yes

It also drives home that Anubis needs a time estimate for sites that don't use it as a mere "can you run JavaScript" wall but as the actual proof-of-work mechanism it purports to be

It shows a difficulty of "8" with "794 kilohashes per second", but what does that mean? I understand the 8 must be exponential (not literally that 8 hashes are expected to find 1 solution on average), but even as a power of 2 (2^8 = 256, which I happen to know by heart), thousands of hashes per second would find an answer in a fraction of a second. Or if it's 8 bytes instead of bits, then you'd expect to find a solution after something like 8 million hashes, which at ~800k per second is about ten seconds. There is no way to figure out how long the expected wait is, even if you understand all the text on the page (which most people wouldn't) and know some shortcuts to do the mental math (how many people know small powers of 2 by heart?)
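
For what it's worth, a back-of-the-envelope estimate is possible if you assume (and this is an assumption about how Anubis interprets its difficulty number, not something the page tells you) that difficulty d means d leading zero hex digits in a SHA-256 hash:

```typescript
// Assumption: difficulty d requires d leading zero hex digits in a
// SHA-256 hash. Each hex digit has 16 possible values, so a single
// attempt succeeds with probability 16^-d and the expected number of
// attempts is 16^d.
function expectedHashes(difficulty: number): number {
  return 16 ** difficulty;
}

// Expected wait in seconds at a given hash rate.
function expectedSeconds(difficulty: number, hashesPerSecond: number): number {
  return expectedHashes(difficulty) / hashesPerSecond;
}

// Difficulty 8 at 794 kilohashes/s: 16^8 ~= 4.3 billion hashes,
// ~5400 seconds, i.e. roughly an hour and a half.
console.log(expectedSeconds(8, 794_000));
```

If the difficulty counted zero bits instead, 2^8 = 256 expected hashes would finish instantly, which is exactly why the interstitial page needs to say which interpretation it means and show the resulting estimate.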


I would agree, but

- accepting that they take the finger now makes me worried about the rest of the hand

- it seems like a complete strawman argument: I have never heard of anyone getting scammed by being guided through system menus to enable app installations and then downloading and installing an apk from the scammer, as opposed to just going to the play store and installing e.g. teamviewer

- apps are already a pain about users having access to their own devices. If they can somehow detect that you're in "advanced flow" mode... that's going to be a real joy, and it will further discourage/scare people away from using this

- my current understanding of the finger they've given us is that it does not allow publishing apps both via the play store and outside of it unless you change the app ID. One signing key is bound to one app ID when the developer does the verification to be in the Play Store, and code compiled by an independent party is not installable. F-droid still can't exist in its current form


Ah yes, getting access to your own data would be a massive problem, can you imagine such a world?! /s

Such data should be put in (or encrypted by) the hardware-backed keystore. You get to have full access to what the OS does, including seeing what data gets passed into this secure element for encryption or signing (you retain visibility and control), and yet secrets can't be leaked to you or an attacker who tries to extract those secrets

See e.g. your bank card: it's yours, you can choose where to stick it and what transactions it authorizes, but you can't get at the token that serves as proof of possession nor reset the PIN attempts counter. Your phone('s banking app) could work in the same way and has the hardware on board that makes this possible. So you see, it's a choice that you don't get to see what apps are doing and people are scared into believing that access to their own phone is bad. It's a matter of conflicting incentives on the vendor side, not technical risk


There is an API for backing up all app data that requires authorization. This is different from giving the user root, so no malicious app can back up all app data at any time.

Which API do you mean?

adb backup

If you control the build, you should implement your own Backup Service. You should not just open all apps' data to any app.


Oh, that useless thing. I was very confused about something which can back up "app data that requires authorization" (I thought maybe it's some Google service that extracts your secrets for device migrations), but you just mean the old adb backup that the security industry (that I'm part of, and fighting from within :p) destroyed in the name of people's own good

Like, yes this exists, but it doesn't back up half the things you need :(


"adb backup" is buggy and deprecated.

It's easier and more reliable to use adb root to rsync everything. No apps need root access that way.


> the most obvious, straightforward, user-friendly approach, and it was never even discussed

Fwiw, it was "discussed" in the sense that the person we're arguing with meant upthread ("let's discuss a good solution instead of this boring repetitive outrage"), but it's not like Google listens, so any such discussion is pointless anyway. It is indeed the obvious solution and it comes up in each of these threads, but believers like GP can always come up with new rationalizations for why Google doesn't implement one proposal or another


If there is literally "No amount of scary menus will work." then those people cannot use computers. So long as they can transfer money, or do any other action a scammer may want done, the scammer can simply tell them to do it. By that logic, they should not be allowed to install banking apps at all and need a legal guardian to manage their digital belongings

If the solution is that nobody has control of their digital life anymore (see also attempts to require client-side scanning and verify user age, which don't work if said user can override it) then we've lost sight of the bigger picture


What public stance do you mean? Did they say somewhere that sharing statistics about Android is against their morals or what do you mean?

Their stance is that they want to lock up Android; sharing the truth just doesn't support their goals

Because we hear so many stories where the scammer directed their target to install an app so that their scam works

I know a lot more people that install newpipe than people that got scammed by any means, and have never heard of anyone being asked to install an app by a scammer


But I was scammed by newpipe! It said I can watch YouTube, but there aren't any ads! Now I don't know what to buy. It even had CCC Media, so now my videos are informative and insightful. Where's my influencers?!

> i will get negative karma again

"Please don't comment about the voting on comments. It never does any good, and it makes boring reading." https://news.ycombinator.com/newsguidelines.html


Or another process will die at random instead, which might be your desktop environment, the main browser process, Signal (10% chance at corrupting message history each time), a large image you were working on in Gimp...

Firefox has gotten very good at safely handling allocation failures, so instead of crashing it keeps your memory snugly at 100% full and renders your system entirely unusable until the kernel figures out (2-20 minutes later) that it really cannot allocate a single kilobyte anymore and it decides to run the OOM killer

but also

it's not cheap? Why should everyone upgrade to 32GB of RAM to multitask when all the text, images, and data structures in open programs take only a few megabytes each? How can you not get hung up on the senseless exploding memory usage


That's not how it works. Process killing is one of the last ways memory is recovered. Chrome starts donating memory back well before that happens. Try compiling something and see how ram usage in chrome changes when you do that. Most of your tabs will be discarded.

I've already described above what the browser's behavior is. That your browser works differently is good for you; I'm not using a Google product as my main browser. There are also other downsides that this other behavior does not fix, mentioned in sibling comments

This is not a chrome problem but an OS problem. Android does a much better job here by comparison. Desktop Linux is simply not well optimized for low RAM users.

I dunno I have 96GB of RAM and I still get the whole "system dies due to resource exhaustion" thing. Yesterday I managed to somehow crash DWM from handle exhaustion. Man, people really waste resources....

AWS has a similar RAM consumption. I close Signal to make sure it doesn't crash and corrupt the message history when I need to open more than one browser tab with AWS in the work VM. I think after you click a few pages, one AWS tab was something like 1.4GB (edit: found it in message history, yes it was "20% of 7GB" = 1.4GB precisely)

Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on it that take gigabytes (AWS), or pages that look simple but it takes your browser everything it has to render it at what looks like 22 fps? (Reddit's new UI and various blogs I've come across.) Or the page runs smoothly but your CPU lifts off while the tab is in the foreground? (e.g. DeepL's translator)

Every time I wonder if they had an LLM try to get some new feature or bugfix to work and it made poor choices performance-wise, but it completes unit tests so the LLM thinks it's done and also visually looks good on their epic developer machines


I think a big problem is the fact that many web frameworks allow you to write these kinds of complex apps that just "work", but performance is often not included in the equation

so it looks fine during basic testing but it scales really badly.

like for example the claude/openAI web UIs: at first they would literally lag really badly because they used simple update mechanisms that re-rendered the entire conversation history every time the new response text was updated

and with those console UIs, one thing that might be happening is that it's basically multiple webapps layered (per team/component/product) and they all load the same stuff multiple times etc...
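
purely as an illustration (this is obviously not their actual code), the re-render-everything pattern described above makes total rendering work quadratic in the number of streamed updates, while an incremental update stays linear:

```typescript
// Hypothetical cost model: if streamed update i re-renders all i
// messages so far, total work after n updates is 1 + 2 + ... + n,
// i.e. n(n+1)/2 -- quadratic growth.
function fullRerenderWork(updates: number): number {
  let work = 0;
  for (let i = 1; i <= updates; i++) work += i;
  return work;
}

// If each update only touches the one message that changed,
// total work is just n.
function incrementalWork(updates: number): number {
  return updates;
}

// After 1000 streamed tokens: 500500 units of work vs 1000.
console.log(fullRerenderWork(1000), incrementalWork(1000));
```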


The Grok android app is terrible in that sense. Just typing a question at normal speed will make half of the characters not appear, due to whatever unoptimized shit the app does after each keystroke.

Sounds quite overengineered. CEOs have basically no idea what they're doing these days. If this were my company, I'd start by cutting 80% of staff and 80% of the code bloat.

Don't know if this is satire, but I do wonder if Musk uses the Grok app himself.

As someone who knows xAI employees, he does use it a LOT and reports bugs very often afaik

The "very often" part is wild to me. You'd think that, being an engineer himself[0], he'd fix the root cause (the testing process) rather than work as an IC QA himself.

[0] He holds the title of Chief Engineer at SpaceX.


Holding the title of engineer does not make one a good or capable engineer.

Does he use Android?

it's unironically just react lmao, virtually every popular react app has an insane number of accidental rerenders triggered by virtually everything, causing it to lag a lot

well that's any framework with vdom, the GC of web frameworks, so I'd imagine it's also a problem with vue etc..

I don't understand though why performance (I.e. using it properly) is not a consideration with these companies that are valued above $100 billion

like, do these poor pitiful big tech companies only have the resources to do so when they hit the 2 trillion mark or something?


Vue uses signals for reactivity now and has for years. Alien Signals was created by a Vue contributor. Vue 3.6 (now in alpha/beta?) will ship a version that is essentially a Vue-flavored Svelte with extremely fine-grained reactivity based on a custom compiler step.

One of the reasons Vue has such a loyal community is because the framework continues to improve performance without forcing you to adopt new syntax every 18 months because the framework authors got bored.


The React paradigm is just error prone. It's not necessarily about how much you spend. Well paid engineers can still make mistakes that cause unnecessary re-renders.

If you look at older desktop GUI frameworks designed in a performance-oriented era, none of them use the React paradigm, they use property binding. A good example of getting this right is JavaFX which lets you build up functional pipelines that map data to UI but in a way that ensures only what's genuinely changed gets recomputed. Dependencies between properties are tracked explicitly. It's very hard to put the UI into a loop.


Property binding and proxies really didn't work well in JS at all until relatively recently, and even then there is actually a much worse history of state management bugs in apps that do utilize those patterns. I've yet to actively use any Angular 1.x app or even most modern Angular apps that don't have bugs as a result of improper state changes.

While more difficult, I think the unidirectional workflows of Redux/Flux patterns when well-managed tend to function much better in that regard, but then you do suffer from potential for redraws... this isn't the core of the DOM overhead though... that usually comes down to a lot of deeply nested node structures combined with complex CSS and more than modest use of oversized images.


It's not a problem with vue or svelte because they are, ironically, reactive. React greedily rerenders.

It's also not a problem with the react compiler.
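
A minimal sketch of what that fine-grained reactivity means (modeled loosely on Vue/Solid-style signals, not any framework's real API): an effect re-runs only when a signal it actually read changes, instead of a whole component tree re-rendering:

```typescript
// Minimal signal/effect sketch. Reading a signal inside an effect
// subscribes that effect; writing the signal re-runs only subscribers.
type Effect = () => void;
let activeEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (v: T) => void] {
  const subscribers = new Set<Effect>();
  const read = (): T => {
    if (activeEffect) subscribers.add(activeEffect);
    return value;
  };
  const write = (next: T): void => {
    value = next;
    subscribers.forEach((fn) => fn());
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  activeEffect = fn;
  fn(); // first run registers dependencies
  activeEffect = null;
}

// Demo: the effect depends only on `count`, so only `count` updates
// re-run it; unrelated signals never cause extra work.
const [count, setCount] = createSignal(0);
let renders = 0;
createEffect(() => {
  count();
  renders++;
});
setCount(1); // re-runs the one subscribed effect
```

Accidentally re-running everything on every state change, the failure mode being discussed here, is structurally hard to hit in this model because dependencies are tracked per effect.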


Nobody gets promoted for improving web app performance.

Yes, they do. OGs remember that Facebook circa 2012 had navigation take like 5-10 seconds.

Ben Horowitz recalled asking Zuck what his engineer onboarding process was, after the latter complained to him about how long it took them to make changes to code. He basically didn't have any.


From: https://hpbn.co/primer-on-latency-and-bandwidth/#speed-is-a-...

> Faster sites lead to better user engagement.

> Faster sites lead to better user retention.

> Faster sites lead to higher conversions.

If it's true that nobody is getting promoted for improving web app performance, that seems like an opportunity. Build an org that rewards web app performance gains, and (in theory) enjoy more users and more money.


yep. I think this is the root problem, not the frameworks themselves

If it's slow, people also stick around longer when they have something they must accomplish before leaving.

They have no real competitors, so anything that makes the user even stickier and more likely to spend money (LinkedIn Premium or whatever LinkedIn sells to businesses) takes priority over any improvements.

> well that's any framework with vdom

Is it time for vanilla.js to shine again with Element.setHTML()?

https://developer.mozilla.org/en-US/docs/Web/API/Element/set...

It's a bit unfortunate that several calls to .setHTML() can't be batched so that they get executed together to minimize page redraws.
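
You can approximate that batching in userland, though: queue the HTML per target and apply everything in one flush (which you'd trigger from requestAnimationFrame in a real page). A sketch, with the `apply` callback standing in for `el.setHTML(html)` so it runs anywhere:

```typescript
// Coalesce multiple HTML updates per target into a single flush.
// Later writes to the same target overwrite earlier ones, so each
// target gets at most one apply() per flush.
function createBatcher<K>(apply: (target: K, html: string) => void) {
  const pending = new Map<K, string>();
  return {
    queue(target: K, html: string): void {
      pending.set(target, html);
    },
    flush(): number {
      let applied = 0;
      for (const [target, html] of pending) {
        apply(target, html);
        applied++;
      }
      pending.clear();
      return applied;
    },
  };
}

// Demo: three queued updates to two targets collapse into two applies.
const applied: Array<[string, string]> = [];
const batcher = createBatcher<string>((t, h) => applied.push([t, h]));
batcher.queue("header", "<p>1</p>");
batcher.queue("header", "<p>2</p>"); // supersedes the previous write
batcher.queue("footer", "<p>3</p>");
batcher.flush();
```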


Well, they have started firing their lowest-tier devs, who churn a lot... combined with mass layoffs... and on the higher end, they're more interested in devs who memorized all the leetcode challenges than in experienced devs/engineers with a history of delivering solid, well-performing applications.

Narcissism rises to the top, excess "enterprise" bloat seeps in at every level, too many sub-projects are disconnected in ways that are hard to "own" as a whole, and perverse incentives favor adding features over improving the user experience.


I think linkedin is built with Ember.js, not React, last I checked…

The problem with performance in web apps is often not the omg-too-much-rendering, but actually processing and memory use. Chromium loves to eat as much RAM as possible, and the state-management world of web apps loves immutability. What happens when you create new state anytime something changes, and V8 then needs to recompile an optimized structure for that state, coupled with thrashing the GC? You already know.

I hate the immutability trend in web apps. I get it, but the performance is dogshite. Most web apps I have worked on spend about 10% of their CPU time… garbage collecting, and the rest doing complicated deep state comparisons every time you hover over a button.
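
To be fair to the immutable camp, the allocation churn depends a lot on whether each update deep-copies the whole state or uses structural sharing, where untouched branches are reused by reference (this is a generic sketch with hypothetical state shape, not any particular app's store):

```typescript
// Structural sharing: only the changed path is re-allocated; the
// untouched `settings` branch is the same object before and after,
// so the GC gets far less garbage than with a full deep copy.
interface State {
  user: { name: string };
  settings: { theme: string };
}

function rename(state: State, name: string): State {
  return { ...state, user: { ...state.user, name } };
}

const s0: State = { user: { name: "a" }, settings: { theme: "dark" } };
const s1 = rename(s0, "b");
// s1.settings === s0.settings: shared by reference, no new allocation.
console.log(s1.settings === s0.settings);
```

The deep-state-comparison cost is also why well-behaved stores compare by reference (changed branch means new object) instead of deep-equality on hover.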

Rant over.


I was researching laptops at BestBuy and every page took ages to load, was choppy when scrolling, caused my iPhone 13 mini to get uncomfortably hot in my hand and drained my battery fast. It wouldn’t be noticeably different if they were crypto-mining on my iPhone as I browsed their inventory.

It’s astonishing how bad the experience was.


Best Buy is actually one of the worst and slowest websites from any large retailer. I cannot believe how bad it is. It's like they set out to make it pretty and accidentally stepped in molasses.

The irony! My router died literally an hour ago, and I was on bestbuy to buy a new one, over a 5G connection. That was probably the worst shopping experience I've had in a while...

> Does anyone else have the feeling they run into this sort of thing more often of late? Simple pages with just text on it that take gigabytes (AWS), or pages that look simple but it takes your browser everything it has to render it at what looks like 22 fps?

It is to do with websites essentially baking in their own browser written in javascript to track as much user behavior as possible.


Spot on. It's why I quit adtech in 2015. Running realtime auctions server-side is one thing, but building what basically amounts to live-feed screen capture…

I do live-feed screen capture and it doesn't really consume much and is barely noticeable. Running 100 live-feed screen captures is a different story though.

My company started using slack in 2015 and at that time I put in a bug report to slack that their desktop app was using more memory than my IDE on a 1M+LOC C++ project. I used to stop slack to compile…

The bug in Slack is that it uses Electron

Electron has been a blessing and a horrible thing at the same time.

The overhead added by Electron is hardly that significant at this point, you can still use it to write significantly more efficient apps

The overhead of an entire browser engine isn’t significant compared to a native app?

No, compared to everything else in those apps. I.e. if they are writing extremely bloated Electron apps, why would the native version be any less slow and bloated? I mean, Electron's overhead is mostly fixed (it's still a lot, but it's possible to keep memory usage well below 1 GB or even 500 MB even for more complex applications).

A native app that compiles to machine language and uses shared system libraries is by definition going to take less memory and resources than code + web browser + JavaScript VM + memory to keep JIT'd bytecode.

Write a “Hello World” in C++ that pops up a dialog box compared to writing the same as an Electron app.


Yes, exactly, that's what I said. There is significant overhead but is it the only or the main reason why these apps are so slow and inefficient? It's perfectly easy to write slow and inefficient code in C++ as well...

Exactly how would you write a program in C that could possibly be as bloated as adding an entire browser engine + JavaScript runtime?

A highly inefficient render loop. I've seen people commit absolute crimes rendering text in game engines.

This is not what "by definition" means.

A - your code

B - a heavy runtime that is greater than 0

C - system libraries

By definition

A + C < A + B + C


Again, this is not by definition. This is by deduction.

It's always good to not slack when compiling.

Just to clarify for other readers, sword fighting while riding office chairs is not slacking.

Hit this exact wall with desktop wrappers. I was shipping an 800MB Electron binary just to orchestrate a local video processing pipeline.

Moved the backend to Tauri v2 and decoupled heavy dependencies (like ffmpeg) so they hydrate via Rust at launch. The macOS payload dropped to 30MB, and idle RAM settled under 80MB.

Skipping the default Chromium bundle saves an absurd amount of overhead.


I noticed that there's a developing trend of "who manages to use the most CSS filters" among web developers, and it was there even before LLMs. Now that most of the web is slop in one form or another, and LLMs seem to have been trained on the worst of the worst, every other website uses an obscene amount of CSS backdrop-filter blur, which slows down software renderers and systems with older GPUs to a crawl.

When it comes to DeepL specifically, I once opened their main page and left my laptop for an hour, only to come back to it being steaming hot. Turns out there's a video around the bottom of the page (the "DeepL AI Labs" section) that got stuck in a SEEKING state, repeatedly triggering a pile of NextJS/React crap which would seek the video back, causing the SEEKING event and thus itself to be triggered again.

I wish Google would add client-side resource use to Web Vitals and start demoting poorly performing pages. I'm afraid this isn't going to change otherwise; with first complaints dating back to mid-2010s, browsers and Electron apps hogging RAM are far from new and yet web developers have only been getting increasingly disconnected from reality.


So many sites... they're all built as web apps these days when they don't need to be. And they're all full of tracking and "telemetry"…

Yes, it's sometimes extreme. I often wondered if it was my FF browser, but then I'd switch to Opera or Brave, and I would see the same pattern.

It's quite insane


What is this AWS you talk about? :-)

my employer's choice of premium hosting provider

I know what AWS is...that is why your statement

>> AWS has a similar RAM consumption.

Makes no sense to me...


Ah, now I understand your question (and see others already answered). Yeah, I realized that possible confusion after writing it, but hoped it was clear enough after editing in the bit about this AWS problem being in a browser tab. You may have seen the initial version, or it may still have been too confusing. Whoops

I think they are talking about AWS dashboard, but I might be wrong.

the web interface

The official name is the AWS management console. Or just the console.

The ‘dashboard’, the ‘interface’? Reminds me of coworkers who used to refer to desktop PC cases as the hard drive, or people who refer to the web as ‘Google’.


Wow this makes me nostalgic for 2000s era pointless web rage. It’s better material; keep up the good fight.

If you're talking about the AWS management UI, I haven't used it recently but can tell you that the Azure one is no better. One of the stupidest things I remember is that it somehow managed to reimplement a file upload form for one of their storage services such that it will attempt to read the whole file into memory before sending it to the server. For a storage service meant for very large files (dozens of gigabytes or more).
