Hacker News | upofadown's comments

Putting on my user hat...

"OK. Signal has forward secrecy. So messages are gone after I receive them. Great!"

Oh, you didn't turn on disappearing messages? Oh, right, then forensic tools like Cellebrite can get them. You have to turn on disappearing messages. The default is off.

Oh, you did turn on disappearing messages? We send the messages in notifications. So the OS can keep them. Turns out Apple was doing that. There is an option you can turn on to prevent that. It is off by default.

"I'll just delete the entire app!" No, sorry, the OS still has your messages...

At what point does the usability get so bad that we can blame the messaging system?

This same app had a usability issue that turned into a security issue just last year:

End to End Encrypted Messaging in the News: An Editorial Usability Case Study (my article)

https://articles.59.ca/doku.php?id=em:sg


I think one of the main issues is that end-to-end message encryption is a sham as long as backups are not encrypted. I could have good device security, but if the person I'm talking to does not use ADP, iMessage and WhatsApp messages get backed up with only at-rest encryption (I think Signal opts out of standard iOS backups) and possibly the same for backups of the iPhone notification database (which the article suggests as a possibility).

Similarly on Android, WhatsApp suggests unencrypted backups to Google Drive by default.

Putting on my tinfoil hat, I am pretty sure that Google/Apple/Meta have some deal (successor to PRISM) where end-to-end encrypted messaging is tolerated as long as they have defaults that make it possible to access chats anyway. Apple not enabling ADP by default and WhatsApp doing Google Drive backups that are not end-to-end encrypted is the implementation. Since most people just use the defaults, it undermines security of people who care.

It's a 'win-win', the tech companies can wash their hands in innocence, the agencies get access to data, and phone users believe that they are chatting in a secure/private manner.


"end-to-end message encryption is a sham as long as" -- I agree with that but would add even more caveats. If someone can't list those caveats off the top of their head they shouldn't be pretending they aren't able to communicate securely.

Just look at Salt Typhoon, every single person should be way more paranoid than they are, including government & agency officials. The attack surface and potential damage - financial and reputational - will only get worse with AI automation and impersonation, and that's for people who are doing nothing interesting and are law-abiding citizens.


Given the shoddy state of network security at large, especially in infrastructure projects (power plants, hospitals, dams, etc.), I always feel like major governments sit on enough destructive potential - to disrupt their adversaries' communications and anything connected to the Internet - to have the mutually assured destruction potential of a nuclear bomb.

No one’s crazy enough to push that button, because once you do there is no turning back.


I have often wondered about this exact situation. There are many instances of companies that depend on keeping their networks secure, and actively take preventative measures to do so, yet still end up getting hacked. So surely there has to have been infiltration into some of the critical infrastructure keeping cities running. Why don't we hear more about it?

Only semi-conscientious companies will even KNOW they were compromised.

Suspect the rest are either not even looking and/or the attackers removed all their traces before anyone could possibly see.

When was the last time YOU inspected the authorization logs in systemd or the event log in Windows on your personal computer…

In Windows Defender we trust…


I mean the Hungarian minister of Foreign Affairs briefed Lavrov on internal EU matters and there are recordings of one or more calls. It seems that opsec is bad at pretty much every level.

We’re already forgetting when the Secretary of War invited a journalist to the secret SIGNAL group chat

Signal data is not backed up by the OS; Signal has a local backup solution and an in-app E2E cloud backup for $2/month.

The backup is free for text and something like 60 days of media. You only have to pay to backup all media.

This is what I’ve always hated with Apple Time Machine, which I think MUST have been deliberate:

    - create an encrypted disk
    - install Mac OS on the encrypted disk
    - use Time Machine to back it up with encryption turned on
All good so far. Ok, time to restore:

    - Restore from Time Machine
    - enjoy your PLAIN TEXT install :poo:

This isn't really an issue anymore. All M series Macs (and T2?) are always encrypted by default.

> the tech companies can wash their hands in innocence

Hostile defaults, not just in tech, are how Western liberal soft power often works. They can always claim "hey, you have the choice", but they know very well most people won't even know they have the choice, or it is so cumbersome or costly to move away from the hostile defaults - and stay away - that in practice the effect is the same as if you lived in a totalitarian regime. The difference is that you can keep believing in the deception of "freedom" in a Western liberal society; in a totalitarian regime, you are much more likely to know you've got a jackboot on your throat, because there is one.

What is needed isn't radical liberal atomistic individualism, which rationalizes the antisocial war of all against all that rewards raw might. You won't find freedom there. You need a culture of respect for, and sense of duty toward, the authentic common good, backed by moral authority, where authority is power + justice.


People keep pushing Signal because it is supposedly secure. But it runs on platforms that are so complex, with so much ecosystem garbage, that there is no way to know with any reasonable confidence whether you've done everything required to ensure you are communicating only with the person you think you are. There could be listeners at just about every layer, and that is without even looking at the metadata angle, which is just as important (who communicated with whom, when, and possibly from where).

I've raised concerns about the Signal project whitewashing risks such as keyboard apps or the OS itself, and the usual response is that it's my fault for using an untrustworthy OS and outside Signal's scope.

At some point there needs to be a frank admission that E2E encrypted messaging apps are just the top layer of an opaque stack that could easily be operating against you.

They've made encryption so slick and routine that they've opened a whole new vector of attack through excessive user trust and laziness.

Encrypting a message used to be slow, laborious and cumbersome; which meant that there was a reticence to send messages that didn't need to be sent, and therefore to minimise disclosure. Nowadays everything is sent, under an umbrella of misplaced trust.


There is nothing secure about sending encrypted content to notifications. If it were secure, it would only notify that there is a message, with no details included.

> If it were secure, it would only notify that there is a message, with no details included.

You're right. This is configurable via settings, but is not the default state.

That said: if I can get friends and family to use Signal instead of iMessage, that gives me the opportunity to disable those notifications and experience more security benefits.

But I agree with your point: most people think that Signal is bulletproof out of the box, and it's clearly not.


You only control one side of any conversation.

Once again there is a trade off between security and user convenience.

If security is the main differentiator then the app should start in the most secure mode possible, then allow users to turn on features while alerting them to the risks. Or at least ask users at startup whether they want "high sec mode" or "convenient mode".

As the app becomes more popular as a general messaging replacement, there will be a push towards greater convenience and broad based appeal, undermining the original security marketing as observed here.


Exactly. But sooner or later the cost of support overcomes the need for security; that's what is driving this. Popularity is the main reason Signal is now less secure than it was in the past.

The median user isn't going to change default settings, so your app is as secure as whatever the default is.

Even if I change the setting, my messages aren't truly secure against this unless all recipients do the same on all of their devices.

> Oh, you did turn on disappearing messages? We send the messages in notifications. So the OS can keep them.

Worse than that, they did not take advantage of the ability to send that message data as an encrypted payload inside the notification.

https://blog.davidlibeau.fr/push-notifications-are-a-privacy...

Either do not include sensitive user data inside a notification by default, or encrypt that data before you send it to the notification server.


According to Michael Tsai, they did use encrypted notification payloads. The OS just then stores the decrypted payloads in its notification database. [0]

[0] https://mjtsai.com/blog/2026/04/10/notifications-privacy/


Signal developer here. Our FCM and APN notifications are empty and just tell the app to wake up, fetch encrypted messages, decrypt them, and then generate the notification ourselves locally.
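The pattern the developer describes can be sketched roughly like this (a toy illustration, not Signal's actual code; the "cipher" here is a stand-in hex encoding, and all names are made up):

```python
# Toy sketch of the "empty push" pattern: the push payload carries no
# message content; the app wakes, pulls ciphertext from the server,
# decrypts locally, and builds the notification itself.

SERVER_QUEUE = []  # hypothetical server-side mailbox of ciphertexts

def server_send(ciphertext: str) -> dict:
    """Queue the ciphertext; return the push payload the OS delivers."""
    SERVER_QUEUE.append(ciphertext)
    return {"wake": True}  # no sender, no text: the push service sees nothing

def on_push(payload: dict, decrypt) -> list:
    """Client-side handler: fetch, decrypt, then notify locally."""
    assert "text" not in payload       # content never rides in the push
    messages = [decrypt(c) for c in SERVER_QUEUE]
    SERVER_QUEUE.clear()
    return messages                    # rendered as *local* notifications

# Demo with a stand-in "cipher" (real clients use the Signal protocol):
push = server_send("hello".encode().hex())
texts = on_push(push, lambda c: bytes.fromhex(c).decode())
```

The point being made upthread still stands, though: even with an empty push, whatever the app then writes into the local notification can end up in the OS notification database.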

That's certainly a better state of affairs.

So you just need to fix the default setting and not display the message text in notifications to prevent this issue in the future?


> We send the messages in notifications. So the OS can keep them. Turns out Apple was doing that. There is an option you can turn on to prevent that. It is off by default.

At least on my iPhone the default is to preview messages only when unlocked [0]. This user went out of their way to show previews in the locked state, which meant the device was vulnerable to digital acquisition without the unlock code.

[0] https://files.catbox.moe/3gwjoy.png


That doesn’t solve the problem. You have to configure Signal to not send the information in the notification.

“We learned that specifically on iPhones, if one’s settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device,” a supporter of the defendants who was taking notes during the trial told 404 Media

That doesn't indicate this is an issue when you have it set to preview only when unlocked.


Use SimpleX if you really want a secure messenger. Endorsed by Whonix, which is endorsed by Snowden.

https://www.whonix.org/wiki/Chat#Recommendation


SimpleX has the same problem with notifications. You still have to properly configure it. Opsec is hard.

> endorsed by Snowden.

Who is endorsed by the Russian government


If the encryption security isn’t a freaking pain in every ass in the Tri county area, it’s not secure.

That’s been my go-to and I’ve yet to see it not work.


0) Get the recipient's public key. 1) Encrypt the file with it. 2) Send the file.

WTF. This is super simple stuff.


3) recipient stores decrypted content in plain text and backs that up in well-known cloud storage systems

They are an entity separate from Mozilla:

* https://blog.thunderbird.net/2020/01/thunderbirds-new-home/


They are not entirely separate from Mozilla. The MZLA Technologies Corporation is a for-profit subsidiary of the Mozilla Foundation. They have access to some of Mozilla's common infrastructure, but are otherwise entirely funded by donations. Donations to MZLA only fund Thunderbird and no other products.

Seems fine if you can donate to Thunderbird development. Compared to Firefox, where I don't think it's possible to donate to development at all (only to Mozilla activism side).

You can buy their products. AFAIK if you buy e.g. Firefox Relay, the revenue does not go to the foundation.

Edit: I just checked the invoice; payment does indeed go to Mozilla Corporation, not the foundation.


Mozilla also runs hiring and HR for MZLA. They control who gets hired and fired.

It is more like money laundering than an independent entity.


> Mozilla also runs hiring and HR for MZLA. They control who gets hired and fired.

This is completely and utterly false.

MZLA hiring posts are placed on the Mozilla hiring site, and nothing more.


They are a wholly owned subsidiary. They're separate from Firefox, not Mozilla.

To be more clear:

* MZLA is a subsidiary of Mozilla FOUNDATION

* MZLA is separate from Mozilla CORPORATION, aka Firefox

And both are owned and controlled by Mozilla Foundation, which is the issue. Why on earth would I donate money to an organization that seems dedicated to doing as little as possible other than acting as a tool to be used for the personal benefit of its leaders?

Seems to be some misunderstanding of what bike bells are for here...

A bell is helpful in a situation where a pedestrian is not aware of an approaching bike. The bell informs the pedestrian of two things:

1. That there is an approaching bike.

2. Roughly where the bike is approaching from.

The hope is that the pedestrian will then behave in a predictable way to allow a safe pass by the bike. In almost all cases the pedestrian will be able to simply continue doing what they were doing before they heard the bell.

If a pedestrian cannot hear bike bells, for whatever reason, that is not a problem. They can just stay consistent with the centreline of the path/road/way. They then have a responsibility to shoulder check when shifting from side to side.


Not sure I understand your criticism.

Yes, bike bells are for pedestrians to hear.

Problem: Pedestrians today wear ANC (active noise cancelling) headphones and thus can't hear approaching bikes' bells.

Skoda: We made a bell with a frequency usually not cancelled by ANC, so these pedestrians still hear it.

Sounds reasonable to me.


So this is the exciting paper:

* https://arxiv.org/pdf/2603.28627

The new thing here seems to be the use of the neutral atom technique. Supposedly we are up to 96 entangled qubits for a second or two based on neutral atoms.

Shouldn't that be enough capability to factor 15 using Shor's?


You can save time by first looking at the required noise performance of these schemes. From the abstract of the paper:

>On superconducting architectures with 10^-3 physical error rates...

So good old 0.1% noise performance again. That seems to have come from the "20 million noisy qubits to break RSA" scheme[1] from back in 2019. That level of noise performance is still wildly out of reach and for all we know might be physically impossible.

[1] https://arxiv.org/abs/1905.09749


> That level of noise performance is still wildly out of reach

It's only ~1 order of magnitude away from current capability. Current-gen QCs are around a 1% gate error rate, and a decade ago the SOTA was ~10%, so if progress continues it should be achievable relatively soon.


People don't understand the exponential function.

Let's say you start adding water to a fish tank drop by drop, doubling the number of drops each time: one drop, two, four, eight, and so on. When is the tank half full? One pour before it overflows. And just four pours before that, it's still only about 1/16 full.
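A quick toy calculation makes the point concrete (the 1,000,000-drop capacity is an arbitrary number of my own):

```python
# Drop-doubling into a hypothetical 1,000,000-drop tank: after pour n
# the tank holds 2**n - 1 drops, so it roughly doubles every pour.
capacity = 1_000_000
total, pours, fill_at = 0, 0, []
while total < capacity:
    total += 2 ** pours        # 1, 2, 4, 8, ... drops per pour
    pours += 1
    fill_at.append(total / capacity)

# One pour before overflowing, the tank is only about half full;
# four pours before overflowing, it is only about 1/16 full.
```

Twenty pours in, the tank overflows; almost all of the filling happens in the last handful of pours.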


> [0.1% gate error rate] is still wildly out of reach

This is false. When Fowler et al assumed 0.1% gate error rates would be reached for his estimates in 2012 [0], that was ostentatious. Now it's frankly a bit overly conservative. All the big architectures are approaching or surpassing 0.1% gate error rates.

From 2022 to 2024, the google team improved mean two qubit gate error rate from 0.6% [1] to 0.4% [2]. Quantinuum's Helios has a two qubit gate error rate of 0.08% [3]. IBM has Heron processors available on their cloud service with two qubit gate error rates ranging from 0.2% to 0.7% [4]. Neutral atom machines have demonstrated 0.5% gate error rates [5].

[0]: https://arxiv.org/abs/1208.0928

[1]: fig 1c of https://arxiv.org/pdf/2207.06431

[2]: fig 1b of https://arxiv.org/pdf/2408.13687

[3]: https://arxiv.org/abs/2511.05465

[4]: https://quantum.cloud.ibm.com/computers?processorType=Heron (numbers may vary as the website is not static)

[5]: https://arxiv.org/abs/2304.05420


I can think of a case where it turned out that there was some aspect of the noise performance that made the technology unsuitable for running Shor's algorithm. So would one of the presented low noise approaches actually work for Shor's?


Things like freezers don't take a huge amount of power. It's definitely about things that do space heating/cooling. The traditional approach is to put your electric water heater on a timer. That way you can schedule your hot water use on a consistent schedule but only heat the water at night when you can be sure the rates are lower.


In my case I've got a contactor on the house power board that disconnects the hot water at night, so it's only heating off solar power. You can also get specialised solar diverters that do the same thing, but they cost literally ten times as much and only squeeze a tiny bit of extra efficiency out of the system.

If you do go down this path, make sure you insulate the crap out of the cylinder and surrounding water pipes; mine only heats once a day, and that's when the sun's providing the power for it.


This article inspired me to look and see what this computer is. Apparently it is a "AMD Athlon(tm) II X2 250 Processor" from 2009. So 17 years old. It has 8 GB of DDR3 memory and runs at 3 GHz. It currently has OpenBSD on it, but at least one source thinks it could run Windows 10.

The fact that I didn't know any of this is what is significant here. At some point I stopped caring about this sort of thing. It really doesn't matter any more. Don't get me wrong, I am as nerdy as they come. My first computer was a wire wrapped 8080 based system. That was followed by an also wire wrapped 8086 based system of my own design I used for day to day computing tasks (it ran Forth). If someone like me can get to the point of not caring there is no real reason for anyone else to care.


Your electricity bill alone could justify the cost of a new computer purchase if you're not shutting that down after every session.


65W TDP? Let's say we want to run a PC so we're switching to a newer low-end Ryzen with a 35W TDP and that that's a 30W difference for the whole system. Let's say we're running the system 24/7 and the CPU is pulling its full TDP constantly. Average US residential electricity price is $0.18/kWh.

0.03 kW * 24 h * 365 d * $0.18 = $47.30/year
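The arithmetic checks out; a quick sketch (the 30 W delta and both electricity rates are the assumptions stated in this thread, not measured figures):

```python
# Yearly cost of a constant power delta at a given electricity rate.
def yearly_cost(delta_watts: float, price_per_kwh: float) -> float:
    kwh_per_year = delta_watts / 1000 * 24 * 365   # 262.8 kWh for 30 W
    return kwh_per_year * price_per_kwh

us = yearly_cost(30, 0.18)    # assumed US average rate, USD/kWh -> ~47.30
uk = yearly_cost(30, 0.2769)  # assumed UK price cap, GBP/kWh    -> ~72.77
```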


In the UK, residential electricity tariffs are currently capped by the regulator at 27.69p per kWh, resulting in a total yearly cost of £72.77. Much higher than in the US, but still much cheaper than a new PC.


£72.77 is more than enough for a PC: https://www.ebay.co.uk/itm/377057425659


PC power draw at the wall is different than TDP. Idle power goes to a lot of components.

Even CPU TDP is not an accurate measure. My latest AMD CPU will pull more than its rated "TDP" under certain loads.


Yup. But from the OP, all the information we have is the CPU model, and the GP decided that was enough to say it should be thrown in the trash for power inefficiency, so I thought it was enough for some bad math.

(FWIW, searching for the CPU model brings up an old review where the full system they’re testing pulls 145W under some amount of load. While that’s not nothing, it’s also not outrageous for a desktop PC that does the desktop PC things you require of it.)


On the other hand the original comment was about turning it off when not in use - and idle power consumption should be a lot less than the TDP.


So $50/yr for 4 years gives you ~$200: ~$150 for a decent Lenovo M700 Tiny plus $50 for shipping or whatever, with much better performance and power consumption.


I guess. It's hardly an open-and-shut case of "throw your old computer away!" though, especially when this is a worst-case scenario of running a desktop computer at full blast 24/7 without it ever going into sleep mode or being turned off, and when you don't know what the user's needs are. Maybe a mini-PC with basically no expansion just won't really work for them?


Watts in TDP are not the same as watts drawn at the wall, although both are measures of power.

TDP is a thermal measurement, it's how much heat energy your heatsink and fan need to be able to dissipate to keep the unit within operational temperatures. It does not directly correlate to the amount of electricity consumed in operation.


It’s close enough. Computers mostly make heat with some math as a distant second.


I know, but it should be roughly correlated and only serves as comparison for wildly inaccurate napkin math anyway.


An interesting point. Some random measurement gets 49W idle[1] which is probably close enough. I don't constantly compile stuff or stream video. At my local electricity rate of $0.072/kWh that works out to $31USD/year.

New systems idle at something like 25 Watts according to a lazy search. So 49-25=24W. That works out to $15/year hypothetically saved by going to a newer system. But I live in a cold climate and the heating season is something like half the year. But I only pay something like half as much for gas heat as opposed to electric heat. So let's just knock a quarter off and end up with 15-(15/4)=$11.25USD hypothetically saved per year. I will leave it here as I don't know how much the hypothetical alternative computer would cost and, as already mentioned, I don't care.
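For what it's worth, that napkin math can be reproduced (all figures are the assumptions above: 49 W idle for the old box, 25 W for a new one, $0.072/kWh, and a quarter knocked off for the heating offset):

```python
RATE = 0.072                        # assumed local electricity price, USD/kWh

def idle_cost(watts: float) -> float:
    """Yearly cost of a constant idle draw, running 24/7."""
    return watts / 1000 * 24 * 365 * RATE

old = idle_cost(49)                 # ~31 USD/year for the old system
saved = idle_cost(49 - 25)          # ~15 USD/year saved by a newer one
saved_net = saved * 3 / 4           # ~11 USD/year after the heating offset
```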

[1] https://forums.anandtech.com/threads/athlon-ii-x2-250-vs-ath...


Someone's never tried to locally compile a Rust program. :)


Or C++. I buy fairly fast computers to compile stuff. Generally top of the line desktop hardware because Threadripper isn't as much better as it's more expensive (and annoying to cool!), so the next price point that makes sense is a highly clocked (because single thread also matters) Epyc for like 10k€.

I also do caching and distributed compilation with sccache.


Did any of the components fail over time?

HDD/SSD?

Pretty surprising to have this thing still be working 17 years later, unless it spent a good chunk of that in 'cold storage'.


Hardware has a weird case of either dying pretty quickly or running forever, in my experience.

I think I can count on one hand the total number of drives I've ever had fail - and four of those were in the warranty period, back in the 90s-2000s.


I have two Phenom II x6 - same generation as Athlon II. One desktop and one server.

The server ran non-stop for the first 10 years. Motherboard, a 790, failed and upgraded to 880G. One memory stick failed, replaced by lifetime warranty (Kingston) but the pair I received was slower CL9-10-9 vs. 9-9-9 for the failed one. After 10 years my router and a rk3288 SBC took most of its jobs. I moved most of the hard drives (7x 2TB Seagate ST2000DL and 1 spare) into a DAS (SATA RAID enclosure) connected directly to the router where they are still running. None failed. The server became an offline backup. I started it weekly to sync. Last week I replaced it with a rk3588 ITX board - not because it failed, but because I wanted to explore / play with the new ARM CPU.

The desktop is also still working. I bought it second-hand a few years after the first. It was used at least 4h every evening and at least 10h every weekend. I'm still using it right now. One HDD failed - it was a 120GB PATA Seagate from ~2004 IIRC. No data loss, it was in RAID1. One GPU failed, a GTS 250, upgraded to GTX 970, still working. I'm going to keep using it for at least 5 more years, possibly more. Firefox no longer supports Win7 and I'm in the process of migrating to Linux. Total Commander (I'm a user since Win31) and file associations are holding me back. xdg-open is... absolutely horrible.


Well it has a SSD in it now so it must have gone through at least one actual hard drive...


I sync up my newsboat feed reader with syncthing so I am up to date on multiple devices. I wonder if Thunderbird could be made to work in the same way...


I use newsraft. It's the Donald Trump of rss readers! Quick, portable, easily scriptable. In short... excellent!


I actually got a job for deleting code. I was fixing a problem on a contract and noticed that I could fix the problem by getting rid of the section that contained the problem. The functionality could be provided in a much simpler way. Later the company created a position and I was given first refusal before they interviewed anyone.

It was an 8-bit embedded application in something like 10k of code. When I left I wrote a short and clear explanation of why what I had done was awesome in terms of their future business ... because that is what you have to do if you work contracts. Which is the real message of the article: you have to write things up.


That sounds like good work & a good outcome. I think there may be one ingredient left though to this:

> You have to write things up.

...and someone has to read it who understands the value you brought by deleting complexity.


The unfortunate fact about E2EE messaging is that it is hard to do. Even if you do have reproducible builds, the user is likely to make some critical mistake. What proportion of, say, Signal users actually compare any "safety numbers" for example? There is no reason to worry about software integrity if the system is already insecure due to poor usability.

Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?

