Hacker News | frank_be's comments

Anonymous signup is a smart move: no PII means no breach surface. But for a dead man's switch specifically, anonymity creates an interesting tension.

You want the service to be able to verify you're actually gone (not just on vacation, not just changing phones, not incapacitated temporarily). But you don't want it to know what it's guarding or who you are. Those two goals pull in opposite directions. The more the service knows about you, the better it can distinguish "dead" from "busy." The less it knows, the safer your secrets are.

That's a genuinely hard protocol problem. There are approaches (multi-party verification, tiered escalation, trusted contacts as oracles), but nobody's cracked it cleanly yet. The 12-word anonymous account is a solid starting primitive, though.
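To make the "trusted contacts as oracles" idea concrete, here's a minimal sketch of a k-of-n attestation check: the switch fires only when enough independent contacts confirm the owner is gone, so no single mistaken (or malicious) contact can trigger it. The function name and data shape are my own invention, not anything from the service:

```python
# Hypothetical k-of-n oracle check for a dead man's switch.
# attestations maps a contact id to "has this contact confirmed
# the owner is gone?". Fire only at or above the threshold k.
def should_fire(attestations: dict[str, bool], k: int) -> bool:
    confirmed = sum(1 for v in attestations.values() if v)
    return confirmed >= k

votes = {"alice": True, "bob": True, "carol": False}
print(should_fire(votes, k=2))  # True: 2 of 3 contacts confirmed
```

The threshold is the whole trade-off in one parameter: higher k means fewer false triggers but more ways for the release to stall.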


“There are no solutions, only trade-offs” - Thomas Sowell

We would rather err on the side of not knowing about our customers. But it’s a choice that comes with consequences.

As you mentioned, we mitigate this with:

- Multi-channel verification: We send check-in links to multiple channels at once (email, Telegram, Signal). It’s unlikely you’ll miss all of them accidentally.

- Long escalation times: You can set the grace period up to 3 months - far longer than most vacations and longer than anyone we’ve met has been away from their phone.

- Tiered escalation: We keep sending check-ins until it’s time to alert your contacts.

- Trusted contacts: You can alert a friend earlier than family, and they can warn you that messages are about to go out.
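The four mitigations above compose into a simple state machine driven by time since the last successful check-in. A toy sketch (the stage names and the 7-day warning window are made up for illustration; only the 3-month grace period comes from the comment above):

```python
from datetime import datetime, timedelta

# Illustrative escalation logic, NOT the service's actual code:
# decide what to do based on how long the owner has been silent.
def escalation_stage(last_checkin: datetime, now: datetime,
                     grace: timedelta = timedelta(days=90)) -> str:
    silence = now - last_checkin
    if silence < grace:
        return "keep sending check-ins"    # email, Telegram, Signal
    if silence < grace + timedelta(days=7):
        return "warn trusted contact"      # a friend can still stop it
    return "release messages to contacts"

now = datetime(2025, 6, 1)
print(escalation_stage(now - timedelta(days=30), now))   # keep sending check-ins
print(escalation_stage(now - timedelta(days=92), now))   # warn trusted contact
print(escalation_stage(now - timedelta(days=120), now))  # release messages to contacts
```

The "warn trusted contact" stage is what makes the tiering useful: it inserts a human circuit breaker between "probably gone" and "messages sent".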

Thank you for checking us out! If you’d like to discuss anything else or share more feedback, reach out: https://alcazarsec.com/contact


Multi-channel verification and tiered escalation are the way to go; that's the hard part on the trigger side.

What I keep coming back to with anonymous services is the delivery experience. Once the switch fires, what does the recipient's journey actually look like? Especially if they're non-technical (which, statistically, they probably are).

There's an interesting tension between keeping the service zero-knowledge and making the output usable by someone who's never touched a terminal.

Curious how you're thinking about that side of it.


Here we prioritize ease of use.

The only way to make a message-sending service truly zero-knowledge is to require contacts to upload a public key beforehand and encrypt every message with that key before storage in the database.

Unfortunately, this approach requires contacts to be technical people who understand encryption, keys, and key custody. When you die, you want to reach your family and friends—not only the tech-savvy ones.

So we encrypt messages at rest with a key stored separately from the database. To access any data, an attacker has to compromise two separate infrastructures, and do it before we notice and rotate the keys. When sending the messages, we decrypt them in memory and deliver them in plaintext. That way, your parents don’t need a computer science degree to read your last message.

And if your threat model requires it, you can also use our Portable Secret to password-protect the documents. We provide both options.
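For the password-protected path, the usual pattern is to derive the encryption key from the password with a slow KDF so the service never needs to store anything secret. A minimal stdlib sketch, assuming PBKDF2 (the actual Portable Secret implementation may use a different KDF or parameters):

```python
import hashlib, hmac, secrets

# Derive a 32-byte key from a password with PBKDF2-HMAC-SHA256.
# The salt is public and stored alongside the ciphertext; only
# someone who knows the password can re-derive the key.
def derive_key(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = secrets.token_bytes(16)
key = derive_key("correct horse battery staple", salt)

# Verification is just re-derivation plus a constant-time compare.
assert hmac.compare_digest(key, derive_key("correct horse battery staple", salt))
print("password-derived key:", key.hex()[:16], "...")
```

The recipient's burden drops to "remember one password", which is about as low as a zero-knowledge handoff can go.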


Smart trade-off. The "two separate infrastructures" model is pragmatic — perfect security that nobody can use is no security at all.

The Portable Secret option is a nice touch for the paranoid-but-organized crowd. Do you find most users actually use it, or does the convenience of plaintext delivery win out?


Most users stick to plaintext. That's expected. Under typical threat models, convenience & ease of use outweigh perfect security.


That $120B number is real, but the problem is more subtle than most people think. A USB drive solves "where are my keys", and then creates three new problems: physical destruction (fire, flood, theft), the assumption that your beneficiary knows what to do with it, and the maintenance overhead of keeping it current as you rotate accounts and add wallets.

The hardest design question in this space isn't encryption. It's the human handoff. How do you build something secure enough that a stranger can't crack it, but simple enough that a grieving spouse who's never touched a terminal can actually use it? Most solutions I've seen either punt on that question or assume the recipient is technical. "I secured my keys" vs "my family can actually recover them" is where the real product problem lives imho.


The trust argument for self-hosting is real: I wouldn't hand my master seed phrase to a random SaaS either.

But there's a deep irony in self-hosted dead man's switches: they need to work when you can no longer maintain them. Docker images rot. SSL certs expire. The VPS stops getting paid. Your DNS registrar reclaims the domain. In the 15-year scenario (which is the actual scenario this thing is built for) self-hosted infrastructure has a shelf life that works against its own purpose.

Who watches the watchman? You need infrastructure that outlives you, which almost by definition means trusting something external. The question isn't whether to trust, but how to structure that trust so you're not just handing a third party your keys.


I work as a CI(S)O for a startup. We have lots of freelancers and we have SOC 2. Unless you fake your SOC 2, there are two options: give freelancers a company laptop, or force them to install the agent.

We do two things.

One: we give them the choice. Yes it costs money, but not that much.

Two: we went with Kolide. To understand how they are different, go read https://honest.security.

