muppetman's comments | Hacker News

I did this for years too until mobile devices became popular. I have ~4 mobile phones for various things (yes, this isn't normal) and ~4 different computers/laptops I use. Trying to keep a KeePass vault in sync between them is a nightmare. A proper password manager (Bitwarden or other) removes all that hassle. I have fingerprint unlock on the mobiles that support fingerprint, face unlock on the devices that support that, etc. I have browser addons to make password entry quick and easy while remaining secure.

Once I moved to a password manager I realised how clunky and poor dragging a KeePass vault around was.


Fair enough. I don't use it on mobile (I try to do the fewest things possible on mobile so I manage without a password manager).

But it's not that though. They're hosting an encrypted version that they don't have the keys for. They are doing the backend sync for you, and writing the clients that YOU run, that sync your passwords everywhere.

To suggest they have a copy of your passwords is to misunderstand what they're doing. It's the same as saying that because you host your KeePass vault on Dropbox, Dropbox now has a copy of your passwords/secrets.

The value they are providing is seamless sync between a huge range of platforms/devices and making it as frictionless as possible to enter your password when you need to (biometrics to unlock the vault, browser addons to seamlessly enter the passwords, etc.).

Your Dad has a single point of failure for all his accounts. That's not a win.


All of you keep missing the "something related to it."

They have something that could end up being a juicy point-of-failure that does not need to exist.


I don't see this claim being made anywhere? They say it's usually the time the rent seeking begins, not that it's begun.

I just saw a tab with a Google search for "Zuckerberg nudes" lololol

Yea that threw me too. Very clever.

I love Bitwarden and use it every day, but I pretty much agree with his post too. I have Bitwarden for personal stuff and 1Password for work, and the 1Password experience is night and day better. It's just so good, it always works. Bitwarden sometimes (especially on Android) will just not autofill. On my PC it sometimes won't recognise the domain correctly even though I've got an entry set for "base domain", etc. I am ALWAYS fighting with it to get my passwords out. Look at the Bitwarden subreddit: it's full of similar complaints.

Of course the price difference between 1Password and Bitwarden reflects why 1Password is so much better. And you don't really realise how clunky Bitwarden is if it's all you use, until you also have to use some other password manager.


And I could tell you the opposite about 1Password. About half of the time, the extension does not realise which domain it is on, and autofill is broken.

To each their own (bugs).


Fair point - I've had no issue with it but I certainly don't use it as much as I do Bitwarden.

The Bitwarden Chrome extension is really bad, which is also the reason I've never been able to switch from 1Password to Bitwarden.

Yes, it's terrible. It's where I landed when I migrated from Keepass though so I've stuck with it.

1Password user here and it regularly shits the bed with autofill or recognising a domain.

Not to mention the absolutely garbage performance of the Windows desktop app.


You could have just posted "I didn't read the article" instead of this comment. It specifically addresses vaultwarden quite a number of times.

Not the original commenter. Just thought I would comment here. I'd be super interested in reading more about why Bitwarden Lite is inadequate vs vaultwarden.

Who's looking at a damn fan? My lord. This is like caring what colour the filters in my air conditioner are.

Idiots will have anything marketed to them.


Which is why they focus on getting a fan out before adding a second color.

Calling people idiots for having different taste? You truly are a muppetman.

Wow those Halloween shops really flopped huh?

If only they flapped. Maybe they'd still be in the air.

This annoys me, especially the last “It takes at least 25 years” rhetoric.

It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP. GPRS/HSDPA/3G/4G/5G. They all rolled out just fine and were pretty backwards and forwards compatible with each other.

The whole SLAAC/DHCPv6/RA thing is a total clusterfuck. I’m sure there’s many reasons that’s the case but my god. What does your ISP support? Good luck.

We need IPv6 we really do. But it seems to this day the designers of it took everything good/easy/simple and workable about v4 and threw it out. And then are wondering why v6 uptake is so slow.

If they’d designed something that was easy to understand, not too hard to implement quickly and easily, and solved a tangible problem it’d have taken off like a rocket ship. Instead they expected humans to parse hex, which no one does, and massive long numbers that aren’t easily memorable. Sure they threw that one clever :: hack in there but it hardly opened it up to easy accessibility.
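To be fair, the :: hack is at least mechanical; standard libraries apply it automatically so humans rarely have to do the compression themselves. A minimal sketch with Python's stdlib ipaddress module:

```python
# Illustrative only: the stdlib applies the "::" zero-compression rule
# when printing an IPv6 address, and can expand it back on demand.
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr)           # compressed form, with the longest zero run as "::"
print(addr.exploded)  # full uncompressed form, for comparison
```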

Of course it's easy to moan in hindsight, but the “It’s great, what’s the problem?” tone of this article annoys me.


I was at some of those IETF meetings in the mid-1990s and attended some early IPv6 working group sessions. We knew the conversion would take time, but I don’t think any of us thought it would be this slow. I was involved with multiple L3 switches and routers from 1997 through 2010. The issue was always that IPv6 basically required lots of boxes in the middle to understand it in order to roll it out, so when would it be commercially necessary?

Yes, you can do tunneling and NAT at various points, but it always requires more than just the endpoints. It shows up in DNS and socket APIs. There’s no easy way to determine if a path supports it, and the path can change in an instant due to a route change. All that is very different than SSL or QUIC, where only the endpoints have to be involved. That’s why QUIC uses UDP, for instance, so old intermediate devices just see it as a protocol they already know. SSL just assigned port 443 and the “https” protocol in the web URL. If a web client contacts a server on port 443 that doesn’t use SSL, it just fails.

To put it another way, the level of the stack that you’re changing matters. SSL and QUIC are really L5+. IPv6 is squarely L3. There is no protocol negotiation mechanism available at L3.

So, from a business standpoint, when do you take the hit and integrate it all into the processing pipeline? How do you do that in a way that doesn’t impact your IPv4 forwarding performance, because that’s what the near-term market will judge you on? How do you afford the development and test cost associated with a whole other development (almost double)? If you’re doing software forwarding, the answers are a lot easier. As soon as you’re designing silicon, it’s a lot harder. When you’re under a lot of commercial pressure, it’s difficult to be the one who goes first. And remember that this hardware evolves on roughly 10-year cycles (2 years for design, 3-5 years market sales, 3-5 years depreciation at the customer before they buy new ones).

Oh, and customer rollout of IPv6 is a major project with lots of program management and testing, not just buying a box or two. So, yea, hindsight is easy. Eventually you get there, but it’s a long road.


> It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP.

All that's required to implement each of those is two computers: 1 client and 1 server. Whereas supporting IPv6 requires every router between the two computers to also support IPv6. Similarly, if your current software doesn't support SSL/SSH/Gzip/etc., it's pretty easy to switch to different software, whereas it's hard or impossible for most people to switch ISPs.

> GPRS/HSDPA/3G/4G/5G

Radio spectrum costs providers millions of dollars, and each new cellular protocol increased spectrum efficiency, so upgrading means that providers can support more users with less spectrum. The problem is that most of the "Western" countries still have lots of IPv4 addresses, so there isn't much cost benefit to switching to IPv6. However, China and India both have lots of users and fewer IPv4 addresses, so there is a cost benefit to switching to IPv6 there, and unsurprisingly both of these countries have really high IPv6 adoption rates.


> Instead they expected humans to parse hex, which no one does

Of all aspects of IPv6 you took the only one that doesn't complicate implementations and can easily be swapped if you wanted.


Wait till you’ve got to copy & paste em, or see em commingled with hw addresses


Wait till you find an application that accepts 1.65793 as an IPv4 address. Or 134744072.

  $ ping -c 1   1.65793
  PING 1.65793 (1.1.1.1) 56(84) bytes of data.
  64 bytes from 1.1.1.1: icmp_seq=1 ttl=54 time=1.56 ms
  
  --- 1.65793 ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 1.560/1.560/1.560/0.000 ms
(by the way, this was way less of a dumb peculiarity back when IPv6 was designed)


I damn near have a stroke every time I try to reason about IPv4 addresses as an integer. But hey, I guess four bytes is four bytes no matter how you read them.
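The decoding is at least mechanical: the classic BSD inet_aton parser accepts 1-, 2-, and 3-part numeric forms as well as dotted quads. A small sketch with Python's stdlib socket module (this assumes a platform whose inet_aton implements the legacy forms, as Linux and macOS do):

```python
# Sketch: normalise a legacy numeric IPv4 form to dotted-quad notation.
import socket

def canonical(addr: str) -> str:
    # inet_aton implements the classic parsing rules (a, a.b, a.b.c, a.b.c.d);
    # in the a.b form, b fills the low 24 bits.
    return socket.inet_ntoa(socket.inet_aton(addr))

print(canonical("1.65793"))    # 1.1.1.1  (65793 == 0x010101)
print(canonical("134744072"))  # 8.8.8.8  (134744072 == 0x08080808)
```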


I'm not disagreeing that's a bad aspect of IPv6, I'm just saying that it's not that big of an issue for its adoption.


I think it’s one of many that indicates the underlying issues for its adoption. It’s a 90s technology, not as much thought was given about how it would be used.


> The whole SLAAC/DHCPv6/RA thing is a total clusterfuck.

SLAAC is easily the thing I love most about IPv6. It just works. Routers publish advertisements, clients configure themselves. No DHCP server, no address collisions, no worry. What's bugging you about it?


What problem is this actually solving? I've deployed DHCP countless times in all sorts of environments and its "statefulness" was never an issue. Heck, even with SLAAC there's now DAD making it mildly stateful.

Don't get me wrong, SLAAC also works fine, but is it solving anything important enough to justify sacrificing 64 entire address bits for?


* privacy addresses are great

* deriving additional addresses for specific functions is great (e.g. XLAT464/CLAT)

* you don't get collisions when you lose your DHCP lease database

* as Brian says, DHCP wasn't quite there yet when IPv6 was designed

* ability to proactively change things by sending different RAs (e.g. router or prefix failover, though these don't work as well as one would hope)

* ability to encode mnemonic information into those 64 bits (when configuring addresses statically)

* optimization for the routing layers in assuming prefixes mostly won't be longer than /64

… and probably 20 others that don't come to mind immediately. I didn't even spend seconds thinking about the ones I listed here.


Privacy addresses... Isn't it silly to talk of privacy if the prefix doesn't change?


Absolutely schizo.

"I wish to participate in a global telecommunications network and I wish to connect immediately to all my friends and be available to them 24/7 and I wish to play games with strangers across the country and I wish to receive all my email within 300ms with no spam and I wish to watch the latest news from Iran in 4K streaming Dolby"... but priiiiivacy!


SEND secures NDP by putting a public key into those 64 bits, and also having big sparse networks renders network scanning rather useless at finding vulnerable hosts, so there are reasons to make subnets /64 other than SLAAC.

Also we can always reduce the standard subnet size in 4000::/3 if we ever somehow run out of space in 2000::/3 (and if we don't then we didn't sacrifice anything to use /64s).


DHCP requires explicit configuration; it needs a range that hopefully doesn't conflict with any VPN you use; it needs changes if your range ever gets too small; and it's just another moving part really.

With SLAAC, it's just another implementation detail of the protocol that you usually don't have to even think about, because it just works. That is a clear benefit to me.


When it fails, you find there is no option to tune its behaviour.

Plug in a rogue router and see how quickly you can find it.


What kind of failure are you referring to? What would you want to tune? You can still easily locate all devices on your network.


I like the ability to

  ping somehostname
on the local network and have it work (where ping can be any command or browser). That's easy with DHCP+DNS, and either impossible or amazingly ugly with SLAAC.


It’s a no-brainer with SLAAC and mDNS, which is what pretty much all home routers do out of the box.


That's an extra service or two running on every device with extra configuration, and... Maybe it's more reliable now? I vaguely recall having a bad time.

What does the router do out of the box, or at all, for mdns? Isn't it a p2p service?


> It didn’t take 25 years for SSL.

It wasn't even on the map until 1994. Prior to that it was an ad-hoc mess of "encryption" standards. It wasn't even important enough to become ubiquitous until Firesheep existed.

Even then SSL just incorporated a bunch of things that already existed into an extensible agreement protocol, which, in the long run, due to middleware machines, became inextensible and the protocol somewhat inelegant for its task. 30 years later and it's due for a replacement but we're stuck with it. Perhaps slow adoption isn't a metric that portends doom.


I think most of the web wasn't encrypted by default until letsencrypt came on the scene just over a decade ago. (I remember a few "free cert" offerings that were entirely manual, and cost you $200 if you wanted to revoke a cert)

It's firmly the default now, and very odd if a site doesn't default to https.


> What does your ISP support?

My ISP is Spectrum. They get a 0/10 on IPv6 support on this test page [1].

[1] https://test-ipv6.com


Is it possible that you own your own router and have at some point configured the router to turn IPv6 off? I know it is turned off on my router because I had some issues with Verizon IPv6 and TP-Link in the past.


Good idea–on my list of to-check items.


FWIW, I'm also on Spectrum (by virtue of the Time Warner acquisition back in the day) and I get 10/10 on that page. That is, after turning off Firefox "Enhanced Tracking Protection", which actually blocked the page from loading at all for some reason. Got 9/10 using Chrome. Both on Linux.


> It didn’t take 25 years for SSL. SSH. Gzip encoding on HTTP pages. QUIC. Web to replace NNTP. GPRS/HSDPA/3G/4G/5G They all rolled out just fine and were pretty backwards and forwards compatible with each other.

You're comparing incremental rollout with migratory rollout for most of these (not the mobile phone standards). That's apples and oranges.

You can argue for other proposals. But at the end of the day the best you could've done is steal bits from TCP and UDP port numbers, which is... NAT. Other than that if you want to make a serious claim you need to do the work (or find and understand other people's work. It's not that people haven't tried before. They just failed.)

And, ultimately, this is quite close to typical political problems. Unpopular choices have to be made, for the benefit of all, but people don't like them especially in the short term so they don't get voted for.

> If they’d designed something that was easy to understand, […]

I can't argue on this since it's been far too long since I had to begin understanding IPv4 or IPv6… bane of experience, I guess.

> […] not too hard to implement quickly and easily, […]

As someone actually writing code for routers, IPv6 is easier in quite a few regards, especially link-local addresses make life so much easier. (Yet they're also a frequent point of hate. I absolutely cannot agree with that based on personal experience, like, it's not even within my window of possible opinions.)

> […] expected humans to parse hex […]

You're assuming hex is worse than decimal with binary ranges. Why? Of course it's clear to you that the numbers go to 256 because you're a tech person. But if you know that, you very likely also know hex. (And I'll claim the disjunct sets between these are the same size magnitude.)

Anyway I think I've bulletpointed enough, there's arguments to be made, and they have been made 25 years ago, and 20 years ago, and 15 years ago, and 10 years ago and 5 years ago.

Please, just stop. The herd is moving. If anything had enough sway, it would've had enough sway 15 years ago. Learn some IPv6. There's cool things in there. For example, did you know you can "ping ff02::1%eth0"?


How do you encode 128 bits without making a long number, and without using hex?


Far easier to use ipv8, which just has 5 octets instead of 4.


That still means replacing every part of the chain.


There are lots of legacy things in TCP/IP headers. One of them can be for the extra octet.

When IPv4 legacy flies around, that octet will be null or 0. The entire internet could route just fine, especially if you put the extra octet at the end. 1.1.1.1 gets an extra 1.1.1.1.newoctet.

So every existing IP gets a bonus 255 new IPs, and for now, routing of those is hardlocked to that IP, and it works with all legacy gear.

In 30 years or something, we can care about the mobility of those new IPs.


Pray tell me exactly where in the IP packet you put those extra octets. In a way that it affects zero other devices?


You're at the very beginning, baby steps stage of inventing IPv6 there.

You aren't the first person to come up with the idea of adding extra bits to IP addresses to make them longer. The problem isn't finding somewhere to stash the extra bits in the packet format (which is trivial; you can simply set the next-protocol field to a special value and then put the bits at the start of the payload), it's getting all software to use those extra bits -- and getting that to work requires doing all of the new AF family, new sockaddr struct, new DNS records, dual stack/translation/tunnels etc etc that v6 does.

Please consider that maybe the people working on v6 weren't actually complete imbeciles and did in fact think things through.


> Please consider that maybe the people working on v6 weren't actually complete imbeciles and did in fact think things through.

It is possible for the world to change, and for designs and plans and viewpoints 30+ years ago to be less correct today.

This world is not that world. That world had massive concerns about the processing cost of NAT. That was one reason for ipv6. It also had different ideas about where the net would go. We now know that the "internet of things" and "having your fridge online", as well as "5G in everything so people can't firewall it off" is just insane and malign.

We also know that tying an IP address to a person (compared to an ISP using NAT) reduces privacy. That devious and devilish actors abound.

Even though they thought these things might be neat, many of them aren't.


None of that has anything to do with what you said in the post I replied to. "Add an extra octet to v4 addresses" has hard technical barriers to deal with if you want it to work, regardless of what the world looks like or what you're designing for.

> We now know that the "internet of things" and "having your fridge online", as well as "5G in everything so people can't firewall it off" is just insane and malign

None of this is really relevant either. IP's job is to handle the addressing used when sending data over the Internet, and it should do this job well regardless of what people end up doing with it.

> We also know that tying an IP address to a person (compared to an ISP using NAT) reduces privacy

We don't tie IP addresses to people. PI allocations might sort of count, but regular users don't get those.


> None of that has anything to do with what you said in the post I replied to.

Of course not, why would it? I quoted what I was replying to, and all of my comments made perfect sense in that context. In that context, I was discussing the original design considerations of IPv6, the winning proposal, and yes, "IPs for everything" was one of them, hence me talking about it.


I intended the quoted part to mean something like "they did consider adding extra octets to v4 addresses and setting those octets to zero to mean v4".

It's not like they weren't able to come up with that idea. It's just that if you follow that train of thought through to its conclusion, you'll either decide it can't work or you'll make enough changes to end up with something that works basically the same way v6 does.

But yes, having enough IPs for everything was obviously a design goal. It would be excessively silly to go through all the work to increase the address size and not increase it by enough to handle whatever people ended up wanting to do with it.


> That world had massive concerns about the processing cost of NAT

The processing cost of NAT is still a problem. There's that classic post by a Native American tribal ISP where it was cheaper for them to pay to replace their clients IPv4-only Roku devices with IPv6 capable Apple TVs than to upgrade their CGNAT appliance to handle the video traffic.


You misunderstand.

The concerns about the "processing cost of NAT" were edge concerns. Companies, homes, edge-devices with 100 or 1000 RFC1918 addressed devices behind them. When ipv6 was created, NAT wasn't a thing, as processing power just wasn't there.

And it was thought the processing power would never be there.

Yet now everyone has NAT in little devices at home. So the need to route 100 IPs into every person's home isn't a thing. Which is inline with my comment about how the world looked different 30 years ago, and how the concept of "IPs for everything" is the reverse of what people even want now.


We have that variant of IPv8, it's what CGNAT gives you, especially if you run MAP-E or MAP-T (which are technically not quite NAT, but kinda are, it's… complicated). You take some bits from the port number and essentially repurpose them into part of the address.

It's a nice band-aid technology, no less and no more.


Have that be the invisible bottom layer. Come up with a list of 256 common words, one per byte, and have that be the human-visible IP address. Mentally reading a string of words, however nonsensical, is way easier than a soup of undifferentiated hex digits.
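That scheme is easy to prototype. A minimal Python sketch, where the 256-word vocabulary is a placeholder (a real list would use short, phonetically distinct words, much as PGP's word lists do):

```python
# Sketch of the word-per-byte idea. WORDS is a placeholder vocabulary;
# any 256 distinct words give a reversible encoding.
WORDS = [f"word{i:03d}" for i in range(256)]
INDEX = {w: i for i, w in enumerate(WORDS)}

def to_words(addr_bytes: bytes) -> str:
    """Render each byte of an address as one word."""
    return "-".join(WORDS[b] for b in addr_bytes)

def from_words(text: str) -> bytes:
    """Invert to_words."""
    return bytes(INDEX[w] for w in text.split("-"))

ipv4 = bytes([192, 0, 2, 1])  # 192.0.2.1, a documentation address
assert from_words(to_words(ipv4)) == ipv4  # round-trips
```

Note the catch the replies raise: a 128-bit IPv6 address needs 16 words under this scheme, and the vocabulary is inherently language-specific.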


Easier if you’re a native English speaker. Harder if you’re not.

My only gripe with IPv6 addresses is they look too similar to MAC addresses. But as a representation, I think they’re absolutely fine.


fair point about native english speakers, but there's also no reason this scheme can't be localised


That would cause worse confusion when working with teams from different localisations. Not to mention the complexity of now adding localisations to the address parser.


Yeah the at least 25 years thing is a cop out. The IPng committee specifically chose the protocol that didn't have a transition plan, and today still doesn't have a transition plan.

I expect we're going to plateau with adoption for a long while now. 50% adoption is meaningless if it doesn't tangibly make a dent in the IPv4 exhaustion problem.


Well, other than the transition plans that it has and still has. The exact same plans that the other options like TUBA had.

If you ignore those then sure, it didn't have a plan.


Stomping your foot angrily at ISPs and Internet-facing entities to adopt a protocol no one cares about, and/or getting governments to intervene because you've exhausted all your options and progress is stagnant, is not a transition plan; that's a hail mary.


If you can't enforce a flag day then that's all you're left with, isn't it? Other than maybe hacking into people's networks, upgrading them and then somehow preventing them from undoing your work.

