It's important to point out that these programs have two different objectives.
The Mullvad client is designed to connect to a closed-source service run by someone else. It supports a number of plugins, including OpenVPN and WireGuard, so it could probably adopt Rosenpass, at least via its WG plugin.
WireGuard is designed for minimal protocol variability, high-assurance implementations, and ultra-small code size. It's used by VPN services, but also by end users creating their own tunnels.
True. But they should have just forked either wireguard-go or boringtun to implement this functionality and given up on the wg driver; the WG author doesn't seem to care about PQC. Juggling multiple tools is always a hassle.
It's also not clear how the WG PSK change is coordinated, and whether that entails a brief loss of connectivity (packet loss, a latency spike).
They maintain separate pre-quantum and post-quantum peers so that connectivity isn't interrupted; each pre-quantum peer is implicitly paired with a corresponding post-quantum one. Negotiating the PSK happens over a gRPC API they expose at `10.64.0.1:1337`. The spec is public, if you're curious: https://github.com/mullvad/mullvadvpn-app/blob/main/talpid-t...
If you're a fuddy-duddy like me who uses the vanilla WireGuard config files, I wrote a tool to upgrade your pre-quantum peer to a post-quantum one. https://github.com/d-z-m/pq-adapter-mullvad
I'm intentionally not using Kyber; the key XOR only happens if you elect to use both.
It works just fine with McEliece only.
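To make the XOR point concrete, here's a hedged Python sketch of that kind of hybrid combiner (illustration only, not the tool's actual code; the function name and shape are mine):

```python
# Hypothetical sketch of a hybrid-KEM key combiner; not the tool's actual code.
# With both a classical and a post-quantum shared secret present, XOR them:
# the result stays secret as long as either input does. With only one secret
# (e.g. McEliece only), that secret is used directly.

def combine_secrets(classical, post_quantum):
    secrets = [s for s in (classical, post_quantum) if s is not None]
    if not secrets:
        raise ValueError("need at least one shared secret")
    if len(secrets) == 1:
        return secrets[0]  # single-KEM case: no XOR happens
    a, b = secrets
    if len(a) != len(b):
        raise ValueError("secrets must be equal length to XOR")
    return bytes(x ^ y for x, y in zip(a, b))
```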
> You also don't need Go
You don't need any language in particular. That's the beauty of the .proto spec: you can generate client (and server) code in whatever language you want (that protoc supports).
Mate, you could just read the code…or give it a try ;)
> the WG author seems like he doesn't care about PQC
This is plainly not true; WG supports post-quantum security via the PSK mechanism, just as we do.
PQ crypto is high quality, but it is also new and fairly inefficient; it's not a good thing to integrate into the kernel directly. Using the PSK mechanism is the best way to do this that I know of at this point in time.
> It's also not clear how the WG PSK change is coordinated, and whether that entails a brief loss of connectivity - packet loss, latency spike.
WireGuard establishes a session with the existing PSK; we replace the PSK every two minutes but WireGuard keeps its established session around until it renegotiates a session.
Both WG and RP rekey their session every two minutes; there is no interruption.
So is the Rosenpass tunnel separate from the non-PQC tunnel (the non-PQC tunnel being used just for Rosenpass)?
Because AFAIK, the moment the PSK is changed, all packets immediately start being encrypted with it.
If the change doesn't coincide on both the sender and the receiver (within an instant), there will be dropped packets until both PSKs match again. Being separate from WG, I don't see how you can insert yourself into its state machine for better coordination.
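For illustration, one generic way a receiver can tolerate an uncoordinated rotation is to keep the previous PSK valid briefly and try both on decryption. This is only a sketch of that idea, not how WireGuard handles it (WG keeps the established session's keys until the next handshake):

```python
# Generic key-rotation grace window (illustration only; not WireGuard's
# mechanism). The receiver tries the current key first, then the previous
# one, so packets sent under the old key during a changeover still decrypt.

def try_decrypt(packet, current_key, previous_key, decrypt):
    """decrypt(key, packet) returns plaintext or raises ValueError."""
    for key in (current_key, previous_key):
        if key is None:
            continue
        try:
            return decrypt(key, packet)
        except ValueError:
            pass  # wrong key; fall through to the other one
    raise ValueError("packet matched neither key")
```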
There's another reason why USB-to-parallel adaptors can't replace a PC with a printer port: the latency for round-trip applications is significantly higher over USB. It takes a few microseconds to read/write a byte from a PC printer port. Over USB, the control message latency leads to this taking tens of milliseconds. USB beats it on throughput, but only when transferring lots of data.
Before they disappeared, printer ports on motherboards got steadily worse for latency and compatibility. This was likely cost reduction, since low-latency two-way communication wasn't needed for printing.
There was an interface for old Commodore floppy drives that is just remapped pins on the printer port[1]. When PCs circa 2005 stopped working with it, I designed a USB microcontroller board to implement the protocol[2]. It had to implement some fancy state machines to get around the round-trip problem, caching a set of commands until the host was ready for the transfer. Then it would send them all back-to-back and start streaming back the bulk data. Fun stuff.
> Over USB, the control message latency leads to this taking tens of milliseconds.
This sounds an order of magnitude off. I have just set up a loopback with a CH340 USB-to-serial adapter and run the following code:
#!/usr/bin/python3
import serial
import time

ser = serial.Serial(port="/dev/ttyUSB0", baudrate=1_000_000, timeout=1)
iters = 100
x = time.time()
for i in range(iters):
    towrite = b"%i\n" % i
    ser.write(towrite)
    line = ser.readline()
    assert line == towrite
delta_ms = (time.time() - x) * 1000
print("Finished %i iterations in %ims = %.1fms/iteration" % (iters, delta_ms, delta_ms / iters))
and it says Finished 100 iterations in 272ms = 2.7ms/iteration
The slowness was due to a hardware bug in the 6522 VIA chip: the shift register (FIFO) would lock up randomly. Since this couldn't be fixed before the floppy drive had to ship, they had the 6502 CPU bit-bang the IEC protocol instead, which was slower. The hardware design of the 154x floppy drive was otherwise fine, and some clever software tricks let stock hardware stream data back to the C64 and decode the GCR at the full media rate.
Yeah, the ISP I founded in 1995 (elite.net) was a PM2ER for both dialup and routing with a Pentium 90 as the shell & web server. We quickly hit the 30 line limit and went up to the PRI-based Portmaster models. Fun and exciting times, just bringing a rural community online for the first time ever.
These typically work by changing the media bit encoding to be easier to process with 6502 instructions instead of a lookup table. Others simplify the table and use more RAM so that fewer lookups are needed. I haven’t looked at how Transwarp works yet but it seems like the latter.
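For the curious, the stock encoding these loaders speed up is Commodore's 4-bit-to-5-bit GCR. Here's a minimal encode/decode sketch using what I believe is the standard 1541 code table (the helper names are mine):

```python
# Commodore 4-bit-to-5-bit GCR code table (standard 1541 values, as I
# understand it): each data nibble maps to a 5-bit code chosen so that
# runs of zero bits stay short enough for the drive electronics.
GCR = [0x0A, 0x0B, 0x12, 0x13, 0x0E, 0x0F, 0x16, 0x17,
       0x09, 0x19, 0x1A, 0x1B, 0x0D, 0x1D, 0x1E, 0x15]
GCR_INV = {code: nibble for nibble, code in enumerate(GCR)}

def gcr_encode(data):
    """Encode each byte as two 5-bit GCR codes (high nibble first)."""
    out = []
    for b in data:
        out.append(GCR[b >> 4])
        out.append(GCR[b & 0x0F])
    return out

def gcr_decode(codes):
    """Decode pairs of 5-bit GCR codes back into bytes."""
    it = iter(codes)
    return bytes((GCR_INV[hi] << 4) | GCR_INV[lo] for hi, lo in zip(it, it))
```

The table-lookup version above is what a PC does trivially; the fast loaders exist because doing this per-bit-pair on a 1 MHz 6502, in real time, is the hard part.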
Browsers can generate machine code and make it executable (e.g., for V8's JIT), and so can any other app. So they could download it and put it into newly allocated executable memory pages.
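A minimal sketch of that trick from Python, assuming x86-64 Linux (the machine-code bytes and mmap flags are platform-specific, and real JITs typically avoid leaving a page writable and executable at once):

```python
import ctypes
import mmap

# x86-64 machine code for: mov eax, 42; ret
CODE = b"\xb8\x2a\x00\x00\x00\xc3"

# Allocate an anonymous page that is readable, writable, and executable
# (Unix-specific mmap flags).
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(CODE)

# Build a C function pointer to the start of the page and call it.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
result = func()
```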
Mosh uses AES-OCB (and has since 2011), and we found this bug when we tried to switch over to the OpenSSL implementation (away from our own ocb.cc taken from the original authors) and Launchpad ran it through our CI testsuite as part of the Mosh dev PPA build for i686 Ubuntu. (It wasn't caught by GitHub Actions because it only happens on 32-bit x86.) https://github.com/mobile-shell/mosh/issues/1174 for more.
So I would say (a) OCB is widely used, at least by the ~million Mosh users on various platforms, and (b) this episode somewhat reinforces my (perhaps already excessive) paranoia about depending on other people's code and the blast radius of even well-meaning pull requests. (We really wanted to switch to the OpenSSL implementation rather than shipping our own, in part because ours depended on some OpenSSL AES primitives that OpenSSL recently deprecated for external users.)
Maybe one lesson here is that many people believe in the benefits of unit tests for their own code, but we're not as thorough or experienced in writing acceptance tests for our dependencies.
Mosh got lucky this time: we had pretty good tests that exercised the library enough to find this bug, and we run them as part of the package build. But it's not that far-fetched to imagine users on a platform we don't build a package for (and therefore don't run our testsuite on).
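One cheap form of that acceptance testing is a known-answer test pinned in your own suite. An illustrative sketch using a stdlib primitive (published SHA-256 vectors) rather than OCB:

```python
import hashlib

# Known-answer test for a dependency: pin published test vectors for the
# primitive you rely on and run them at build time, so a swapped or
# miscompiled implementation fails loudly. These SHA-256 vectors are from
# the FIPS 180 test suite.
KNOWN_VECTORS = [
    (b"", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
    (b"abc", "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
]

def check_dependency():
    for message, expected in KNOWN_VECTORS:
        if hashlib.sha256(message).hexdigest() != expected:
            raise RuntimeError("dependency failed known-answer test")
    return True
```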
As a non-crypto-nerd: How viable is it to make a “safe” OpenSSL, which just doesn’t support all the cipher modes (?) that the HN crowd would mock me for accidentally using?
The modes of operation aren't the main reason people use OpenSSL; it's the support for all the gnarly (and less gnarly) protocols and wire formats that show up when doing applied cryptography.
Progress is being made on replacing OpenSSL in a lot of contexts (specifically, the RustCrypto[1] folks are doing excellent work and so is cryptography[2]), but there are still plenty of areas where OpenSSL is needed to compose the mostly algebraic cryptography with the right wire format.
Edit: I forgot to mention rustls[3], which uses ring[4] under the hood.
Sun did have a firewall by the early '90s. It had application-level proxies, and you'd have to configure applications to bounce through it if you wanted to get to the Internet. In many ways, this was more secure than today's default for firewalls, where you can make any outbound connection you want but only inbound connections are filtered.
Note that I'm not arguing that Sun was a leader in security, but they did make some efforts that other companies didn't.
https://www.govexec.com/federal-news/1999/02/postal-service-...