Yes, it is, but after that, it stays installed and runs before you open the web app the next time. In effect, this makes it trust-on-first-use, just as if you'd installed a desktop app which then had some secure update mechanism.
Yes, it can, but whenever the Service Worker's code changes, the user is warned. There's an "updatefound" event [1] which both the web page and the old Service Worker get. I wrote a blog post with more background info at [2].
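The page-side check might look roughly like this. This is only a sketch, not Airborn OS's actual code; the `/sw.js` path and the warning text are placeholders:

```javascript
// Page-side sketch: watch for Service Worker updates.
// '/sw.js' and the warning message are illustrative placeholders.
function watchForUpdates(registration, onUpdate) {
  // "updatefound" fires when a new worker starts installing.
  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', () => {
      if (newWorker.state === 'installed') {
        // New code has been fetched; warn the user before trusting it.
        onUpdate(newWorker);
      }
    });
  });
}

// Only register in a browser context that supports Service Workers.
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then((reg) => {
    watchForUpdates(reg, () => alert('The app code has changed.'));
  });
}
```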
Of course, it would be better to only warn the user about a Service Worker change when it doesn't match the version on GitHub, but that's blocked on [3].
Furthermore, there are some very edge-case situations where the Service Worker can update when Airborn OS is not open or not visible (e.g., in a hidden iframe [4]). That is why, when you register and check "Notify me before updating Airborn OS", it asks you for permission to send you notifications. Those notifications are currently only used to warn you when the Service Worker updates.
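A sketch of how asking for permission and warning via a notification could fit together, assuming the standard Notification and Service Worker registration APIs; the function name and notification text are made up for illustration:

```javascript
// Sketch: request notification permission at registration time, and use
// notifications only to warn about Service Worker updates (even when the
// page isn't visible). Names and wording are illustrative, not the
// actual Airborn OS code.
async function enableUpdateWarnings(registration) {
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return false;
  registration.addEventListener('updatefound', () => {
    registration.showNotification('Airborn OS update', {
      body: 'The Service Worker code has changed.',
    });
  });
  return true;
}
```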
The concept of Transparent Web Apps (TWA) is an interesting one - more so than Airborn OS itself. Any chance of packaging it (e.g. as a library, or as a SaaS) to promote adoption? That could enable more TWAs, and perhaps let you pivot to become the TWA hub/directory.
Yes, I'm planning to make a library for it. It won't be quite install-and-forget, because the developer has to 1) push to GitHub or another public log before every deploy and 2) somehow let the Service Worker know where to find the latest version on GitHub. But it's definitely possible.
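One way step 2 could work is to compare a commit hash embedded at build time against the latest commit on GitHub's API. This is purely a sketch under my own assumptions - the `DEPLOYED_COMMIT` constant, the repo coordinates, and the idea of injecting the hash at build time are all hypothetical, not part of any actual library:

```javascript
// Sketch: check a build-time commit hash against GitHub's commits API.
// DEPLOYED_COMMIT is a hypothetical constant injected at build time.
const DEPLOYED_COMMIT = 'deadbeef'; // placeholder value

async function latestCommitSha(owner, repo, branch) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/commits/${branch}`
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const data = await res.json();
  return data.sha; // SHA of the latest commit on that branch
}

function matchesDeploy(deployedSha, githubSha) {
  // The actual comparison is just a string equality check.
  return deployedSha === githubSha;
}
```

The Service Worker would then refuse (or warn) when `matchesDeploy` returns false for the currently cached code.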
A TWA hub sounds interesting. I think that if you have a list of web apps that
1. are a Transparent Web App
2. have had a security audit
you could then add some UI in the browser that says "this web app keeps your data private". That would be useful not just for apps that use client-side encryption, but also for very simple web apps, like, say, word counters. It's very useful for users to know whether the word counter sends their data to the server or not.
Of course, step 2 would be quite expensive, although for simple web apps it would be manageable. It would have to be financed by either the web apps themselves, or some big entity like Mozilla (which for years has had volunteers manually check browser extensions for things like this, too).
The JavaScript trust issue will always be a concern when WebCrypto is used, and WebCrypto is a web API expected to be made broadly available across browsers (desktop and mobile). If the concept of TWA can somehow be made easy for developers to adopt, and you pivot to handle/streamline the heavy lifting, I think you may have a much more lucrative niche. Think of this as VeriSign for TWA. Wouldn't that be cool? Someone (i.e. you or Mozilla) should do this. This really promotes the open web.
> Yes, it can, but whenever the Service Worker's code changes, the user is warned. There's an "updatefound" event [1] which both the web page and the old Service Worker get. I wrote a blog post with more background info at [2].
Unless this is an extremely rare event, a hacker could easily piggyback on a recent change and inject a worker that does not alert on "updatefound".
If the service is hacked, and the "updatefound" alert isn't a very rare event, users will not validate every single update against GitHub; a casual check would just see yet another update, which lets the attacker push their code alongside a legitimate one. A good portion of users will run the infected code before the service is shut down.
I never said it was easy. I said it was possible and doable.
First, you said "easily", so I thought it must be effortless. Second, only paranoids and security-conscious individuals will ever validate against GitHub. Your average Joe really doesn't care. I do however agree with you that security must be hardened as much as possible, but your hypothetical case is weak. It applies to any software that auto-updates. Browser? OS? Let me give you another one: what if a satellite flies over your head and captures your password as you type? Oh, and how about your CPU? Should we start making our own hardware now? You know... can't trust anyone after all.
Yes, I did and was wrong. Easy does not mean effortless and I also meant it in a context where the machine was already compromised, but didn't make that clear.
> only paranoids and security conscious individuals will ever validate against GitHub.
The OP expects this to happen anytime a user gets a message saying the code is updated. I agree that no one will do it and it'll become a click-through.
> but your hypothetical case is weak. It will work for any software that auto update. Browser? OS?
I state in another post that you are always subject to this issue. However, when running local software, and not at the mercy of someone else's computer/server, you have the ability to choose when and how you update.
I can also validate all code against signatures and public logs before running it, which is not something that can be done with service workers or any website in general. twiss says as much themselves: https://github.com/w3c/ServiceWorker/issues/1208
And yes, there is normally an implicit assumption that the hardware is not spying on you simply because there are no alternatives.
It really comes down to how often I need to validate my trust and how easy it is to do so.
Web apps, even under the best conditions, make it hard to pull all the source into external files for examination, and almost impossible to do so before executing it, barring using tools like curl or wget and running the JS yourself to figure out what else needs to be downloaded. Not to mention that this needs to be done every time I access the app.
With a traditional app or OS, I could (not that anyone does) verify the code (which for F/OSS is easy to obtain) before compiling it myself. (Where I'm trusting the compiler, yes.)
I'm not trying to argue that there is a perfect method. I'm arguing that this application isn't even in the running for a good, fairly trustable method. I don't believe it solves the fundamental issue of having to trust the code download every single time, regardless of the Service Worker check, because that check itself is not protected. Well, protected by the vigilance of the end user, which is where we started anyway.
What if the hacker changes the HTML file to point to their evil version of the JavaScript file without ever triggering the Service Worker's cache? Will that defeat the security feature?
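For context: a Service Worker can intercept every request, including the navigation request for the HTML page itself, so a modified HTML file on the server never reaches the browser. A minimal cache-first sketch, where the cache name is a placeholder and nothing here is the actual Airborn OS worker:

```javascript
// Worker-side sketch: answer every fetch (HTML included) from the cache,
// so a compromised server can't hand out a changed page or script.
// The cache name is a placeholder.
const CACHE = 'app-v1';

function cacheFirst(request, cacheStorage) {
  // Serve from the verified cache; never fall through to the network
  // for app code. Reject if the resource was never cached.
  return cacheStorage.open(CACHE).then((cache) =>
    cache.match(request).then(
      (hit) => hit || Promise.reject(new Error('not cached'))
    )
  );
}

// Only hook the fetch event inside a real Service Worker context.
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('fetch', (event) => {
    event.respondWith(cacheFirst(event.request, caches));
  });
}
```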
Oh yes, as if it was an offline app once initiated. Interesting. You should really make this more broadly available and perhaps build a service around it. If you can make the security sound, then I'm sold. Perhaps it solves the trust issue of JavaScript. For the fun of it, maybe throw in some blockchain tech or something to enhance that trust model of yours. Would love to see how this gets developed.
That would move the trust from GitHub to that public log. However, GitHub provides us not just the "publicly verifiable update" part, but also the "authenticated update" part. In other words, how do you know that the person putting something in a blockchain is the owner of the website? You'd need a public key, and then not lose it, etc. But it's indeed possible.
This is a really interesting train of thought that I wasn't familiar with. Thanks for pointing out Binary Transparency, and I'll be eager to see where your product goes!
How does that work? Isn't a service worker started by plain JS code?