It's not that it's so complex that you need all of this; it's about ease of use. Home Assistant OS makes life simpler for users (such as myself): it makes it easy to use add-ons that run as additional Docker containers, and it makes plugging in USB Z-Wave/Zigbee devices a breeze.
While it is technically no longer supported, you can still install the whole kit and caboodle using pip in a Python virtual environment, but why would you?
> You can still install the whole kit and caboodle using pip in a Python virtual environment, but why would you?
This is how I did it, instead of the container or HA OS in a VM.
If you want the simplicity of everything preconfigured, managed, and hands-off, go with HA OS, whether in a VM on a beefier machine, standalone, or the HA Green/Yellow dedicated hardware.
But if you already have a home server and want to add HA, I found just pip installing it to be easier than dealing with the container.
Maybe I'm just the silly type that enjoys fiddling with Linux, but I'd argue that it actually makes more sense to install HA bare metal over a container. HA doesn't actually have any major dependencies outside of what pip installs, so setup wasn't any more annoying than via container. And then you never have to deal with container annoyances like passing hardware through to it or weird failures and misconfigurations.
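For what it's worth, the bare-metal setup really is just a venv and a pip install. A minimal sketch (the `/srv/homeassistant` path and the dedicated `homeassistant` user are illustrative choices, not anything HA requires):

```shell
# Create a dedicated system user so HA doesn't run as root (optional but tidy)
sudo useradd -rm homeassistant

# Create a Python virtual environment and install HA Core into it
sudo -u homeassistant python3 -m venv /srv/homeassistant
sudo -u homeassistant /srv/homeassistant/bin/pip install --upgrade pip wheel
sudo -u homeassistant /srv/homeassistant/bin/pip install homeassistant

# First run of the `hass` entry point creates the config dir and web UI on port 8123
sudo -u homeassistant /srv/homeassistant/bin/hass
```

After that you'd typically wrap the `hass` command in a systemd unit so it starts on boot; upgrades are just `pip install --upgrade homeassistant` inside the venv.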
Contrast this with https://frigate.video/, which has so many fragile native dependencies and a super complex stack that trying to install manually is an exercise in futility. I gave up and used the container.
This is what I've been running on my generic x86-64 system for a couple of years now, 0 issues. Even migrated to a newer system recently because I wanted something that was slightly faster for ESPHome compilations.
"self-installing" being the key point. Those instructions require you to use some other piece of software to write the image onto your boot disk. In my case I used an Ubuntu livecd to download and write the image to the machine. It's obviously not a showstopper but it is slightly annoying.
That's why Matter and Thread are IPv6. You don't need IPv4 at all... and if you run out of IPv6 address space, I'd love to see just how many devices/sensors you have in your home.
What might be a problem is the number of devices that can be connected. For example someone who is using an Xfinity Gateway for their WiFi has a limit of 100 devices on each band (2.4 GHz, 5 GHz, 6 GHz) if they are on an XER10, XB10, or XB8. An XB7 can have 75 on each of 2.4 GHz and 5 GHz. An XB6 can have 30 on 2.4 GHz and 75 on 5 GHz.
I went through the whole blind research rabbit hole and ended up with Smartwings via Amazon. I had looked into a lot of other providers and nothing had a similar combination of reliability, cost, and customer service.
yeah, I think it's one of those annoyingly over-stylized articles that animate/load as you scroll, and Archive doesn't preserve the JavaScript (or whatever) needed to make it look and work right
In my case, FedEx paid the bill to customs, shipped me my item, and then separately sent me a bill for the customs fees after I had already received the item.
They don't want shipments stuck in port because storage there is expensive.
That assumes the server has a lot of spare CPU power to serve the content as HTML (and thus do the templating server-side), whereas with XSLT I can serve the XML plus the XSLT and let the client render the page.
The XSLT can also be served once and then cached for a very long time, and the XML itself can be very small.
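Concretely, the pattern looks something like this (filenames are illustrative): the tiny XML payload points at a cacheable stylesheet via a processing instruction, and the browser renders it as HTML.

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="sensors.xsl"?>
<!-- data.xml: the only thing the server regenerates per request -->
<readings>
  <reading sensor="temp" value="21.5"/>
  <reading sensor="humidity" value="48"/>
</readings>
```

```xml
<!-- sensors.xsl: served once, cached long-term by the client -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/readings">
    <html><body>
      <h1>Sensor readings</h1>
      <ul>
        <xsl:for-each select="reading">
          <li><xsl:value-of select="@sensor"/>: <xsl:value-of select="@value"/></li>
        </xsl:for-each>
      </ul>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```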
With server-side rendering you control the amount of compute you are providing; with client-side rendering you control nothing, and if the app is dog-slow on some devices there's nothing you can do about it.
If this is the reason to remove something from the web (or not add it in the first place), then we should take a good hard look at things like WebSerial/WebBluetooth/WebGPU/Canvas/WebMIDI and other stuff that has been added, is used by a very small percentage of people, and could all contain various security bugs...
If the goal is to reduce security bugs, then we should stop introducing niche features that only make sense when you are trying to have the browser replace the whole OS.
whatever you do with XSLT you can do in a saner way, but for the things we need serial/bluetooth/webgpu/midi for there is no other way; and canvas is massively used.
I'd love to see more powerful HTML templating that'd be able to handle arbitrary XML or JSON inputs, but until we get that, we'll have to make do with XSLT.
For now, there's no alternative that allows serving an XML file with the raw data from e.g. an embedded microcontroller in a way that renders a full website in the browser if desired.
Even more so if you want to support people downloading the data and viewing it from a local file.
If you're OK with the startup cost of 2-3 more files for the viewer bootstrap, you could just fetch the XML data from the microcontroller using JS. I assume the xsl stylesheet is already a separate file.
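The JS-based alternative being described would look roughly like this browser-only sketch (URLs are illustrative), replacing the `<?xml-stylesheet?>` processing instruction with an explicit fetch-and-transform:

```javascript
// Browser-only sketch: fetch the raw XML plus a stylesheet and apply the
// transform in script, instead of letting the browser do it natively.
async function renderReadings() {
  const [xmlText, xslText] = await Promise.all([
    fetch("/data.xml").then(r => r.text()),     // tiny payload from the device
    fetch("/viewer.xsl").then(r => r.text()),   // cacheable viewer stylesheet
  ]);
  const parser = new DOMParser();
  const xmlDoc = parser.parseFromString(xmlText, "application/xml");
  const xslDoc = parser.parseFromString(xslText, "application/xml");
  const proc = new XSLTProcessor();             // the browser's XSLT 1.0 engine
  proc.importStylesheet(xslDoc);
  document.body.replaceChildren(proc.transformToFragment(xmlDoc, document));
}
```

Note this is exactly the "2-3 more files for the viewer bootstrap" cost: an HTML shell and this script, on top of the stylesheet that already exists.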
I don't think anyone is attached to the technology of xslt itself, but to the UX it provides.
Your microcontroller only serves the actual xml data, the xslt is served from a different server somewhere else (e.g., the manufacturer's website). You can download the .xml, double-click it, and it'll get the xslt treatment just the same.
In your example, either the microcontroller would have to serve the entire UI to parse and present the data, or you'd have to navigate to the manufacturer's website, input the URL of your microcontroller, and it'd have to do a CORS fetch to process the data.
Service workers are practically predestined for this kind of resource processing and interception, and they'd provide the same UX.
The service worker would not be associated with any specific origin, but it would still receive the regular lifecycle of events, including a fetch event for every load of an xml document pointing at this specific service worker script.
Of course there is a better way than webserial/bluetooth/webgpu/webmidi: Write actual applications instead of eroding the meaning and user expectations of a web browser. The expectation should not be that the browser can access your hardware directly. That is a much more significant risk for browsers than XSLT could ever be.
Sounds like libxslt needs more than just a small number of fixes, and it sounds like Google could be paying someone, like you, to help provide the necessary guidance and feedback to increase the usability and capabilities of the library and evolve it for the better.
Instead, Google and others just use it and expect any issues that come up to be immediately fixed by the one or two open-source maintainers who happen to work on it in their spare time. The power imbalance must not be lost on you here...
If you wanted to dive into what [3] does, you could do so, you could then document it, or refactor it so that it is more obvious, or remove the compile time flag entirely. There is institutional knowledge everywhere...
or, the downstream users who use it and benefit directly from it could step up, but websites and their users are extremely good at expecting things to just magically keep working especially if they don't pay for it. it was free, so it should be free forever, and someone set it up many moons ago, so it should keep working for many more magically!
// of course we know that, as end-users became the product, Big Tech [sic?] started making sure that users remain dumb.
Browser vendors aren't maintaining the web for free; they are for-profit corporations that have chosen to take on that role for the benefits it provides to them. It's only fair that we demand they also respect the responsibilities that come with it. And we can also point out the hollowness of complaints about the hardship of maintaining the web's legacy when they keep making it harder for independent browser developers by adding tons of new complexity.
Sure, of course, but unless funding is coming from users the economics won't change, because:
The vendors cite one aspect of said responsibility (security!) to get rid of another aspect (costly maintenance of a low-revenue feature).
The web is evolving, there's a ton of things that developers (and website product people, and end-users) want. Of course it comes with a lot of "frivolous" innovation, but that's part of finding the right abstractions/APIs.
(And just to make it clear, I think it's terrible for the web and vendors that ~100% of the funding comes from a shady oligopoly that makes money by selling users - but IMHO this doesn't invalidate the aforementioned resource allocation trade-off.)