We had a SCSI Zip drive at our uni and it was a brilliant way to drag megabytes of content home. Even though I had amazing internet (2 Mbit shared by 100+ people), the Zip drive was still a good way of getting stuff home.
Then I got to experience the click of death; around the same time the internet connection was bumped to 100 Mbit, so I didn't need to replace my Zip drive.
You have to deal with a lot more stuff. You have to order and pay for a server (capex), mount it somewhere, wire up lights-out management and recovery, and do a few more tasks that a provider would already have done.
Then, if say the motherboard gives up, you have to do quite a bit of work to get it replaced; you might be down for hours or even days.
For a single server I don't think it makes sense. For 8 servers, maybe. Depends on the opportunity cost.
Have you done this yourself? If you haven't, I think you'd discover server hardware is actually shockingly reliable. You can go years without needing to physically touch a single machine. I find that people who are used to the cloud assume stuff is breaking all the time. That's true at scale, but with a handful of machines you can go a very long time between failures.
Yes, having done this for decades, it happens often enough that you need to plan for it. You need to have redundancy, spare parts, and staffing or you are basically gambling. All of this has to be tested, too, or you might find that your failover mechanism has dependencies you didn’t plan for or unexpected failure modes (I’ve twice experienced data center hard outages due to the power distribution system failing oddly when switching between mains and UPS power, or UPS and generator).
Using something like AWS can make it easy to assume that servers don't fail often, but that's because the major players have all of that handled behind the scenes, heavily tested, and will migrate VMs when prefail indicators trigger, before anything actually goes down.
If you have some kind of failover redundancy for services across your systems to mitigate this, great; with a proper setup there are no worries. I guess it depends how much you want to take on versus hand off.
MoE is excellent for unified-memory inference hardware like the DGX Spark, Apple's Mac Studio, etc. A large memory pool means you can hold quite a few billion parameters, and the smaller active experts keep those tokens flowing fast.
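The arithmetic behind that fit can be sketched in a few lines. Everything below (expert count, sizes, top-k routing) is made up for illustration, not taken from any real model:

```python
def moe_params(n_experts: int, expert_params: float,
               shared_params: float, top_k: int) -> tuple[float, float]:
    """Return (total, active) parameter counts for a sparse MoE model.

    total  drives the memory footprint (what unified memory must hold)
    active drives per-token compute/bandwidth (what each token touches)
    """
    total = n_experts * expert_params + shared_params
    active = top_k * expert_params + shared_params
    return total, active

# Made-up example: 64 experts of 2B params each, 10B shared, top-4 routing
total, active = moe_params(64, 2e9, 10e9, 4)
print(f"resident: {total / 1e9:.0f}B params, active per token: {active / 1e9:.0f}B")
```

With these invented numbers, a box with lots of unified memory holds all 138B parameters, but each token only reads the ~18B active ones, which is why modest memory bandwidth can still produce fast tokens.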
I feel like this captures the point very well. Google removing this software means that for 99% of the users on the platform, the choice to play this is taken away from them.
I find it interesting that these pieces of software are now being inspired by Midnight Commander and are being built by people who never worked with, or even experienced, the original Norton Commander.
The whole point of Varnish Software keeping a public version of "vinyl cache" as "varnish cache" with TLS is to give people a way to access a FOSS version with native TLS.
I think TLS is table-stakes now, and has been for the last 10 years, at least.
FWIW, Varnish Software still maintains and supports hitch, but we can't say we see a bright future for it. Both the ergonomics and the performance of not being integrated into Varnish are pretty bad. It was the crutch we leaned on, as it was the best thing we could make available.
I would recommend migrating off within a year or two.
To claim that "the ergonomics and the performance of not being integrated into Varnish are pretty bad", you would need to show some numbers.
In my view, https://vinyl-cache.org/tutorials/tls_haproxy.html debunks the "ergonomics are bad" argument, because using TLS backends is literally no different from using non-TLS ones.
On performance, the fundamentals were already laid out in https://vinyl-cache.org/docs/trunk/phk/ssl.html: crypto is so expensive that the additional I/O of copying in and out of another process makes no difference.
We've been pushing 1.5 Tbps with TLS in lab settings. I've yet to see any other HTTP product able to saturate that kind of network. There is a lot to be said about the threading, but it is able to push a lot of bandwidth.
And yes, I think the ergonomics are bad. Having varnish lose visibility into the transport means ACLs are gone, JA3 and similar are gone and the opportunity to defend from DoS are much more limited.
Crypto was expensive in 2010; it is no longer that expensive. All the serialization, on the other hand, is expensive, and the latency adds up.
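The cost being argued about here is easy to get a feel for. A toy sketch (my own, not from either project): compare an in-process buffer copy with shuttling the same bytes through a local socket pair, which stands in for the extra hop to a TLS sidecar process. A real proxy is far better optimized, so treat the gap only as a rough illustration that the hop is not free:

```python
import socket
import time

BUF = b"x" * 16384     # one 16 KiB chunk, roughly a large TLS record
ROUNDS = 2000

def bench_inprocess() -> float:
    """Time ROUNDS plain in-process copies of BUF."""
    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        bytes(BUF)                      # memcpy, no syscalls
    return time.perf_counter() - t0

def bench_socket_hop() -> float:
    """Time ROUNDS round trips of BUF through a local socket pair."""
    a, b = socket.socketpair()          # stands in for the proxy <-> cache hop
    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        a.sendall(BUF)
        n = len(BUF)
        while n:
            n -= len(b.recv(65536))
    dt = time.perf_counter() - t0
    a.close()
    b.close()
    return dt

print(f"in-process copy: {bench_inprocess():.4f}s  socket hop: {bench_socket_hop():.4f}s")
```

Whether that per-record overhead matters relative to the crypto itself is exactly the disagreement in this thread.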
Every single HTTP server in use out there has TLS support. Users' expectation is that the HTTP server can deal with TLS itself.
Thanks for the info, but I'm a bit confused, sorry.
The reason for hitch was that TLS and caching are different concerns, and the current recommendation is to use haproxy, which also isn't integrated into varnish/vinyl.
But you say that the reason to migrate off hitch is that it's not integrated?
What happened to separation of concerns, then? Is the plan to integrate TLS termination into Vinyl? Is this a change of policy/outlook?
Varnish Software released hitch to facilitate TLS for varnish-cache.
Now that Varnish has been renamed, Varnish Software will keep what has been referred to as a downstream version, or a fork, which has TLS built in, basically taking the TLS support from Varnish Enterprise.
This makes hitch a moot point. I assume it'll receive security updates, but not much more.
Wrt. separation of concerns: Varnish with in-core TLS can push terabits per second (synthetic load, but still). Sure, for my blog that isn't going to matter, but having a single component to run and update is still valuable.
In particular, using hitch/haproxy/nginx for backend TLS is cumbersome.
Totally agree. But, if I may, the docs on Varnish and TLS are hella confusing. I just re-read the Varnish v9 docs, and it's not at all clear whether it supports TLS termination.
Literally every doc, from the install guide to the "beef in the sandwich" page, talks about it NOT supporting TLS termination... and then one teeny paragraph in "extra features in v9.0" mentions 'use the -A flag'...
This is cool! But it's also worth saying out loud: sure, I know it's an open source project so you don't owe anyone anything, but it's also one with a huge company behind it, and this is a huge change of stance. And it sounds cool.
Since perbu was clearly talking with his Varnish Software hat on, here's the perspective from someone working on Vinyl Cache FOSS only:
I already commented on the separation of concerns in the tutorial, and the as-yet-unpublished project which one person from uplex is working on full time will keep the key store in a separate process. You might want to read the intro of the tutorial if you have not done so.
But the main reason the new project will be integrating TLS more deeply has not been mentioned: it is HTTP/3, or rather QUIC. More on that later this year.
haproxy supports both the offload (client-facing) and onload (backend) use cases. That is the main reason I personally prefer it. I cannot comment on how well hitch works in comparison, because I have not used it for years.
Security is not a concern for the purpose of my question here, please ignore that for now. I'm just looking for text summary and search functionality here, not looking to give it full system access and let it loose on my computer or network. I can easily set up VM/sandboxing/airgapping/etc. as needed.
My question is really just about what can handle that volume of data (ideally coping with the quoted sections, duplications, etc. that come with email chains) and still produce useful (textual) output.
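For the quoted-section problem specifically, a lot can be stripped with plain heuristics before any model sees the text, which shrinks the volume considerably. A rough sketch of my own (naive ">"-prefix handling only, no MIME or HTML parsing; function names are made up):

```python
import hashlib

def dedupe_thread(messages: list[str]) -> str:
    """Drop '>'-quoted reply lines, then deduplicate repeated paragraphs
    across a thread, so a summarizer only sees each passage once."""
    seen: set[str] = set()
    kept: list[str] = []
    for msg in messages:
        # drop quoted lines ("> ...") common in reply chains
        body = "\n".join(
            line for line in msg.splitlines()
            if not line.lstrip().startswith(">")
        )
        for para in body.split("\n\n"):
            para = para.strip()
            key = hashlib.sha1(para.lower().encode()).hexdigest()
            if para and key not in seen:
                seen.add(key)
                kept.append(para)
    return "\n\n".join(kept)
```

On a long reply chain, where each message re-quotes everything before it, this kind of pre-pass can cut the input to a small fraction of the raw mailbox size before any summarization happens.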
Yes, this is my experience as well. The software quality is generally horrible. It has certainly improved a lot over the last couple of months, but it is still pretty rough.
It is quite normal for me to have to force-close Claude Desktop.