Does WAL really offer multiple concurrent writers? I know little about DBs and I've done a couple of Google searches and people say it allows concurrent reads while a write is happening, but no concurrent writers?
Not everybody says so... So, can anyone explain what's the right way to think about WAL?
No, it does not allow concurrent writes (with some exceptions if you get into it [0]). You should generally use it only if write serialisation is acceptable. Reads and writes are concurrent except for the commit stage of writes, which SQLite tries to keep short but is workload- and storage-dependent.
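A minimal sketch of that reader/writer behavior using Python's sqlite3 (table name and file path are just illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None gives us explicit transaction control.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")

# Open a write transaction and leave it uncommitted...
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (2)")

# ...and a second connection can still read the last committed state.
# (Under the default rollback journal, this read could instead block
# or fail with "database is locked".)
reader = sqlite3.connect(path, isolation_level=None)
rows = reader.execute("SELECT x FROM t").fetchall()
print(rows)  # [(1,)] -- the in-flight write is invisible to readers

writer.execute("COMMIT")
```

After the writer commits, a fresh SELECT on the reader sees both rows; what WAL never gives you is two of those `BEGIN IMMEDIATE` transactions open at the same time.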
Now this is a more controversial take, and you should always benchmark against your own traffic projections, but:
consider that if you don't have a ton of indexes, SQLite's raw throughput is so good that for many access patterns you'd already have to shard a Postgres instance before SQLite's single-writer limitation became the bottleneck.
Thanks! I even run SQLite in "production" (is it production if you have no visitors?) with WAL mode enabled, but I had to work around concurrent writes, so I was really confused. I may have misunderstood the comments.
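For what it's worth, the usual workaround is exactly that: accept that writes are serialized and make a blocked writer wait instead of failing. In Python's sqlite3 the connect `timeout` does this (the file path and table here are just illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

def open_db(db_path):
    # timeout: how long a blocked writer waits for the single write
    # lock before raising "database is locked" (SQLITE_BUSY).
    conn = sqlite3.connect(db_path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")
    return conn

a = open_db(path)
b = open_db(path)
a.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
a.commit()

# Writes are serialized: these succeed because each commits before
# the other needs the write lock; overlapping write transactions
# would instead queue up (for up to the timeout).
a.execute("INSERT INTO log VALUES ('from a')")
a.commit()
b.execute("INSERT INTO log VALUES ('from b')")
b.commit()
count = b.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

Keeping write transactions short is what makes this acceptable in practice; a long-running write transaction stalls every other writer for its whole duration.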
I don't know what to say. People keep saying these engineers exist, and here I am never having seen a single one, and I follow many indie hacker communities.
A devops coworker found my blog and asked me how I host it, whether it's Kubernetes. I told him it's a dedicated server, and he seemed amazed. And this was just a blog. It's real.
Devops engineers who didn't know cable management 101 or even what a cage nut is, who were amazed to see a small office running 3 used Dell servers bought dirt cheap, shocked that it sounded like an air raid when they booted up, and who thought hot swapping was just magic.
It has always been this way: back in the 80s and 90s, programmers were shaking their heads when people stopped learning assembly and trusted the compilers fully.
This is nothing new and hardly shocking. New skills are only learned if they're valuable; otherwise, the layer one level below seems like magic.
My point is that none of these coworkers have ever been at that stage. He was surprised about me hosting something because he seems to think hosting is expensive and for companies. He went straight in at the top end: k8s and microservices.
There are plenty of people who got a CS degree and went to work, and this is only a job for them; they have no interest outside of work. Unfortunately I'm not one of those people, so I get off work troubleshooting issues just to troubleshoot issues at home, lol. There aren't that many, though: it's just my choice to self-host cameras through HomeKit, which sometimes falls apart somehow, but I'm also squeezing every KB of RAM out of that Beelink that I can.
Don't get me wrong, I don't think a homelab is necessary, but I think people who have only done this in a big corporate environment are doing themselves a disservice. Either a small company or a homelab can scratch that itch, but like you say, a lot of people don't have the interest.
It's like a developer who went straight from knowing nothing about programming to JavaScript and never looked back. They missed C, they missed assembly, they missed cycle counting, they missed knowing what your memory footprint is at all times in your application, they missed keeping your inner loops tight and in the cache... It's not just "oh this person doesn't have a nerdy hobby." These are real skill holes in [many] developers' backgrounds, just like knowing how to host something on bare metal+OS is a real skill hole for some devops people.
I once interviewed at a small print shop that was proudly throwing out every AWS product name when describing their stack. They serve a few hundred customers, and their previous system worked for decades entirely over email and a web form. I decided I wasn't interested around the point where he explained how they're migrating to lambdas.
Hey, devs aren't the only ones who fall into the premature optimization trap! Everyone from the CTO envisioning the scale of their future startup down to the IT intern is influenced by this. Plus, it's in the best interest of a dedicated infra guy to have a lot of dedicated infra: if you don't manage people, k8s can become your kingdom and its size a badge of importance.
In this case I think it was a bit of CTO envisioning scale, then a bit of CTO genuinely overestimating what is needed, plus a good amount of CTO just being the average nerdy dev who likes the idea of shiny toys and cool sounding stuff - "we're running on k8s!".
A year or so after I left, they ran out of money. They would have lasted longer if the infra guy had just stayed the backend guy and helped get projects done more quickly, instead of building shiny k8s setups for projects with a dozen end users per day. Recently I saw that the CTO has started a new startup, and ironically the only person he took with him onto the new team looks to have been the infra guy!
I don't blame the infra guy; he genuinely believed he was doing the right thing.
I am not sure WireGuard existed at the time, and I used SoftEther and based it all on doing outbound tunnels to TCP/443* to avoid firewall blocks in corporate networks.
Thanks! Yes, Tela already does UDP hole-punching. I made Tela because I wasn't allowed to install Tailscale on my new corporate laptop, and no other available solution seemed to tick the right boxes. It started as a simple way to RDP to my home workstation, but then I realised that if I could do that, I could finally pull my ad-hoc home cloud into one tool. The hub model is very much by design, for organisational purposes. The hole-punching feature gives me the P2P speed (and even STUN, if available). An upcoming version will allow hub-to-hub topologies.
It should have occurred to me that tela is also Spanish, since about every third word I hear in a Tagalog sentence seems to be of Spanish origin.
Does tela create an L3 network? If so, what do you do to avoid IP address clashes? In Wormhole I decided to use CGNAT addressing (100.64.0.0/10) by default.
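For reference, 100.64.0.0/10 is the RFC 6598 shared address space, so clashes with ordinary RFC 1918 LANs are unlikely by construction. A quick membership check with Python's stdlib:

```python
import ipaddress

CGNAT = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

def is_cgnat(addr: str) -> bool:
    return ipaddress.ip_address(addr) in CGNAT

print(is_cgnat("100.64.0.1"))       # True  -- inside the /10
print(is_cgnat("100.127.255.255"))  # True  -- last address in the /10
print(is_cgnat("100.128.0.0"))      # False -- just past the end
print(is_cgnat("192.168.1.1"))      # False -- ordinary RFC 1918 space
```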
I did not go too far unfortunately, so I did not face problems such as discoverability (do you have to know/remember all the IP addresses from the devices connected? DNS? etc).
No, it doesn't create an L3 network like Tailscale does. A client (a machine running the tela CLI) connects to an agent (a machine running telad) via a hub (a machine running telahubd), but once they connect they negotiate a P2P route if they can. That's all managed by WireGuard and gVisor. The remote service is forwarded to a port on localhost, so SSH to a VM somewhere else would just be SSH to, say, localhost:10022. I'm investigating a local DNS so that users can instead type `ssh paul@dev-vm` instead of `ssh -p 10022 paul@localhost`.
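Until that local DNS exists, an `~/.ssh/config` alias gets most of the same ergonomics today (the host name, port, and user below are just the examples from the comment):

```
# ~/.ssh/config
Host dev-vm
    HostName localhost
    Port 10022
    User paul
```

Then plain `ssh dev-vm` connects to the forwarded port on localhost.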
I am working on an internal tool for one of my customers in the same arena, and it is incredible how little creativity LLMs have: they end up producing similar-looking software that explores the same "novel" ideas.
And yeah, we've noticed the same thing: when you ask LLMs to generate "novel" tooling ideas, they tend to converge on very similar patterns. Many projects end up looking alike because they focus mostly on model orchestration.
That’s actually part of why we started building SiClaw. For AIOps to work in real infrastructure, the harder problems are things like security boundaries, multi-tenant environments, and how agents reason through diagnostics safely.
Sometimes I feel some of these rewrites of, or competitors to, existing products just completely miss the point. We don't use ngrok for its latency. I'd say the number of ngrok users who care that much about the latency you can shave off using QUIC is negligible.
Do you have any use cases where this is important?
Also, it takes 10 minutes to find a valid football stream, even without a VPN. Such is life.