Hacker News | agartner's comments

There was another really bad incident today: https://www.githubstatus.com/incidents/zsg1lk7w13cf

> We have resolved a regression present when using merge queue with either squash merges or rebases. If you use merge queue in this configuration, some pull requests may have been merged incorrectly between 2026-04-23 16:05-20:43 UTC.

We had ~8 commits entirely reverted on our default branch during this time. I've never seen a GitHub incident quite this bad.


Downtime is one thing. Silently reverting commits on your default branch is something else entirely.

Similar here. Somewhat ironic that a tool that was supposed to prevent merge conflicts was authoring completely mangled commits on our mainline branch.

Yeah, this is crazy; we had many PRs reverted as well, across many repos. Downtime is one thing, but reverting PRs is a failure on another level.

Really makes one appreciate the concept of ACID database transactions.

We've also seen quite a few commits disappear from main while the status of the PRs stayed "merged". It was stressful.

Same, we got an email with a PDF with the affected commits and how to fix. What an absolute clown show.

If anyone is looking to make a similar thing in go I suggest https://pkg.go.dev/mvdan.cc/sh/v3/interp

I've been using it to make my own sandbox that is much more configurable than the default claude code sandbox: https://github.com/gartnera/lite-sandbox


Tailscale policy tests are a bit hard to write but help us have confidence in our changes.

https://tailscale.com/kb/1337/policy-syntax#tests
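For reference, a test stanza lives in the `tests` section of the policy file; a minimal sketch (the user and tag names here are made up):

```jsonc
"tests": [
  {
    "src": "alice@example.com",
    // destinations this user must be able to reach
    "accept": ["tag:prod-web:443"],
    // destinations that must be blocked
    "deny": ["tag:prod-db:5432"]
  }
]
```

Tests are evaluated when the policy is saved, and a failing assertion rejects the change.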


I've tried using policy tests, but as far as I remember you can't test access to specific hostnames, only tags. I know Tailscale ACLs operate on tags, but in tests I want to validate that users can access specific things; validating that they can reach tags isn't very useful. I also don't think the tests should live in the ACL file itself. I would much prefer them to be external, or for the Tailscale CLI to have a command for ad-hoc reachability testing.


I guess it was only a matter of time...

Part of this is fair since there is a cost to operating the control plane.

One way around this is to go back to using check runs. I imagine a third party could handle the webhooks, parse .github/workflows/example.yml, execute the workflow via https://github.com/nektos/act (or similar), then post the result.


inb4 "our webhooks are now 2c per call"


I've been using this for a ~year now and it works very well. Thanks!



Working on decently cool things with relatively limited bureaucracy.


You can do that at established companies. If the cool thing comes to an end you'll often have a boring job you can keep or stay at while you find something else.


I've been patiently waiting to convert my ZFS array to bcachefs. I'm very excited about better SSD promotion logic. But I'm not willing to spend any time on an experimental filesystem on my main systems.

> But you can expect to get flamed about running Debian, or perhaps more accurately, not being willing to spearhead Kent's crusade against Debian's Rust packaging policies.

It is quite unfortunate that Kent couldn't just say "Debian isn't supported; we will revisit this when bcachefs is more stable" and stop talking after that. Debian and experimental software just don't work well together.


Oh, the author's completely misrepresenting what happened here.

We had a major snafu with Debian, where a maintainer volunteered to package bcachefs-tools (this was not something I ever asked for!), and I explained that Debian policy on Rust dependencies would cause problems, and asked him not to do that.

But he did debundle, and then down the road he broke the build (by debundling bindgen and ignoring the minimum version we'd specified), and then _sat on it for months_ without even reporting the issue.

So Debian users weren't getting updates, and that meant they didn't get a critical bugfix that fixed passing of mount options.

Then a bunch of Debian users weren't able to mount in degraded mode when a drive died. And guess who was fielding bug reports?

After that is when I insisted that if bcachefs-tools is packaged for debian, dependencies don't get debundled.

If you're going to create a mess and drop it in my lap, and it results in users not able to access their filesystem, don't go around bitching about being asked to not repeat that.


Yeah just typical Debian stuff. jwz has been ranting about this for years. It's not worth spending any time on it.

Some suggestions:

- Only "support" the latest mainline kernel and latest tools. I prefer to point to CI system configurations to show exactly what is "supported".

- Make this clear via your website and a pinned issue on GitHub.

- Force users to report the versions they use via an issue template: https://docs.github.com/en/communities/using-templates-to-en.... Immediately close any issues not meeting your version/system requirements without further discussion or thought.
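GitHub issue forms support required inputs and checkboxes, so the gatekeeping can be built into the template itself. A sketch of what that could look like (field names and labels are hypothetical):

```yaml
name: Bug report
description: File a bcachefs-tools bug
body:
  - type: input
    id: kernel-version
    attributes:
      label: Kernel version (uname -r)
    validations:
      required: true
  - type: checkboxes
    id: packaging
    attributes:
      label: Packaging
      options:
        - label: I am using distro-packaged (e.g. Debian) bcachefs-tools
```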


That last one’s great advice. I don’t remember if you can use checkboxes there and I’m too lazy to look at the moment, but I could imagine the first question being:

  [ ] I am using Debian packages
and auto-closing if set.


Do you ever admit you're wrong?


I think I did once back in 2002.


I seem to recall a previous fs creator with ego problems was tried and convicted of murder, and then his work unceremoniously disappeared into an oubliette.

I’m 99% sure you’re joking but as an outsider I have… concerns.


It does help to have a sense of humor :)


That was a good one! Keep up your humor. It's a tough environment out there.


Genuinely curious: it seems like you are making a remark on his character, right? But why? Are you just fed up, or did he actually state something wrong in the parent comment?


I've been running bcachefs on my spare dedicated SteamOS gaming machine for fun. Especially for the SSD promotion logic. It's a spare computer with an old 128GB SSD and 3TB HDD that I've got as a single filesystem. I love not having to manage games between the SSD/HDD. Too bad it's a mITX build with no space for more old drives I could stick in.


Here's a quick example I put together on how to use these runners to accelerate docker builds: https://github.com/gartnera/actions-arm64-native-example
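Not a copy of that repo, but the core idea is roughly this (the runner label and action versions are assumptions, not taken from the linked example):

```yaml
jobs:
  build:
    # native arm64 runner, so no QEMU emulation is needed
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - name: Build natively for arm64
        uses: docker/build-push-action@v6
        with:
          platforms: linux/arm64
          push: false
```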


Amazing, exactly what I was looking for. Thank you!


There are attestations that the binaries were built via CI:

https://userdocs.github.io/qbittorrent-nox-static/artifact-a...

Here's a verification of the latest build:

  gh attestation verify x86_64-qbittorrent-nox -o userdocs
  Loaded digest sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 for file://x86_64-qbittorrent-nox
  Loaded 1 attestation from GitHub API
   Verification succeeded!

  sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 was attested by:
  REPO                             PREDICATE_TYPE                  WORKFLOW
  userdocs/qbittorrent-nox-static  https://slsa.dev/provenance/v1  .github/workflows/matrix_multi_build_and_release_qbt_workflow_files.yml@refs/heads/master

