I am not a lawyer, but my understanding is that the relevant legal precedent is NEC v. Intel, which established that clean-room software development is not infringing, even if the result performs the same functionality as the original.
As an aside, this clean-room engineering is one of the plot points of Season 1 of the TV show Halt and Catch Fire, where the fictional characters do this with a BIOS image they dumped.
You can write automated refactoring with clang tools if you need AST-level knowledge across your project (or monorepo).
I’m not sure if there are other public examples leveraging this, but Chromium has this document [0] with a few examples, and there are also the clang-tidy docs [1].
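As a tiny sketch of what that looks like (my own example, not taken from either doc; the `strcpy` target and the `build/` compilation-database path are hypothetical), clang-query lets you interactively prototype the AST matchers that clang-tidy checks and refactoring tools are built on:

```
$ clang-query -p build/ src/main.cpp
clang-query> match callExpr(callee(functionDecl(hasName("strcpy"))))
```

Once a matcher selects the right nodes, the same expression can drive a clang-tidy check or a LibTooling rewrite across the whole compilation database.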
I’ve recently acquired some photo books that don’t appear to have any ISBN but are listed on WorldCat with OCLC Numbers and are catalogued in the Japanese National Diet Library. Not sure whether they actually don't have ISBNs or I just haven't been able to find them, but from the research I've done, it seems quite common for self-published books to lack one.
I'm not sure if you're aware, but there's the Wheel Variants proposal [0] that the WheelNext initiative is working through that was presented at PyCon 2025 [1][2], which hopes to solve some of those problems.
uv has implemented experimental support, which they announced here [3].
If you watch the video, one of the reasons the AI was winning is that it was using “meta” information from the Street View camera images, not necessarily that it was successfully identifying locations purely from the landmarks in the image.
> I realized that the AI was using the smudges on the camera to help make an educated guess here.
Pro GeoGuessr players do the same thing. The vividness of the colors and weirdness in the sky are two examples I've seen Rainbolt use in the past (and he's not even the best).
Things like cargo-crev [0] or cargo vet [1] aim to tackle a subset of that problem.
There are also alternative implementations of crev [2] for other languages, but I’m not sure about the maturity of those integrations and their ecosystems.
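For anyone who hasn't tried them, the rough shape of the workflow (a sketch from memory, so double-check against each tool's docs) is something like:

```
# cargo-vet: track audits per dependency in your repo
cargo vet init      # set up the audit metadata
cargo vet           # fails if any dependency lacks an audit or exemption
cargo vet certify   # record an audit you've performed yourself

# cargo-crev: web-of-trust code reviews shared between users
cargo crev verify   # check dependencies against reviews from people you trust
```

The two take different angles on the same problem: vet keeps audits local to a project (and lets you import audits published by others), while crev builds a distributed web of trust between reviewers.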
The University of Waterloo has a similar course, CS452: Real-time Programming.
It’s not quite the same as having physical access to the train set, but a student eventually wrote a simulator for the Märklin train set [0]. Another student wrote an emulator for the TS-7200 used for the class [1] if you don’t want to test your kernel in QEMU.
I don't own AirPods, but one thing I've struggled with since headphone jacks started disappearing is that on every pair of Bluetooth headphones/earbuds I've tried, the lowest volume setting is still too loud. I normally use Shure SE215s wired, but I've tried the Sennheiser PXC 550, Sony WH-1000XM3, and Jabra Elite 7 Sport with similar impressions, and I even tried using my 2021 work MacBook Pro as the audio source instead of my phone. Surely I'm not the only one who feels this way?
On my Samsung phone, I've had to manually set individual app volumes to 80% via Sound Assistant, have additional volume steps enabled, and have the system sound set to the lowest setting when using Bluetooth.
I bought 1MORE BT earbuds per Wirecutter’s recommendation and have had the exact same problem, complete with it aggravating my tinnitus. The minimum volume seems highest when connected to my iPad.
Rob ter Horst's (The Quantified Scientist) test results compared against a fingertip pulse oximeter (including readings taken at ground level and on flights) seem to indicate that they're okay for detecting whether your SpO2 readings are normal or abnormal. Basically, a one-off abnormal reading can be a false positive, but you're unlikely to get consistently false positive results.
See the video for the Apple Watch Series 6 [0], and Series 7 [1].
There are also tests for the Series 8 [2], although they don't include data collected in a low-oxygen environment.