This also smells of an autoregressive model trying to make a point that TiinyAI simply forked another repo and claimed it as their own invention, before realizing mid-paragraph that it's by the same people:
>So no, TiinyAI did not “launch” PowerInfer. SJTU researchers did.
>TiinyAI’s GitHub repo is a fork of the original PowerInfer repository. At least one of the original academic authors appears tied to the code history. So there is clearly some real overlap between the research world and the product world.
ARM/RISC-V extensions may be another reason. If a widespread variant configuration exists, why not build for it? See:
- RISC-V's official extensions[1]
- ARM's JS-specific float-to-fixed[2]
This seems to be from Astral, the organization behind ruff[1], uv[2], and rye[3] (all good things!).
They seem to force using python -m pip on Windows since they don't ship pip.exe[4]? This is very interesting.
From their docs[5], since the README.md is super bare:
> These Python distributions contain a fully-usable, full-featured Python installation: most extension modules from the Python standard library are present and their library dependencies are either distributed with the distribution or are statically linked.
> The Python distributions are built in a manner to minimize run-time dependencies. This includes limiting the CPU instructions that can be used and limiting the set of shared libraries required at run-time. The goal is for the produced distribution to work on any system for the targeted architecture.
TL;DR: Why not add a capability/permissions model to CI?
I agree that pinning commits is reasonable and that GitHub's UI and Actions system are awful. However, you said:
> Maybe accounts should even require ID verification
This would worsen the following problems:
1. GitHub actions are seen as "trustworthy"
2. GitHub actions lack granular, default-deny permissions
3. Rising incentives to compromise developer machines, including via the $5 wrench[1]
4. The risk of identity documents being stolen in a breach
> It's time to take things seriously.
Why not add strong capability models to CI? We have segfaults for programs that touch memory they shouldn't, right? Let's expand on the idea. Stop an action run when:
* an action attempts unexpected network access
* an action attempts IO on unexpected files or folders
The US DoD and related organizations seem to like enforcing this at the compiler level. For example, Ada's got:
* a heavily contract-based approach[2] for function preconditions
* pragma capabilities to forbid using certain features in a module
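For those unfamiliar with Ada's contracts, here's a rough runtime analogue in Python; the precondition decorator and the withdraw example are made-up illustration, and Ada/SPARK can discharge such contracts statically, which Python cannot:

```python
import functools

def precondition(check, message="precondition violated"):
    """Rough runtime analogue of Ada's Pre aspect: reject the call
    before the function body ever runs if the check fails."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not check(*args, **kwargs):
                raise ValueError(message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical example; Ada would express this as `with Pre => Amount > 0`.
@precondition(lambda amount: amount > 0, "amount must be positive")
def withdraw(amount):
    return f"withdrew {amount}"
```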
Other languages have inherited similar ideas in weaker forms, and I mean more than just Rust's borrow checker. Even C# now requires an explicit declaration before a parameter will accept null values[3].
Some languages are taking a stronger approach. For example, Gren's[4] developers are considering the following for IO:
1. you need to have permission to access the disk and other devices
2. permissions default to no
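That "default no" can be sketched in Python as capability-passing: IO helpers demand an explicit permission token that nothing hands out by default. DiskPermission and read_file are hypothetical names of mine, not Gren's actual design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiskPermission:
    """A capability token: holding one *is* the permission."""
    root: str

def read_file(perm, path):
    # "Default no" is structural: without a DiskPermission covering the
    # path, there is no code path that reaches the disk.
    if not isinstance(perm, DiskPermission) or not path.startswith(perm.root):
        raise PermissionError(f"no capability for {path}")
    with open(path) as f:
        return f.read()
```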
> We can't afford to fuck around anymore,
Sadly, the "industry" seems to disagree with us here. Do you remember when:
1. Microsoft tried to ship 99% of a credit card number and SSN exfiltration tool[5] as a core OS component?
2. BSoD-as-a-service stopped global air travel?
It seems like a great time to be selling better CI solutions. ¯\_(ツ)_/¯
My understanding of black is that it solves bikeshedding by making everyone a little unhappy.
For aligned-column readability and other scenarios, # fmt: off and # fmt: on become crucial. The problem is that, like # type: ignore, they start spreading if you're not careful.
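For instance, a hand-aligned table like this (made-up data) only survives black inside the markers:

```python
# fmt: off
# Hand-aligned rate table; without the markers, black would collapse
# the extra alignment spaces after the keys.
RATES = {
    "standard": 0.20,
    "reduced":  0.05,
    "zero":     0.00,
}
# fmt: on
```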
My only complaint with black is that it only splits long definitions into one-argument-per-line when they exceed the line-length limit. That limit is probably configurable, now that I write it down.
Other than that, I actually quite like its formatting choices.
TL;DR: Octo[1] and OctoJam were cozy little highlights despite the grimness of the pandemic years.
Octo[1] targets variants of CHIP-8, an ancient virtual console. The language is so different from daily work that even its annoyances were refreshing. Yes, that includes having to overwrite parts of instructions to get desired behavior.
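For anyone curious about that quirk: CHIP-8 code and data share one flat RAM, so patching an instruction's operand byte is just a store. A tiny Python sketch using the real 6XNN ("VX := NN") opcode encoding; the helper names and addresses are my own toy framing:

```python
# CHIP-8 code and data live in the same 4 KiB RAM, so self-modification
# is just a memory write.
RAM = bytearray(4096)
V = [0] * 16  # the sixteen 8-bit registers V0..VF

def store_opcode(addr, opcode):
    RAM[addr] = opcode >> 8        # big-endian: high byte first
    RAM[addr + 1] = opcode & 0xFF

def step(pc):
    opcode = (RAM[pc] << 8) | RAM[pc + 1]
    if opcode & 0xF000 == 0x6000:  # 6XNN: VX := NN
        V[(opcode >> 8) & 0xF] = opcode & 0xFF
    return pc + 2

store_opcode(0x200, 0x6A05)  # program start: V[0xA] := 0x05
step(0x200)
RAM[0x201] = 0x2A            # overwrite the operand byte in place...
step(0x200)                  # ...and the same instruction now loads 0x2A
```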
The maintainer has moved on[2] to working on Decker[3], but I'm still grateful for his dedication. He underestimates his contributions to encouraging a new generation of emulator developers. I haven't had time to do a deep dive into emulation beyond CHIP-8, but I've enjoyed the small things I've made.
Others have done far better. Timendus even wrote his own linker toolkit to build a multi-tasking operating system[4]. If you're interested, there have been occasional rumblings on the EmuDev Discord server about an October event of some sort to fill the gap left by OctoJam's end.
TL;DR: How much of this is a potential class-action[1] and how much of this is failure to deliver on AI?
Am I missing something? On one hand, I think I get it: Intel hasn't historically been a GPU company. On the other, this quote seems suspicious given that Intel's 13th- and 14th-gen cores have issues:
> Simply put, we must align our cost structure with our new operating model and fundamentally change the way we operate
At the same time, my understanding is that AMD seems ahead[2] of Intel in AI / CUDA-alternative support. This quote seems to be a nod to that without saying much else:
> Our revenues have not grown as expected — and we’ve yet to fully benefit from powerful trends, like AI. Our costs are too high, our margins are too low.
Before anyone points out that "Intel® Extension for PyTorch*" exists[3]:
1. That seems to be the official name (what?)
2. Their installation homepage seems a little convoluted[4]
> This isn't X. It's Y.