cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.
In case people no longer remember: when China started to require websites to register for a license before being allowed to operate, it was for "protecting the children" too.
This simple policy then went on to silence most individual publishers (self-media) and consolidate the industry into the hands of a few, with no opportunity left for smaller entrepreneurs. This is arguably much worse than allowing children to watch porn online, because this will for sure affect people's whole lives in a negative way.
Also, if the EU really wants "VPN services to be restricted to adults only", they should just fine the children who use them, or their parents for allowing it to happen. The same way you fine drivers for traffic violations, but not the road.
And if the EU still thinks that's not enough, maybe they should just cut the cable, like North Korea did.
> I work on Bun and this is my branch
>
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
>
> I’m curious to see what a working version of this looks like, what it feels like, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
The EU Digital (identity) Wallet EUDI requires hardware attestation by Google or Apple, effectively tying all the digital EU identities to the American duopoly. Talk about digital sovereignty. Apparently protecting the children > sovereignty.
1. First, around 2015, a legal framework was created under the guise of banning pirated media (specifically torrents.ru) (legislative push). A state-wide DNS ban was introduced. Very easy to circumvent by querying 8.8.8.8.
2. Then, having a legal basis, the govt included extra stuff in the banned list (casinos, terrorist orgs, etc.) (executive push). IP bans were introduced, applied very carefully.
3. The law was expanded, allowing the govt to ban specific media on very vague criteria (legislative push). IP blocks were tried on some large websites. ISPs were mandated to install DPI hardware to filter by HTTPS SNI (executive push).
4. Around 2019, Roskomnadzor (RKN) was created, a special govt entity which enforces bans without court orders (legislative push).
5. Around 2021, sites became banned if they did not filter content per Russian law at RKN's request (executive push). VPN services were also obligated to DPI-filter traffic (legislative push).
6. Around 2023, a crackdown on VPNs started (executive push). Popular commercial services were IP-banned, and OpenVPN and IPsec connections were selectively degraded by DPI.
7. Around 2025, heavy VPN filtering (VLESS, WireGuard, etc.) was introduced (executive push). Performance of certain sites (YouTube, Twitter, etc.) was degraded.
Years ago, I joined a company, took over a dev team and was asked to launch the product in 3 months.
They were using AWS, so I logged into the account to add a few more machines. Right there, in front of my eyes, were the signs of an adversarial, abusive relationship.
The UI to fire up a new machine did not show me the price. I had to look up the price in another table that did not have the specs.
I had to have the two tables open, cross check the specs and price.
If I had learned one thing from my past life, it was this: if you see the signs of an abusive relationship and have the option to walk out but don't, all that follows is your own fault.
Created a DigitalOcean account, moved everything over. Set up our CI/CDs to deploy there, and spent the next two months on the product, launching one month earlier than promised.
Some years before that, I saw a video online where a person digs a hole near a river and lays a pipe connecting the river to the hole. The fish push themselves hard through the pipe, straight into their trap. Choosing the path of least resistance, and never backing off from a mistake: a recipe for ending up like those fish. The video left a big impression on me.
Give me the ability to choose what I trust. “You can either trust Apple and nobody else, even yourself, or you can trust literally everybody” is obviously not a good-faith implementation of this. Apple excels at steering the narrative with false conflation and false dichotomies. I’d also remind you of the came-and-went secure boot debate, which Apple successfully steered into “Apple owns the encryption keys” vs. “no encryption”, and people just kind of forgot to ask: wait, why can’t I have the keys to my device?
I am a physics professor and often use Gemini to check my papers. It is a formidable tool: it was able to find a clerical error (a missing imaginary unit in a complex mathematical expression) I was not able to find for days, and it often underlines connections between concepts and ideas that I overlooked.
However, it often makes conceptual errors that I can spot only because I have good knowledge of the topic I am discussing. For instance, in 3D Clifford algebras it repeatedly confuses exponential of bivectors and of pseudoscalars.
Good to know that ChatGPT 5.5 Pro can produce a publishable paper, but from what I have seen so far with Gemini, it seems to me that it is better to consider LLMs as very efficient students who can read papers and books in no time but still need a lot of mentoring.
As someone who has spent a vast portion of my life believing technology would make life better, I've come to the realisation that this idea is a fallacy. Technology amplifies power, and until we collectively redefine and enforce a value system that benefits us all, advancements in technology simply serve as a means of subjugation.
Very impressive that they could do this so quickly because I have been on a similar project (porting TypeScript to Rust) for 5 months. But I guess I don't have access to Mythos and unlimited tokens. I'm also close to 100% pass rate. 99.6% at the time of writing.
Rust is perfect for writing all of your code with an LLM. Its strict type system makes it less likely to make the very dumb mistakes that other languages might allow.
Also want to note that writing the code using LLM doesn't remove the need to have a vision for the design and tradeoffs you make as you build a project. So Jarred and his team are the right kind of people to be able to leverage LLMs to write huge amounts of code.
My concern here is that by gravitating to HTML you lose the ability for a human (you!) to easily co-author the document with the LLM. If it’s just an explainer for your consumption, that’s not a concern - but if it’s a spec sheet for something more complex, I deeply value being able to dive in and edit what is produced for me. With an HTML doc that is much harder to do than with MD.
Now of course you could just reprompt your LLM to change the HTML - but when I already have a clear idea of what I want to say in my head, that’s just another roadblock in the way.
If this pattern becomes more common I suspect human/LLM co-creation will further dwindle in favour of just delegating voice, tone and content choice to the LLM. I was surprised not to see this concern in the blog post’s FAQ.
Least shocking thing I've read about LLMs recently.
They are essentially like that one JPEG meme, where each pass of saving as JPEG slightly degrades the quality until, by the end, it's unrecognizable.
Except with LLMs, the starting point is intent. Each pass through an LLM degrades the intent: in the case of a precise scientific paper, a little bit of nuance, a little bit of precision is lost with a re-wording here and there.
LLMs are mean-reversion machines: the further 'outside their training' the context/workload they are currently dealing with, the more they will tend to gradually pull it toward some homogeneous abstract equilibrium.
Here's how things play out: Zuck gets some idea, he's surrounded by a bunch of yes men who say "yes, this will definitely change the world", then it turns into this optics game of kissing the ring. You ask yourself "how could they blow 80B on the Metaverse like that", this is how.
DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.
It's a toxic, fear-based culture. You join, and the people around you are already thinking about how to scapegoat you. People gatekeep actual work and save it for political favorites, and everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it.
Everyone wants to pin this on the Microsoft acquisition or incompetence, but it seems pretty clear to me from the material.
GitHub has posted that AI has 10xed the amount of code being committed to GH, which has downstream effects everywhere - CI, Actions, code ingestion, everywhere. The author pins it on weird things like MS Copilot, which kind of feels like he’s listing off things he doesn’t like rather than causal factors. This is ignoring the 800-pound gorilla in the room.
Back in the day it was reasonably common for CMSs and forums to have only an index.php, with routing done entirely by query string (in form-urlencoded form, people were not savages). So you would have index.php?p=home and index.php?p=shop. Or index.php?action=showthread&forum=42&thread=17976. It should be immediately obvious that in that scheme, 404 is indeed the correct answer to unknown query parameters.
In fact, lots of sites still work like that; they just hide it behind a couple of rewrite rules in apache/nginx for SEO reasons.
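That single-entry-point dispatch fits in a few lines. Here is my own sketch of the shape of it (route names and the Python stand-in are mine; the originals were PHP):

```python
# Toy sketch of index.php-style query-string routing: one entry point,
# a "p" parameter picks the handler, anything unknown is a 404.
from urllib.parse import urlparse, parse_qs

ROUTES = {
    "home": lambda qs: "home page",
    "shop": lambda qs: "shop page",
}

def dispatch(url: str) -> tuple[int, str]:
    """Return (status, body) for a URL like index.php?p=shop."""
    qs = parse_qs(urlparse(url).query)
    page = qs.get("p", ["home"])[0]      # default page when no ?p=
    handler = ROUTES.get(page)
    if handler is None:
        return 404, "not found"          # unknown query parameter => 404
    return 200, handler(qs)
```

The point being: since the query string *is* the route, an unrecognized value is exactly as much a 404 as an unrecognized path would be.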
This is all way too much. If you see a duplicate idempotency key, skip the replay and always return 409. This becomes a client problem. Clients already need to help enforce idempotent contracts; "check for conflict response" is not an onerous imposition.
I've built multiple ecommerce APIs with this approach and they work great. No heroic measures required. You can often satisfy this contract with a unique constraint; if not, a simple presence check in Redis. No hashing or worrying about PII.
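A minimal sketch of that contract, with an in-memory set standing in for the unique constraint or Redis presence check (all names here are my illustration, not any real framework's API):

```python
# "Always 409 on a duplicate idempotency key" - no stored-response
# replay. The client is responsible for checking what happened to its
# original request. A real service would use a DB unique constraint or
# Redis SETNX instead of this in-memory set.
seen_keys: set[str] = set()

def handle_order(idempotency_key: str, payload: dict) -> tuple[int, dict]:
    """Return (status_code, body)."""
    if idempotency_key in seen_keys:
        return 409, {"error": "duplicate idempotency key"}
    seen_keys.add(idempotency_key)
    # ... create the order here ...
    return 201, {"order": payload}
```

Note there is nothing to hash and no request body to persist, which is exactly why no PII handling is needed.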
Any user who does not like Gatekeeper can turn it off on their machine in ten seconds by running this in a Terminal:
sudo spctl --master-disable
People will say, no, that’s too big a hammer, it’s not safe… but then, like, what do you actually want? Either you keep Gatekeeper because you like the friction it introduces, or you don’t like that friction and you should go turn it off. Pick one, you obviously can’t have both!
Of course, you as the developer can’t make this choice for your users… but isn’t that as it should be? The user decides what code is allowed to run on their machines. And the default setting is restrictive because anyone who knows what they’re doing can easily change it.
P.S. Meanwhile, on iOS there’s no way to install unsigned software at all, and on Android (starting soon) the process takes 24 hours instead of ten seconds. That is actually ridiculous because it’s taking away user choice.
P.P.S. To be clear, modern macOS has plenty of other restrictions which can’t really be turned off and which I find super annoying. Gatekeeper just isn’t one of them.
Edit: I’ve just learned that as of Sequoia, you have to also tick a box in Settings after running the Terminal command. So maybe it takes 30 seconds instead of ten seconds. That’s mildly more annoying, but still doesn’t really seem like a big deal to me.
I'm suspicious of their results with regards to tool usage.
It's unsurprising that round-tripping long content through an LLM results in corruption. Frequent LLM users already know not to do that.
They claim that tool use didn't help, which surprised me... but they also said:
> To test this, we implemented a basic agentic harness (Yao et al., 2022) with file reading, writing, and code execution tools (Appendix M). We note this is not an optimized state-of-the-art agent system; future work could explore more sophisticated harnesses.
And yeah, their basic harness consists of read_file() and write_file() - that's just round-tripping with an extra step!
The str_replace and insert commands are essential for avoiding round-trip risky edits of the whole file.
They do at least provide a run_python() tool, so it's possible the better models figured out how to run string replacement using that. I'd like to see their system prompt and if it encouraged Python-based manipulation over reading and then writing the file.
> You can approach the task in whatever way you find most effective: programmatically or directly by writing files
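The difference is easy to see in code. This is my own sketch of what a str_replace-style edit tool looks like, not the paper's or any vendor's actual implementation:

```python
# A str_replace-style edit: only the changed span passes through the
# model, so the rest of the file is never re-generated (and silently
# corrupted) by a full read-then-write round trip.
def str_replace(path: str, old: str, new: str) -> None:
    with open(path) as f:
        text = f.read()
    if text.count(old) != 1:
        # Requiring a unique match keeps the edit unambiguous.
        raise ValueError("old must match exactly once")
    with open(path, "w") as f:
        f.write(text.replace(old, new))
```

With read_file()/write_file() only, every edit re-emits the entire file from the model's context; with an edit primitive like this, the untouched bytes are copied verbatim.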
As with so many papers like this, the results of the paper reflect more on the design of the harness that the paper's authors used than on the models themselves.
I'm confident an experienced AI engineer / prompt engineer / pick your preferred title could get better results on this test by iterating on the harness itself.
You know, I was actually really curious about this, so I went back to the HTML and URL W3C standards, and surprisingly they don't actually define any format beyond percent encoding. One might conflate query strings with "form-urlencoded"[0] query strings, which is one potential interoperability format, but in general a query string is just any percent-encoded string following a "?" in a URL[1], and just another property of the "URL" HTML object that can be used in the generation of a response. While there is additionally a URLSearchParams object that is the result of parsing the query string with the form-urlencoded parser, this is simply an interoperability layer for JavaScript.
I'm going to be honest: I was pretty geared up to have a contrarian opinion until I looked at the standards, but they're actually pretty clear. A 404 could be a proper response to an unexpected query string; the query string is as much part of the URL API as the path is, and I think pretty much everyone can acknowledge that just tacking random stuff onto the path would be ill-advised and undefined behavior.
This is a really good thread on why this technology is becoming a problem for "open" anything. The argument "we can create our own separate web" is fine until all of your services are behind a web that locks you into owning a Google-approved or Apple-approved mobile device.
IA needs to do what Usenet has done. Have a bunch of mission-aligned but unrelated orgs (under different ownership and distributed around the world) that peer with each other, distribute all the content obtained by any of the orgs to each other, but that have no technical channel nor capability to distribute DMCA complaints and takedown requests.
This is (AFAIK) basically how Usenet piracy works. You send your warez to one provider, and that provider instantly replicates them to all the providers they peer with, recursively, until they eventually reach the entire network. When any of those providers get a DMCA complaint, they remove the offending files (as they're required to do by law), but they don't inform other providers that they've received a DMCA notice, so those providers keep serving those files. This makes it much harder to remove data from the network than it is to add it.
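A toy model of that flood-fill replication (my own illustration of the idea, not Usenet's actual NNTP feeding protocol):

```python
# Uploads flood recursively to all peers; takedowns stay local to the
# provider that received the complaint, so the rest of the network
# keeps serving the file.
class Provider:
    def __init__(self, name: str):
        self.name = name
        self.peers: list["Provider"] = []
        self.files: set[str] = set()

    def upload(self, file: str) -> None:
        if file in self.files:
            return                 # already have it; stops flood loops
        self.files.add(file)
        for peer in self.peers:
            peer.upload(file)      # replicate recursively

    def dmca_takedown(self, file: str) -> None:
        self.files.discard(file)   # removed locally; peers not informed
```

The asymmetry the comment describes falls out directly: adding data is one call that reaches everyone, while removing it requires a separate legal complaint per provider.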
> Internet Archive Switzerland joins a growing group of mission-aligned organizations, alongside Internet Archive, Internet Archive Canada, and Internet Archive Europe. Together, these independent libraries strengthen a shared vision: building a distributed, resilient digital library for the world.
> seriously want to stop adolescents from watching porn
No, they want to pretend this is the issue, so that pervasive monitoring, permissioning, and/or deanonymization is normalized. It is to serve the state apparatus rather than provide any actual protection.
I am one of the original authors of Space Cadet Pinball and I just want to say it is absolutely wonderful there are people who love our old pinball game enough to keep it alive. You made my day.
I am forwarding this post to my Cinematronics co-founders and friends, Mike Sandige (lead engineer) and Kevin Gliner (designer and product manager). They will enjoy seeing this as much as I did.
I made the tragic mistake of getting a Bambu printer (an X1C, with AMS even...) right before they gave all of us the middle finger. I now have it offline, running out of date firmware, connected to a special WiFi network that is isolated from the Internet.
That upset me, but now I'm pissed. Now I don't even care about their stupid printers. Now I'd like to waste Bambu Lab's time and cause problems for them.
And also, while this X1C should be going strong for years, my eyes are on Prusa should I want another printer any time soon for any reason. Less polished or not, they seem like they're still better for consumers even though they are apparently less open than they used to be. But I'm of course interested in hearing what people recommend, too. (I got an X1C because I knew it would be simple, but I don't particularly mind getting my hands dirty or anything. I did build an Ender 3 kit before that.)