
The source for the site is here: https://github.com/dyne/cjit/tree/main/docs. It's a VitePress site with a custom theme. Glancing through the code, I don't see any obvious signs of LLM coding. It also definitely wasn't created with Codex specifically, because according to the commit history, the first version of the site dates to late 2024, months before Codex was even released.

I don't think that complaining about things necessitates believing that you're entitled to them. I agree that complaining about things you received for free is in rather poor taste, but I don't think that it's morally wrong in the way that you seem to think it is. If an article you read for free had a pop-up ad on it, you have not been wronged in any way and do not have grounds to sue them, but you should be permitted to voice your complaint, so long as it's of the form "I don't like this" and not "look what they subjected me to, those monsters".

Pardon my ignorance, but why does the "447 TB/cm^2" density value use square centimeters instead of a volume unit? Does the information capacity of this material really scale in proportion to area? How? Or is it just a typo?


Fluorographane is a single atomic layer (one carbon atom thick), so its storage density is naturally expressed per unit area. The paper also gives a volumetric density for the nanotape spool architecture (0.4–9 ZB/cm³, Section 4.4).
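To see how an areal figure relates to a volumetric one, here's a back-of-envelope sketch in Python. The ~0.7 nm layer pitch is my own assumption for illustration, not a number from the paper:

```python
# Rough consistency check: stack 447 TB/cm^2 monolayers at an
# assumed pitch and see what volumetric density falls out.
# The 0.7 nm pitch is a guessed interlayer spacing, not a paper value.

AREAL_TB_PER_CM2 = 447        # areal density quoted in the article
PITCH_NM = 0.7                # assumed effective layer spacing (hypothetical)

pitch_cm = PITCH_NM * 1e-7    # 1 nm = 1e-7 cm
tb_per_cm3 = AREAL_TB_PER_CM2 / pitch_cm
zb_per_cm3 = tb_per_cm3 * 1e12 / 1e21   # TB -> bytes -> ZB (decimal units)

print(f"~{zb_per_cm3:.1f} ZB/cm^3")     # ~6.4 ZB/cm^3
```

With that assumed pitch the result lands inside the paper's 0.4–9 ZB/cm³ range for the spool architecture; a real spool would pack less densely than perfectly stacked sheets, which is presumably why the quoted range extends well below this ideal-stacking figure.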


The Copilot in Visual Studio (Code) is not the same as Microsoft's Copilot. The former is GitHub's AI product and the latter is Microsoft's AI product. You can tell them apart because GitHub Copilot's icon is a helmet with goggles and Microsoft Copilot's icon is a colourful swirl thing.

It's wildly confusing branding not only because they're identically-named things that both repackage OpenAI's LLMs, but also because they're both ultimately owned by the same company.

I can only assume that the conflicting naming convention was either due to sheer incompetence or because they decided that confusing users was advantageous to them.


> they're identically-named things that both repackage OpenAI's LLMs

Haven't tried it yet but the GitHub Copilot extension for VSCode also seems to integrate Claude, Gemini and other non-OAI stuff


They do, and those models are served by Microsoft. You pay a premium per “request” (what exactly counts as a request is not fully clear to me) for certain models. If you use the native chat extension in VS Code for GitHub Copilot with the Opus model selected, you are not paying Anthropic; it counts against your GitHub Copilot subscription.

The Claude Code extension for VS Code from Anthropic will use your Claude subscription. But honestly it's not very good - I use it, but only for "open in terminal" (this adds some small quality-of-life features, like awareness that it's running in VS Code, so it opens files in the editor pane next to it).


The best non-Claude Code CLI integration by far has been Zed's, and I prefer Zed over what VS Code has become.


And let’s not forget that Visual Studio Code (the IDE) is not Visual Studio (the IDE).


This is my biggest frustration as a full-time .NET developer. It's especially bad when you're searching for Visual Studio (IDE) specifics and get results for VS Code instead. It bewilders me why a company that owns a search engine names its products so poorly.


> This is my biggest frustration as a full time .NET developer.

Larger than the difference between the .NET Framework (which is a framework) and .NET Core (which is also a framework)?


Copilot for Visual Studio (IDE) has multiple models, not just OpenAI's; it also includes Claude. It is basically a competitor to JetBrains AI.

The only good "AI" editor that supports Claude Code natively has so far been Zed. It's not PERFECT, but it has been the best experience short of just running Claude Code directly in the CLI.


Kurzgesagt didn't invent the concept of disassembling Mercury to build a Dyson swarm. Stuart Armstrong proposed it in a lecture in 2012[0].

[0]: https://youtu.be/zQTfuI-9jIo?si=3jwmhoB7zx6rclhb


Pretty sure the idea predates that lecture; it appears in Charles Stross's novel Accelerando from 2005 (which is based on short stories that were published years earlier).


There are other substances that can be used for reactor coolant. Molten salt reactors are actually substantially more efficient than water-cooled reactors because they have a higher operating temperature. You can also use liquid metal as coolant, such as lead or bismuth.


Could you elaborate? Why would being deep in the gravity well be a non-starter? I thought Mercury's proximity to Sol was a huge advantage because of the ample solar power which would make planet-side manufacturing easier.


They asked if the astronauts "want to risk it", not if it was actually safe. Those are very different questions. The astronauts are, in fact, the world's leading experts on whether or not they personally want to risk it, so it's not entirely unreasonable to think that they could answer that question.

It just depends on whether you think that the fact that they accept the risks is reason enough to let them fly a potentially-dangerous spacecraft.


I know we all have a lot of respect for astronauts, but the fact is that when someone tells them "it's safe enough", they can only trust that it is, actually, safe enough.

Artemis II doesn't need astronauts to do its flights. Astronauts are trained to survive in a spaceship that does not need them to do anything at all. That it is their dream to survive in such a spaceship does not say at all that they have any valid idea of how much risk they are taking.

We can say "maybe the astronauts would accept to fly knowing that they have a probability of 1/30 of dying" all we want, but that doesn't answer the question here, which is: what is the probability that they die?

The article says "we don't really know: the first test flight was very concerning, and we used the exact same methods to prepare the second flight, so we won't really know how unsafe it is until we try it".

Sure, they have run tests on the ground. But the first flight proves that those tests are not enough; otherwise Artemis I wouldn't have had those issues in the first place.


This is a perfect way to put it.

Artemis II is not safe, at least by the standards we apply to things. It's the third flight of a capsule, on the second flight of the rocket, and the first flight of things like the life support system.

At the end of the day, one of the reasons astronauts are respected is they understand those risks, and go into space anyway. That doesn't mean we shouldn't try to minimize risks - but at some point the risk becomes acceptable, and the cost of reducing it too great.

To paraphrase a quote from Star Trek - risk is their business.


Taking a related quote from Dollhouse: "That is their business, but that is not their purpose."


Project Hail Mary. It's a sci-fi novel by Andy Weir (author of The Martian) that was adapted into a movie released in theaters a couple of weeks ago. It's fantastic and you should totally read/watch it.


Unintentional denial-of-service attacks from AI scrapers are definitely a problem, I just don't know if "theft" is the right way to classify them. They shouldn't get lumped in with intellectual property concerns, which are a different matter. AI scrapers are a tragedy of the commons problem kind of like Kessler syndrome: a few bad actors can ruin low Earth orbit for everyone via space pollution, which is definitely a problem, but saying that they "stole" LEO from humanity doesn't feel like the right terminology. Maybe the problem with AI scrapers could be better described as "bandwidth pollution" or "network overfishing" or something.


Theft isn't far off; it seems closer to me than using the word for IP violations.

When a crawler aggressively crawls your site, it is permanently depriving you of the use of those resources for their intended purpose. Arguably, it looks a lot like conversion.


> Arguably, it looks a lot like conversion.

is this why media networks are buying social ai apps


If I took a photo off your photography blog and used it on my corporate website without your say or input, I don't think it would be unfair to call that stealing.

Doing that on a mass scale with an obfuscation step in between suddenly makes it ok? I'm not convinced.


you're totally right that it's not theft, but we already have a term for it. you used it yourself: "distributed denial of service". that's all it is. these crawlers should be kicked off the internet for abuse. people should contact the isp of origin.


Firstly, since this argument is about semantic pedantry anyway: it's just denial-of-service, not distributed denial-of-service. AI scraper requests come from centralized servers, not a botnet.

Secondly, denial-of-service implies intentionality and malice that I don't think is present from AI scrapers. They cause huge problems, but only as a negligent byproduct of other goals. I think that the tragedy of the commons framing is more accurate.

EDIT: my first point was arguably incorrect because some scrapers do use decentralized infrastructure and my second point was clearly incorrect because "denial-of-service" describes the effect, not the intention. I retract both points and apologize.


If they came from centralized servers, they would be easy to block. The whole problem is that they have a seemingly endless supply of source IPs; that means they are "distributed" in every way that matters on the Internet, even if the requests are coordinated centrally.
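One common mitigation for this, as a rough sketch: stop counting requests per IP and aggregate by a coarser key such as a network prefix. The /16 grouping and all addresses below are made-up illustration values, not anyone's real infrastructure:

```python
# Sketch: why per-IP rate counters miss a distributed crawler, and how
# aggregating by an assumed /16 prefix can still surface it.
# All IPs below are documentation addresses used purely for illustration.
from collections import Counter
import ipaddress

requests = [
    "203.0.113.1", "203.0.113.2", "203.0.113.3",   # one hit per IP...
    "203.0.114.9", "203.0.114.10",                 # ...spread across a range
    "198.51.100.7",                                # unrelated visitor
]

per_ip = Counter(requests)
per_net = Counter(
    str(ipaddress.ip_network(f"{ip}/16", strict=False)) for ip in requests
)

print(max(per_ip.values()))     # 1 -- no single IP looks hot
print(per_net.most_common(1))   # [('203.0.0.0/16', 5)] -- but the prefix does
```

Real-world blockers tend to aggregate by announced BGP prefix or ASN rather than a fixed /16, but the principle is the same: the coordination shows up at a coarser granularity than individual addresses.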


ah, no fun, I was going to continue the semantic deconstruction with a whole bunch of technicalities about how you're not quite precisely accurate and you gotta go do the right thing and retract your statements.

boo. took all the fun out of it ;)


The first is incorrect: these scrapers are usually distributed across many IPs, in my experience. I usually refer to them as "disturbed, non-identifying crawlers (DNCs)" when I want to be maximally explicit. (The worst I've seen is some crawler/botnet making exactly one request per IP -_-)


I think the second is incorrect too. A DDoS is a DDoS no matter what the intent is.


I think one could argue that one. Is a DDoS a symptom? In which case intent is irrelevant. Or is a DDoS an attack/crime? In which case it is relevant. We kind of use the term to mean both, but I think it's generally the latter: Wikipedia describes it as a "cyberattack", so actually I think intent is part of our (society's) current definition.


The semantics that make sense to me is that "DDoS" describes the symptom/effect irrespective of intent, and "DDoS attack" describes the malicious crime. But the terms are frequently used interchangeably.


Sufficiently advanced negligence is indistinguishable from malice. There is a point you no longer gain anything from treating them differently.


Yes I completely agree.

