Hacker News — dbdr's comments

If that's the right approach, then it would be useful to make that library public as a crate, because writing such hardened code is generally useful. Possibly as a step before inclusion in the rust stdlib itself.
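As a hypothetical sketch of what such a crate might expose (the function name and API here are invented for illustration, not taken from any existing project), hardened helpers typically refuse to fail silently, e.g. by surfacing integer overflow instead of wrapping:

```rust
/// Hypothetical hardened helper: sums a slice of u64 values,
/// returning None on overflow rather than wrapping or panicking.
fn checked_sum(values: &[u64]) -> Option<u64> {
    values.iter().try_fold(0u64, |acc, &v| acc.checked_add(v))
}

fn main() {
    // Normal case: the sum fits in a u64.
    assert_eq!(checked_sum(&[1, 2, 3]), Some(6));
    // Overflow is reported explicitly instead of wrapping around.
    assert_eq!(checked_sum(&[u64::MAX, 1]), None);
    println!("ok");
}
```

Publishing a collection of such helpers as a standalone crate would let other projects reuse them, and real-world usage would be useful evidence if any of them were later proposed for the standard library.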

The GPL prevents you from reading the licensed code before writing related non-GPL code? Which section of the GPL says that?

It's based on an interpretation of "derived from".

It does not matter if it's in the GPL explicitly or not since we're talking about uutils and their stance on it, and they've written that:

https://github.com/uutils/coreutils/blob/6b8a5a15b4f077f8609...

> we cannot accept any changes based on the GNU source code [..]. It is however possible to look at other implementations under a BSD or MIT license like Apple's implementation or OpenBSD.

The wording of that clearly implies that you should not look at GNU source code in order to contribute to uutils.


"we cannot accept any changes based on the GNU source code" is false. They are choosing not to accept it.

"We cannot accept it without issuing a breaking change to the project by significantly changing the license terms."

"clearly implies"

Hmmmm....


This is clean room implementation 101, and why LLMs are so controversial in terms of licensing.

I didn't downvote, but I feel the last two points show a lack of nuance. It's saying "Rust doesn't prevent 100% of the bugs, like all other programming languages", while failing to acknowledge that if a programming language prevents entire classes of bugs, it's a very significant improvement.

Nobody disputes that Rust is one of the programming languages that prevent several classes of frequent bugs, which is a valuable feature when compared with C/C++, even if that is a very low bar.

What many do not accept is the claim, made by some Rust fans, that rewriting a mature and very large codebase from another language into Rust is likely to reduce the number of bugs in that codebase.

For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many other people that in most cases a rewrite from scratch is much more likely to have bugs, regardless of the programming language it is written in.

If someone has the time, a rewrite is useful in most cases, but it should be expected that, after the project is completed, a long time will pass before it has as few bugs as the mature project it replaces.


As other people have mentioned, the goal of uutils was not "let's reduce bugs in coreutils by rewriting it in Rust", it was "it's 2013 and here's a pre-1.0 language that looks neat and claims to be a credible replacement for C, let's test that hypothesis by porting coreutils, giving us an excuse to learn and play with a new language in the process". It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.

Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.

You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.


But are the current uutils developers the same as the 2013 developers? At least based on GitHub's graphs, that's not the case (it looks fairly bimodal to me), and so it wouldn't be unreasonable to treat the 2013-era project differently to the 2020-era project. So judging the 2020-era project for its current and ongoing failures does not seem unreasonable.

Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. Multiple privilege-escalation tools already exist (doas being the first that comes to mind), and building something better without claiming the name "sudo" (instead providing a compat mode, à la podman for docker) seems to me a better long-term path than causing more breakage (and, as uutils has shown, breakage in "core" utils can easily lead to security issues).

I personally find uutils' lack of care concerning because I've been writing (as a very low-priority side project) a network utility in Rust, and while it isn't aiming to be a drop-in rewrite of anything, I would much rather not attract the same drama.


doas and sudo-rs occupy different niches, specifically doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.

> It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.

What the motivation and intent was in 2013 is not necessarily relevant to what the motivation and intent is now.

It's even less relevant to what the effect is: the goal may be to replace $FOO software with $BAR software, but as things stand right now $FOO is "GPL" and $BAR is "MIT".

So, yeah, I don't want them to succeed at their primary goal, because that replaces pro-user software with pro-business software.


> its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme

No, they openly refuse to accept any GPL code. They even have a strict policy against reading GPL code at all.


No, once you have an MIT-licensed codebase without a copyright assignment scheme, you no longer have the freedom to relicense it at will. You could attempt to have a mixed-license codebase, which is supported by the GPL, and specify that all new contributions must accept the GPL, but this is tantamount to an incompatible fork of the project from the perspective of any downstream users, and anyone who insists on contributing code under the GPL has the freedom to perform this fork themselves.

This is simply false. You can accept GPL contributions and clearly indicate the names of the contributors as required by MIT. There is no "incompatible fork".

No, GPL and MIT have significantly different compliance requirements. You cannot suddenly begin shipping code with stricter compliance requirements to downstream users without potentially exposing them to legal liability.

It's not a low bar when C/C++/D are basically the only languages in which you can write certain kinds of programs.

It also means that in English:

> Digital:

> [...]

> 6) of or relating to the fingers or toes. Ex: digital dexterity


When you are reading a book, you certainly need to use your attention. However, you stay in a given topic/world for a sustained amount of time. This feels very different, and much less tiring, than scrolling on your phone jumping from topic to topic. Especially social media feeds, which have been optimized to keep you using them as long as possible (dopamine hits and all).

Newspapers are probably an intermediate between those two, to various degrees depending on the specific newspaper (trash vs deeper analysis).


That's great if it works. But it's way harder to produce a formal proof. So my expectation is that this will fail for most difficult problems, even when the non-formal proof is correct.

Even if they take the same number of ticks, shouldn't the fact that xor fundamentally needs less work also mean it can be performed while drawing less power and generating less heat, which is just as much an improvement in the long run?

That wasn’t much of a concern in the 70s and 80s.

Also, you probably spend much more energy moving the bits around the chip and out to RAM than you do on the actual calculation.

*user-mode code.

Your city/rural distinction is insightful. I think it can be taken into account relatively easily: explicitly name the cities/locations where the requirement would apply, possibly based on some objective criterion like population density.

Can such policies be implemented individually by cities?

Not sure about the legal frameworks in the US but that’s exactly how it works in most places in the UK. Cities have restrictions for on-street parking (metered, permitted, illegal) whereas the towns and villages don’t (unless they also bring in bylaws to help with congestion).

In the US it varies a lot based on what state you're in. Some states give the cities a wide latitude for such policies, but some states (notably 'red' ones where the state government is likely to be conservative and the cities are likely to be liberal) do not grant cities the flexibility to make ordinances like this.

There are also restrictions such as residents-only parking, in both cities and towns.

> not doing the proper reviews, but once again, this is not remotely unique to AI code; this is what 99% of companies are already doing.

But is the scale similar, or will AI coding make the problem significantly worse?


If you're not doing code review, you're not doing code review. If you're not gating builds on code review, you're not doing code review. If your developers are lazy and just approve the PRs as they land, you're not doing code review.

If you're thinking there is some magical line where LOC < n gets properly reviewed, but LOC > n doesn't, I assure you that's not the case.

And no one is turning off their approval gates in their build pipeline just to accommodate AI code.

