Hacker News | fweimer's comments

It's more complicated. There is no single correct way to check for errors. Some standard library functions can return data and an error: https://pkg.go.dev/io#Reader

This is true, but it feels like a mistake. It's too late to change now, of course, but I feel like (0, nil) and (!=0, !=nil) should both have been forbidden. The former is "discouraged" now, at least. It does simplify implementations to allow these cases, but it complicates consumers of the interface, and there are far more of the latter than the former.

You can still override malloc and call __libc_malloc if you do not want to bother with dlsym/RTLD_NEXT. These function aliases are undocumented, but for a quick experiment, that shouldn't matter.


The Romani people. Perhaps even Jews, depending on attitudes towards Israel.

Probably other groups who are even less visible, so we don't know about the challenges they face. The 19th century push for nation states has marginalized and tried to erase many groups.


I think it's pretty telling how Europeans treat those people now, and how Israeli people treat Palestinians today.

Some of us learned to be better.


wake me up when Spain has given its minorities a path to self-determination, and when you do something about the savages that loot and destroy everything in your previously held colonies, like Wagner and the Russian Africa Corps

Ah, so now Israel treats Palestinians like Spain treats Catalans?

I think we might live on different planets.


And the arms industry has been pushing smart mines for decades, so that they can keep selling them despite the really bad long-term consequences (well beyond the end of hostilities) and the Ottawa Treaty ban. In the end, land mines keep killing people even though the mines are supposed to be sufficiently advanced not to target persons.

From a security perspective, the “return to base” part seems rather problematic. I doubt you'd want these things to be concentrated in a single place. And I expect that the long-term problems will be rather similar to mines, even if the electronics are non-operational after a while.


"Smart mines" specifically can be designed so that they're literally incapable of exploding once a deployment timer expires, or a fixed design time limit is reached.

It just makes the mines themselves more expensive - and landmines are very much a "cheap and cheerful" product.

For most autonomous weapons, the situation is even more favorable. Very few things can pack the power to sit for decades waiting for a chance to strike. Dumb landmines only get there by virtue of being powered by the enemy.


> A human can't search 10 apps for the best rates / lowest fees but an agent can.

Why would those apps permit access by agents?

It's always been the case that “agents” could watch content with ads, so that the users can watch the same content later, but without ads. The technology never went mainstream, though. I expect agents posing as humans would have a similar whiff of illegality, preventing wide adoption.

Local agents running open weights models won't really work because everybody will train their services against the most popular ones anyway.


What whiff of illegality? Personal recording and ad skipping DVRs are completely legal products (at least in the US). Courts have ruled on this.

As a U.S. consumer, can you buy a DVR that can record HDCP streams (without importing it yourself from a different country)? Even one that does not automatically edit out ads?

If I search "HDCP remover" on Amazon I see tons of results for $15-$30, sure. Reviews say they work as advertised. That typically exists in a different space from DVRs, since it's not relevant for broadcast TV (AFAIK there's nothing for DVRs to remove in the first place), but it'd be easy enough to chain one if you needed to.

The IEEE 754 standard covers decimal floating point arithmetic, too. Decimal floating point avoids issues like 0.1 + 0.1 + 0.1 not being equal to 0.3 despite usually being displayed as 0.3. Maybe it's reasonable to use that instead?

Some earlier spreadsheets such as Multiplan used it (but not in the IEEE variety) because it was all soft-float for most users anyway.


Those tools exist, but you have to pay by the token. I'm not sure if they scale financially to large code bases such as the Linux kernel. They are far more accessible than Coccinelle or Perl, though.

Honestly, I'd rather use Coccinelle, where I understand exactly what it does, when it does it, and why it does it…

I would also rather use a tool that I trust than delegate the task to an unreliable third party.

But to the person bringing up AI: you don't have to choose one or the other! Models use tools. Good tools for people are usually also good tools for models. The problem models have in learning to use tools like Coccinelle effectively is that there are too many tools and not enough documentation for each one. If there were a unified, standard platform, however, many humans would start to gain abilities through fluent tool use, and enough of those people would write docs and blog posts. Where people lead, models follow. Once a large enough corpus of writing documenting a single platform existed, the models would also be fluent, just as they are fluent in JS and React because of how large the web platform is.


How does LibreOffice handle ODF standardization? If they want to add a new feature that results in changes to how things are formatted visually, do they write papers to update the ISO standard for ODF, work with other office suite implementers to achieve interoperability, wait a couple of years for the new standard with the changes to be published, and finally turn on the feature for users?

My impression is that this is more or less how ISO standards are supposed to work. Personally, I don't want to work in such an environment.


Well, that's almost how it works, but of course without the waiting bits. The change would be added to the LOExt namespace, written to the document, and read on load. Then the change is proposed for inclusion in the next ODF version. Once that ODF version is released, LO would add support for it as well, changing the feature if needed. On the next save, the feature would be written using the ODF version instead of LOExt.

The process has its issues and could cause problems, but in practice I don't remember anyone reporting issues.


Pretty much, and yes, this is not a desirable path for progress.

But communists have an absurd love for bureaucracy, and their need to control is unlimited, so they'll argue to the death about stupid shit instead of, you know, actually competing.


There is the VEX justification Vulnerable_code_not_in_execute_path. But it's an application-level assertion. I don't think there's a standardized mechanism that can describe this at the component level, from which the application-level assertion could be synthesized. Standardized vulnerability metadata is per component, not per component-to-component relationship. So it's just easier to fix the vulnerability.

But I don't quite understand what Dependabot is doing for Go specifically. The vulnerability goes away without source code changes if the dependency is updated from version 1.1.0 to 1.1.1. So anyone building the software (producing an application binary) could just do that, and the intermediate packages would not have to change at all. But it doesn't seem like the standard Go toolchain automates this.
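With the standard Go toolchain, that bump is a manual step by whoever builds the final binary; the module path and versions below are purely hypothetical placeholders for the vulnerable dependency:

```shell
# Raise the minimum version of the (hypothetical) vulnerable module
# in the application's own go.mod; intermediate packages are untouched.
go get example.com/somedep@v1.1.1
go mod tidy   # reconcile go.sum and transitive requirements
```

Because Go selects the maximum of the requested minimum versions, the application's go.mod entry wins without any change to the intermediate packages' own requirements.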

