Hacker News | bjt's comments

The fact that these exist does not mean that they're immune from legal challenge. If the original creators wanted to sue, there are all kinds of claims that would have a decent shot in court (e.g. trademark, trade dress, design patents) besides "you copied our copyrighted source code." The clones exist more because people are being cool about it, and because there's not a strong economic incentive to challenge them. Those things can change at any time.

Sony vs Bleem. They already lost this case in court.

That was a very different case.

Out of the two claims, the only one that made it to appeals court was about whether it was fair use for Bleem to use screenshots of PS1 games to advertise its emulator (which was compatible with those games). The Ninth Circuit decided it was. But that's not relevant here.

The other claim was more relevant, as it was an unfair competition claim that apparently had something to do with Bleem's reimplementation of the PS1 BIOS. But the district court's record of the case doesn't seem to be available online, and the information I was able to find online was vague, so I don't know what exactly the facts or legal arguments were on that claim. Without an appeal it also doesn't set precedent.

If there were a lawsuit over OpenTTD, it would probably be for copyright infringement rather than unfair competition, and it would probably focus more on fair use and copyrightability. For fair use, it matters how much something is functional versus creative. The PS1 BIOS is relatively functional, but a game design and implementation are highly creative. On the other hand, despite being creative, game mechanics by themselves are not copyrightable. So it might come down to the extent to which OpenTTD's code was based on the reverse-engineered original code, as opposed to being a truly from-scratch reimplementation of the same mechanics. Visual appearance would also be relevant. Oracle v. Google would be an important precedent.


FreeBSD, NetBSD, and OpenBSD: at first, when every BSD OS was just part of 386BSD, it contained AT&T code. That code was rewritten, replacing every proprietary part, and after that (and after noticing BSD 4.4 was incomplete) we got a clean FreeBSD and NetBSD, with OpenBSD coming from a NetBSD fork.

Another similar case, on much the same grounds, was GNU, which together with Linux completed an OS, albeit in a hacky way, since the original plan was GNU + Hurd; both are reimplementations of Unix. Same sh-derived shell, but extended. Kinda like OpenTTD. We have GNU Coreutils, Findutils, GNU AWK reimplementing and extending AWK (even when AWK was proprietary), GNU Zip, Tar... the list goes on and on.

Oh, another one: LessTif vs Motif. Same UI, very close to Motif 1.2 in order to be interoperable. Today it doesn't matter, because nearly a decade ago Motif was relicensed under the LGPL, but tons of libre software depending on proprietary Motif ran seamlessly with LessTif libraries, except for some rough edges/bugs. One of the best-known examples was DDD, a GUI for GDB.


You can get some good guesses from the comment itself.

> I assumed the writer was a journalist or author with a non-technical background trying to explore a more "utopian" vision of where trends could go.

If you assume you're reading something from a person with intention and a perspective, who you could connect with or influence in some way, then that affects the experience of reading. It's not just the words on the page.


This reminds me of having the reverse experience with the viral 2017 New Yorker story "Cat Person" [0], which a (usually trustworthy) friend forwarded and enthusiastically told me to read: a waste-of-time shaggy-dog story, intentional engagement-trolling aimed at the intersection of its target readership's hot-button topics.* But why are we culturally expected to cut more slack to a human author, even a meretricious one? Both are comparably bad. The LLM-authored one needs a disclaimer at the top to set its readers' expectations right; then readers can make an informed choice.

(* "Cat Person" honestly felt like the literary equivalent of Rickrolling; I would have stopped reading it after the first page if not for my friend's glowing endorsement.)

https://news.ycombinator.com/item?id=27778689


(Sorry, the correct link for Roupenian's 2017 story "Cat Person" is at https://news.ycombinator.com/item?id=15892630 )

Oh god, that was insipid.

It had a very similar quality to the AI'd article from this thread: a sort of attempt at Being Literary that never really gets to the point of saying anything. It has the same feeling of wallowing, of over-indulging in its shtick.


> If source code can now be generated from a specification, the specification is where the essential intellectual content of a GPL project resides. Blanchard's own claim—that he worked only from the test suite and API without reading the source—is, paradoxically, an argument for protecting that test suite and API specification under copyleft terms.

This is an interesting reversal in itself. If the specification is protected under copyright, then the whole practice of clean-room implementation becomes invalid.


Even with LLMs, we need a way to translate between an imprecise plain-English description of a program and the completely unambiguous level of code. You need the ability to see when the LLM has resolved an ambiguity in the wrong direction and steer it back. If you can't speak code, that's going to be a very error-prone process.


That's not saying that Gemini is profitable though.


Gemini is now at the top of search results.


It's not about the .online TLD being "weird". The problem is that it was free. That's going to attract a swarm of fraudsters, spammers, etc, and then turn into a strong "this is probably fraud" signal in all kinds of fraud scoring systems.

There are lots of domains out there other than .com that are just fine.


.online, .top, .xyz, .info and .shop are some of the top TLDs that scammers use, precisely because their rock-bottom registration fees make them attractive for sites with a shelf life of a few hours or days before being blocked. As a result, many places apply a blanket "suspicious" flag to fresh domains under these TLDs.
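To make that concrete, here's a toy sketch of the kind of heuristic a fraud-scoring system might apply. The TLD list matches the one above, but the weights and function are invented for illustration, not taken from any real product:

```python
# Hypothetical fraud-scoring heuristic; weights are made up.
RISKY_TLDS = {"online", "top", "xyz", "info", "shop"}

def domain_risk_score(domain: str, age_days: int) -> int:
    """Return a crude risk score; higher means more suspicious."""
    score = 0
    tld = domain.rsplit(".", 1)[-1].lower()
    if tld in RISKY_TLDS:
        score += 50   # cheap TLD favored by throwaway scam sites
    if age_days < 30:
        score += 30   # fresh registration: typical scam shelf life
    return score

print(domain_risk_score("example.online", 3))   # 80: fresh + cheap TLD
print(domain_risk_score("example.com", 3650))   # 0
```

The point being that a legit site on one of these TLDs starts out with a penalty it did nothing to earn.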

If you plan on building a legit site, do not use any of these cheap TLDs.


Paying through the nose for a .com that is remotely memorable and easy to spell is not a great path forward for a hobbyist or someone who simply wants their own domain for email.

I know someone with a .org domain, and even he has a ton of trouble with false spam flags on his email because it doesn't come from a big email provider. He's been blacklisted a couple of times and regularly gets flagged as spam. I'm surprised he hasn't given up after dealing with this stuff for 25 years.

These new TLDs, I thought, were supposed to open up more options for regular people to get a domain that is semi-decent. Instead they’re essentially useless. Some of the prices are also still insane, due to assumed “premium” status or domain squatters.

There has to be a better way to police this stuff.


If you live in the West/developed world, the solution for hobbyists, small projects, and individuals is generally to use a local ccTLD. I'm from Australia and use a `.au`. Between `.au` (which they opened up recently) and `id.au`, it's not hard to find a memorable, useful URL for about $20/year, since people and companies have mostly kept to the `.{org,com,net}.au` names.

I see a lot of .fr, .de, .jp (and many other European ccTLDs) used by people from those places for their hobbyists/small projects/individual purposes. The regulators and operators of these domains tend to be pretty decently reputable. They often require proof of either local residence/citizenship or local business, which keeps domains more available, at the cost of requiring you to hand over some identifying information.

Now, for whatever reason, I don't really see `.us` in use at all, so that's potentially a big exception to the initial premise for people from the US. I presume it's due to a combination of it being operated by GoDaddy and the fact that `.com` and `.org` are de facto US ccTLDs.


Try finding a pithy domain for under 10,000 these days. I tried a week ago and had to settle for something a lot longer than I wanted, and even then it was from outside the common three-letter TLDs.


That's probably what happened here: either the OP's domain was previously used for shady activity, or the almost-free stigma puts the whole TLD on a grey list of high-risk assets. That probably also explains the registrar's nuclear response (suspension).

Free is good, but sometimes it's not.


Moderation and recommendation are not the same thing.


When you have a feed with a million posts in it, they are. There is no practical difference between removing something and putting it on page 5000 where no one will ever see it, or from the other side, moderating away everything you wouldn't recommend.

Likewise, if you have a feed at all, it has to be in some order. Should it show everyone's posts or only people you follow? Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?

There is no intrinsic default. Everything is a choice.
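To illustrate the "everything is a choice" point, here's a toy sketch (all names and data invented): however a platform frames it, a feed is just some sort over posts, and every sort key is an editorial decision that produces a different page one.

```python
# Two of the choices above, made explicit. Data is invented.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int
    likes: int

posts = [
    Post("alice", 100, 2),
    Post("bob", 90, 500),
    Post("carol", 110, 1),
]
following = {"alice", "carol"}

# Choice 1: reverse-chronological, only accounts you follow.
chrono = sorted((p for p in posts if p.author in following),
                key=lambda p: p.timestamp, reverse=True)

# Choice 2: global "popularity" -- same posts, very different ordering.
popular = sorted(posts, key=lambda p: p.likes, reverse=True)

print([p.author for p in chrono])   # ['carol', 'alice']
print([p.author for p in popular])  # ['bob', 'alice', 'carol']
```

Neither ordering is the "neutral" one; the platform picked a sort key either way.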


I remember back when Google+ had just launched and it had promoted content: content not from my 'circles' but random other stuff. I walked out and never looked back.

Of course, Facebook started doing the same.

The thing is, anything from people you haven't explicitly subscribed to should be considered advertorial, and the platform should be responsible for all of that content.


I think maybe you shouldn't have a feed with a million posts in it? Like how many friends do you have? And how often do they post?


"We have a million pieces of content to show you, but are not allowed to editorialize" sounds like a constraint that might just spark some interesting UI innovations.

Not being allowed to use the "feed" pattern to shovel content into users' willing gullets based on maximum predicted engagement is the kind of friction that might result in healthier patterns of engagement.


While I agree "There is no intrinsic default. Everything is a choice." and "There is no practical difference between removing something and putting it on page 5000" and similar (see my own recent comments on censorship vs. propaganda):

> Should it show everyone's posts or only people you follow?

Only people (well, accounts) you follow, obviously.

That's what I always thought "following" is *for*, until it became clear that the people running the algorithms had different ideas because they collectively decided both that I must surely want to see other content I didn't ask for and also not see the content I did ask for.

> Should it show posts by popularity or something else? Is "popularity" global, regional, only among people you follow, or using some statistics based on things you yourself have previously liked?

If they want to supply a feed of "Trending in your area", IMO that would be fine, if you ask for it. Choice (user choice) is key.


Early-days Facebook was simple: 1) you saw posts from all the people you were connected to on the platform, 2) in reverse-chronological order.

I can tell you it was a real p**r when they decided to do an algorithmic recommendation engine, as the experience became way worse. Before, I could follow what my buddies were doing; as soon as they made the change, the feed became garbage.


The way modern social media platforms are designed, yes they are.


The point is that they don't have to be. You can moderate (scan for inappropriate content, copyrighted content, etc) without needing to have an algorithmic recommendation feed.


There's OpenClaw the codebase, and there's OpenClaw the community. They could build the same program very easily (as evidenced by the number of clones out there already). That part's not worth paying much for. But redirecting the whole enthusiast community around it? That's worth a lot.


Exactly. My point is that it might not be about the guy.


The point is that "who gathers it" should be irrelevant.

The government shouldn't be able to buy data that would be unconstitutional or unlawful for them to gather themselves.

On the other hand if a company is just aggregating something benign like weather data, there's no need to bar the government from buying that instead of building it themselves.


> The government shouldn't be able to buy data that would be unconstitutional or unlawful for them to gather themselves.

Now that sounds like a good argument to make in court! How do we do it?


I had a similar thought, but I think there's a key difference here.

Traditional karma scores, star counts, etc, are mostly just counters. I can see that a bunch of people upvoted, but these days it's very easy for most of those votes to come from bots or spam farms.

The important difference that I see with Vouch is not just that I'm incrementing a counter when I vouch for you, but that I am publicly telling the world "you can trust this person". And if you turn out to be untrustworthy, that will cost me something in a much more meaningful way than if some Github project that I starred turns out to be untrustworthy. If my reputation stands to suffer from being careless in what I vouch for, then I have a stronger incentive to verify your trustworthiness before I vouch for you, AND I have an ongoing incentive to discourage you from abusing the trust you've been given.
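To sketch the incentive structure I mean (this is a hypothetical model, not Vouch's actual implementation; names and numbers are invented): a vouch publicly links my reputation to yours, so your misbehavior costs everyone who vouched for you.

```python
# Toy model of staked vouching; all weights are illustrative.
from collections import defaultdict

reputation = defaultdict(lambda: 100)   # everyone starts at 100
vouches = defaultdict(set)              # subject -> set of vouchers

def vouch(voucher: str, subject: str):
    """Publicly stake some of the voucher's reputation on the subject."""
    vouches[subject].add(voucher)

def penalize(subject: str, amount: int):
    """Misbehavior hurts the subject and, less severely, their vouchers."""
    reputation[subject] -= amount
    for voucher in vouches[subject]:
        reputation[voucher] -= amount // 4   # vouchers share the downside

vouch("alice", "mallory")
penalize("mallory", 40)
print(reputation["mallory"], reputation["alice"])   # 60 90
```

Contrast that with a star or upvote, where `penalize` touches only the subject: the voter risks nothing, so there's no incentive to verify before clicking.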

