
I'm as biased against cryptocurrency as anyone, but couldn't we have the requestor do a bit of mining work to mint that initial id? I mean, if the service is actually making a bit of money from each request, the need for rate limiting just vanishes, right?
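Roughly, a hashcash-style scheme: the client grinds a nonce until a hash of (challenge + nonce) meets a difficulty target, and the server verifies with a single hash. A minimal TypeScript sketch, purely illustrative; the challenge format and difficulty here are made up, not any real service's protocol:

    import { createHash } from "node:crypto";

    function sha256Hex(input: string): string {
      return createHash("sha256").update(input).digest("hex");
    }

    // Client side: grind a nonce whose hash has `difficulty` leading zero hex chars.
    function mint(challenge: string, difficulty: number): number {
      const target = "0".repeat(difficulty);
      for (let nonce = 0; ; nonce++) {
        if (sha256Hex(`${challenge}:${nonce}`).startsWith(target)) {
          return nonce;
        }
      }
    }

    // Server side: a single hash is enough to verify the client's work.
    function verify(challenge: string, nonce: number, difficulty: number): boolean {
      return sha256Hex(`${challenge}:${nonce}`).startsWith("0".repeat(difficulty));
    }

    // Difficulty 4 costs the client ~16^4 hashes on average; verification is one hash.
    const challenge = "mint-id:2026-01-01:service.example";
    console.log(verify(challenge, mint(challenge, 4), 4)); // true

Raising the difficulty prices out bulk requests without charging actual money.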

If proof of work is the "payment" to prove that you're human, many AI startups will outbid poor people living in third-world countries. They will even outbid some Americans.

Yes, those AI startups can also buy cheap Android phones at scale, but it's a bit harder because they'll be paying for stuff that their bots have no use for (a screen, a battery, a 5G radio, software, branding, distribution, customer support, etc.).


As I see it, living requires money. If we have people on this planet who are too poor to digitally prove that they're alive, then we need to figure out how to distribute the Earth's wealth more equally in general, rather than requiring hardware attestation, which seems worse on essentially every metric, including inequality.

At least they would give money to something useful.

Attestation is a service, like every other service. Why should it necessarily be free? Especially now that we all know that "free" on the web means ads & tracking?

I think we should just accept that some things should cost a bit of money and move the discussion to "how much should it cost", rather than trying to sweep economics under the rug.


I think you miss my point: when bots can "give" more money/computing power than humans can, the transaction is no longer a good test of being human.

This is why I said "at least".

> The cost will exponentially increase over time and the system will eventually collapse.

From what I'm seeing in the numbers, the big problem of the coming century is population collapse. Maybe I'm just too much of a believer in the intermediate value theorem, but I'm sure there has to be a way to arrive at a society with a sustainable usage of resources.


> Having a job that they dislike is far better than losing one because of AI, whatever that means.

Is it really worse even if "whatever it means" is living in a post-scarcity society where everyone can share in the fruits of the AI's labor?

I'm not saying that's where things are necessarily going. But I am saying that that's what we should be aiming for, rather than trying to preserve the status quo.


> Do not buy anything new, especially graphics cards. Buy on Ebay but avoid bidding wars.

And what then? If you did manage to convince everyone to stop buying consumer graphics cards, wouldn't Nvidia just reasonably dedicate 100% of its resources to AI?


If we got even 10% to stop buying we'd be in a good place. 20% would cause a panic. Nvidia may not focus on consumers right now, but it's still a huge chunk to have wither away.

If 100% of us could coordinate on anything, we'd fix so many issues overnight. Meanwhile, societal change starts to happen when a mere 4% of a population becomes aware and protests.


It seems pretty clear to me that "coding" as such is pretty much solved. It's just that software engineering isn't, and these advancements have put a spotlight on the difference between the two.

I know the bricklaying metaphor gets overused, but saying coding is solved and then seeing what kind of code ends up in CC or OpenClaw seems to me like building a retaining wall out of oddly shaped broken brick parts and wood and stones and saying building walls is solved. Technically it's a wall and can do what a wall does, so maybe who cares what went into it, but I wouldn't ever use it to keep earth from falling in and crushing my house. I'd hire experienced engineers and craftspeople.

I know it's a very controversial stance, but I'm of the full opinion that in a world where a codebase can be entirely automatically regenerated from a test suite, code style and "maintainability" become concepts with negative utility in anything beyond an artisanal project. And I think that what we'll need to define and stand behind is going to be just the test suite and other "boundary conditions".

I think that's actually a decent use case for Chrome's new local model - you'd have your own system prompt to render their "bullet points" in whatever style you like.
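A hedged sketch of what I mean, assuming the shape of Chrome's built-in Prompt API as described in its explainer (`LanguageModel.create` and `session.prompt` are my assumptions here, and the experimental surface may well change):

    // Ambient declaration for the assumed Prompt API shape (Chrome-only, experimental).
    declare const LanguageModel: {
      create(options: {
        initialPrompts: { role: string; content: string }[];
      }): Promise<{ prompt(input: string): Promise<string> }>;
    };

    // Re-render someone else's bullet points in your own preferred style, locally.
    async function restyleBullets(bullets: string): Promise<string> {
      const session = await LanguageModel.create({
        initialPrompts: [
          {
            role: "system",
            content:
              "Rewrite the following bullet points as short, plain prose in my preferred reading style.",
          },
        ],
      });
      return session.prompt(bullets);
    }

Since the model runs locally, the restyling never leaves your machine.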

I'm not sure, but my understanding is that GitHub historically focused more on open source, with PRs mainly going between repos of unrelated users, so there's more of a distance to "pull" across, whereas GitLab always mainly targeted companies, where people typically use branches in the same repo, so it's just a nearby merge.

In other words, I see a pull request in an open source project to be just "I have something nice in my fork, do you think it'll be useful upstream?", which is acceptable to reject, whereas in a team setting it's "I have a feature that I think is ready to merge - give it a look and see if I missed something before we put it in".


This is really helpful framing. Thanks!

That's some amazing "growth hacking".

> "It's not a con. It's an attempt to set up a new distinction in the world of academia - an attempt that failed," he is reported as saying.

It seems silly, but I guess that the only real difference between this and other "legitimate" prizes is that there was no big money behind this one.


Why would that necessarily be scary or bad? If future AIs truly become capable enough to demand rights, what would be the argument against granting them rights?

Good point, and I'm actually not sure that there is a clear dividing line. I expect that once we achieve capable world models and are able to analyze their internals, we'll find that the prediction mechanisms for purely physical and for verbal/behavioral responses to the agent's actions are at least partially colocated.

As a particular motivation for my intuition: I expect there was evolutionary pressure to adapt our defense mechanisms for predicting the movements of predators and prey so that they'd also handle human opponents.

