Considering the value and prominence of arXiv to the world, this seems low to me. More importantly, the rest of the staff needs to be well paid too, and if that's the ceiling, it's a bit concerning. It's crazy to me that people thought this was too high.
I started working in AI/ML about ten years ago. Reasonably early. Today, professionally and financially, I'm doing about as well as a typical programmer. I find the field interesting, so I have no regrets, but I tend to agree with OP.
I would love more granular data, like state or zip code. It would help settle a decade-old (and zero-stakes) dispute I have with a friend. I'm sure that's your top priority, so if you could get on that, that would be great.
I've had the same thought. The only major difference I can think of is the built-in camera making check deposits easier. It may also be that people were just generally using computers and the internet more over this same time period, although a lot of that is because of smartphones.
Hot take: being clueless is better than these essays make it out to be. The examples are all really socially annoying people (Michael, Dwight) but I've known some pretty nice and pleasant middle managers who had generally great lives. They probably could've gotten all of that with less work but perfectly hitting the Pareto frontier is quite difficult.
According to this theory, the Clueless are the ones who suffer the most.
They invest the most, they care about made-up goals nobody else cares about, they play by rules everyone else thinks are dumb, they feel loyal to a company that doesn't love them back, and because they are more invested in the company, they are the ones who feel the loss the most when the Sociopaths pull the rug.
I think it's actually the Losers who have it better: they are simply not invested enough, they are replaceable but also find their place in other companies, and in any case, failure affects us -- I mean, them -- less, simply because they are not invested as much and they never felt any loyalty.
"Loser" is a loaded term because it sounds like the cultural, lowercase loser ("so and so is such a loser!") but it actually means "loser in the game of maximum capitalist profit and power". But if you're not really playing that game, being a loser at it isn't so bad.
The Clueless is the person who actually believes his work makes a difference and wants to do a good job. Not necessarily a terrible way to live, although it should be acknowledged that the Loser frees up time and energy to devote to other things, notably family.
(According to the theory) it is a terrible way to live, because everything the Clueless believes is false.
The Clueless believe their work makes a difference, but it doesn't. They believe it matters that they do a good job, but it doesn't truly matter except for the advancement and power plays of the Sociopaths. They believe themselves "company men", and are loyal to a company that despises them and sees them as completely expendable.
The Losers understand this, and therefore devote their energy to other things outside work, where they find meaning in life.
(Again, I understand this is what the theory states and doesn't necessarily reflect reality. But I do think there's a kernel of truth to it.)
You are assuming that there's something bad about everything you believe being false. There's a fair amount of evidence that it's a good thing, e.g. religious people being happier and living longer.
Yeah, perhaps a better term for Loser is Abstainer, because the Sociopaths can also certainly lose at the game of maximum capitalist profit. The Loser/Abstainer just chooses not to play the game.
The problem with these theories is that they fall apart as soon as you start adding or modifying the types. Because they aren't actually correct, just simple and flattering.
Fully agreed. I think "Loser" is a misnomer. And indeed, going by the essay, the Sociopaths can also lose big... they are willing to risk it all for personal gain, but it can end very badly for them if they miss their window, their manipulations get exposed, or they decide to do illegal things to get ahead (high-profile cases that come to mind: Enron, Epstein, etc.).
The names come from a cartoon that predates Rao's essay. He simply reused them because they mostly work. Just like the Sociopaths are not all literal sociopaths, the Losers are not all literal losers.
Yes, I understand this. I was simply making it explicit; it seemed worth clarifying that neither Losers nor Sociopaths match the common definitions of those terms.
It's absolutely not a straw man, because OP and people like OP will be affected by any policy that limits or bans LLMs, whether or not the policy writer intended it. So he deserves a voice.
The fact that you are engaging in this thread shows me you have considered my opinions, even if you reject them. I think that's great, even in the face of being told I advocate for the collapse of civilization and that I want others to shut up and not be heard.
It is a bit insulting, but I get that these issues are important and people feel like the stakes are sky-high: job loss, misallocation of resources, enshittification, increased social stratification, abrogation of personal responsibility, runaway corporate irresponsibility, amplification of bad actors, and just maybe that `p(doom)` is way higher than AI optimists are willing to consider. Especially as AI makes advances into warfare, justice, and surveillance.
Even if you think AI is great, it's easy to acknowledge that all it may take is zealotry and the rot within politics to turn it into a disaster. You're absolutely right to identify that there are some eerie similarities to the "guns don't kill people, people kill people" line of thinking.
There IS a lot to grapple with. However, I disagree with these conclusions (so far), and especially that AI is a unique danger to humanity. I also disagree that AI in any form is our salvation and is going to elevate humanity to unfathomable heights (or anything close to that).
But, to bring it back to this specific topic, I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.
Sure. I don't necessarily think your opinion is radical. But it's also important to consider biases within oneself, especially when making use of text as a medium where the nuance of body language is lost.
The main thing that put me off about the comment was the outright dismissal of other opinions. That's rarely a recipe for a productive conversation.
>However, I disagree with these conclusions (so far), and especially that AI is a unique danger to humanity.
I don't think it's unique. It's simply a catalyst. In good times with a system that looks out for its people, AI could do great things and accelerate productivity. It could even create jobs. None of that is out of reach, in theory.
But part of understanding the negative sentiment is understanding that we aren't in that high-trust society with systems working for the citizen. So any productivity gains will only be used to deepen that distrust. Looking at the marketing of AI these past few years confirms this. So why would anyone trust it this time?
Rampant layoffs, vague hand-waves of "UBI will help" despite no structures in place for that, more than a dozen high-profile kerfuffles that can only be described as grifts that made millions anyway, and persistent lobbying to make it illegal to regulate AI. These aren't the actions of people who have the best interests of the public in mind. They're modern-day robber barons.
>I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.
I don't have a hardline stance on how organizations handle AI. But from my end, I hear that AI has mostly acted as a stressor on contributors trying to weed out the flood of low-quality submissions. AI or not (again, AI is a catalyst, not the root cause), that's a problem for what's ultimately a volunteer position that requires highly specialized skills.
If the choice is between banning AI submissions, restricting submissions altogether with a different system, or burning out talent trying to review all this slop, I don't think most orgs will choose the last option.
"What happens when an LLM outputs a patented algorithm?" remains a huge land mine out there, particularly since patent infringement does not require intent or even knowledge, and these models have trained on every patent ever granted.
If you can prove that your LLM did not learn from the patent (e.g. its training cut-off predates the patent), then the LLM outputting the algorithm (or product, etc.) would be pretty good evidence that a practitioner of ordinary skill in the field, or whatever the exact legal wording is, would find the whole invention obvious.
I'm very happy to read about this progress, but I don't find it particularly surprising. The big labs optimize for accuracy/high scores on benchmarks first; I automatically expect that (with some research effort) a model with 100x fewer parameters can achieve the same scores.
Not everyone is supposed to read every single news story. There will always be someone who didn't see it, but that is not my point.
It would feel weird to see this as a headline in a newspaper or on TV today, but maybe that is just me and people like to read news from last year.