This weekend, I received confirmation that, for data analysis and modeling, coding agents represent a qualitative leap forward comparable to the widespread adoption of personal computers.
I stopped doing scientific research years ago, but before moving on to other things I had, like many others I imagine, certain problems I wanted to study and, for lack of time and other concerns, would never have picked up again.
I launched Codex, and it managed to untangle old files and analyses, track down datasets I no longer knew the location of, run analyses under my guidance, and build visualizations that would have taken me days, if not weeks, to complete.
Of course, I have experience, I know what needs to be done, and I had to correct some errors made by Codex (I am paying for Codex and Gemini now, but I could go back to paying for Claude too), but I was amazed by the quality of the analyses.
To give an example, I had a dataset of weather observations that I had downloaded from a website years ago, hundreds of time series across weather stations.
Codex managed to recover the missing time series, even though the website is no longer active: it first compared the recovered data with the data I already had, and it also found a digital elevation model.
Now I will guide Codex in developing a spatio-temporal model of extreme events, one that, without Codex, I would never have had the time or inclination to build.
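For readers unfamiliar with this kind of analysis, a minimal sketch of the classical starting point (fitting a GEV distribution to block maxima and deriving a return level) might look like the following. This is an illustration only, using synthetic data and SciPy's `genextreme`; it is not the actual analysis described above, and a real spatio-temporal model would add covariates and spatial structure on top of it.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Synthetic stand-in for one station's annual maxima; in the real
# workflow these would come from the recovered weather time series.
annual_maxima = genextreme.rvs(c=-0.1, loc=35.0, scale=2.0, size=60,
                               random_state=rng)

# Fit a GEV distribution to the block maxima (the classical first
# step in extreme-event modeling, before adding spatial structure).
shape, loc, scale = genextreme.fit(annual_maxima)

# 100-year return level: the value exceeded with probability 1/100
# in any given year, i.e. the 0.99 quantile of the fitted GEV.
return_level_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)

print(f"GEV fit: shape={shape:.3f}, loc={loc:.2f}, scale={scale:.2f}")
print(f"Estimated 100-year return level: {return_level_100:.2f}")
```

Note that SciPy's shape parameter uses the opposite sign convention from the usual extreme-value literature (its `c` is minus the standard shape parameter).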
Anyone who has worked in the big tech industry knows that probably more than half of the workforce performs tasks that, in essence, are superfluous.
But these things happened: 1) Musk has shown that Twitter can operate with 5% (approximately?) of the workforce he inherited; 2) laying off a lot of people used to be seen as a sign that the company was in trouble, but not anymore, because 3) artificial intelligence makes point 2) not a semi-desperate move but a forward-thinking adjustment to current and future technological development.
I've been out of work for almost a year now, after being laid off, and I think it's very unlikely that I'll ever return (not by my choice, but theirs) to work in the tech industry as a W2 employee. Oh well.
1) This is, by any source I can find, incorrect. Twitter had ~8,000 employees when Musk bought it. Layoffs trimmed that to a low of around 1,500 employees (19%), and today it has around 2,800 employees.
Also worth mentioning that a lot of Twitter's products are built on X.ai, which has 1,200 core employees on Grok and 3,000+ on the datacenter build-out side.
Also, if you put a product in maintenance mode, you can easily get away with a fraction of your devs. Most people, at any given time, are working on some version of something new.
Also have to consider that it's now private, which removes the pressure of having to show any semblance of a profit or, critically, share usage or advertising statistics, which could be (and probably are) down dramatically since the acquisition. Being private allows the fictitious storyline to persist that "we're doing great and everyone is using our products."
> Musk has shown that Twitter can operate with 5% (approximately?) of the workforce he inherited
Is X profitable? I don't think the argument was that Twitter couldn't _operate_ with 5% of the workforce (i.e. skeleton sysadmin crew), the issue was whether Twitter could make money and remain a viable business.
It seems that Twitter is no longer a viable business (i.e. less advertising spend, decline in users - especially high-value advertiser targets who now spend more time on LinkedIn, etc).
> laying off a lot of people was seen as a sign that the company was in trouble, but not now
I agree that saying you are laying people off because of AI is a lovely narrative for failing companies!
One needs to tease apart the effects of Musk and Musk's "policies" on advertising investments, the number of users, the boom and slow decline of social media platforms (see Facebook and Instagram coming down from their peaks, TikTok gaining ground, though people already seem tired of it and waiting for something new), and the technical/technological part of the enterprise.
I don't like layoffs, particularly when I am the one getting laid off (not at X), but the X experience, for a casual user like me, did not get worse; and if it did, it was not because there are far fewer people working at X. One may say, "I don't like the algos," but that's not coming from a lack of engineers; it is a policy.
The recommendation algorithm they implement is a choice they make, it is not that if they had more engineers they would deploy a “better” one.
Every recommendation algorithm is, in the end, “bad” in some way.
The TikTok algorithm was considered the non plus ultra among recommendation algos; now you cannot watch a cat video on TikTok for more than 5 seconds without the next 50 videos it serves you being cats.
The Netflix recommendation algorithm has not shown something to me that I considered hidden but interesting in years. They just show you whatever they want to push, mostly (I worked there).
You buy a pan for cooking steaks on Amazon and, for some reason, the algorithm recommends buying it along with stroboscopic lights.
I didn't say they were all working on the algorithm; there were a lot of people working in various content-related jobs: moderation, the algorithm, partnership management with content creators, ad sales, and more.
Without getting into a she-said/he-said debate, I don't believe traffic is shrinking because of the viability of fewer engineers.
If that were the case, it would also be easy to hire hundreds more. With the confusing mix of X.ai, Grok, and SpaceX, I don't think anyone would notice.
X seems to be much more relevant to social and political debate than any other social media platform, which, despite a declining user base, makes it an extremely valuable tool for Musk and his circle.
It may seem like I'm defending or supporting Musk, but that's not my point. What I can say is that Musk made a huge bet when he substantially, even dramatically, reduced X's workforce, and I think he won that particular bet.
X has added more useful functionality in the last year or two than Twitter did in its entire existence, and it is also much snappier and more reliable, all with 5% of the workforce. I don't put this down to AI, though; it's more like a very lean, talented, and motivated team without layers of pointless middle people. Add AI into the mix, and it's naturally going to be the way forward. Companies that stay bloated and don't utilise AI will die.
I am being rejected every day, subjecting myself to the humiliating ritual of modern times, by companies that I believe could make the most of my talent (my last title was Director of AI; before that, I was a Staff ML Scientist at a FAANG and an award-winning scientist).
They all seem rather disappointed, at least in the automated rejection emails (from unmonitored mailboxes, of course) they send me, that they have found other candidates better suited to the position. It seems we are both disappointed, after all.
Not all is lost, though. I am in the enviable position of having perfect health and decent savings.
Which companies are you applying to? Even in this new world, titles still matter a great deal. A former “director of AI” and FAANG data scientist is valuable even before considering whether you are competent.
The part that stands out is that you are getting rejection emails from automated systems. With your pedigree, you should be talking directly to whoever is hiring; you've earned the right to bypass the automated system in the eyes of most people hiring.
When we are hiring and receive hundreds of applications, we only manage to review a few and send the same rejection to everyone else, even though we haven't read their applications.
At a minimum, you should be getting conversations with the teams you are applying to and then a personal rejection. If you are not getting beyond screening, with your credentials, it is a process issue.
Have you tried going direct to the teams? For larger companies, that can be via LinkedIn, and for startups / smaller companies you should be able to find their email.
If I were in your position, I would be identifying companies I want to work at that are hiring, and then send an email to their most senior technical person (probably CTO). You are talking senior-to-senior, and if they are interested, you will bypass the whole automated system. I can think of a couple of companies that regularly post in the HN hiring threads that would be a great fit for you.
Any suggestion that you are too expensive or overqualified sounds like a tidy explanation, but even if those things are true, you should still be getting interviews and personal rejections. Hiring is a painful process for most companies; the chance to talk to someone qualified is a nice treat.
At first, I was a bit selective about my applications (meaning I was applying to maybe 5 positions per week, not one per month), but in the last six months, I have sent dozens of applications for positions (real or fake, I don't know) that I thought were appropriate for my skills and experience (director, manager, some senior IC positions, not even staff).
I have no problem relocating; I could do so in 15 days (I currently live in California).
I also contacted hiring managers via email and LinkedIn, but I received virtually no response.
At this point, you might think that there is something wrong with me (professionally speaking), that I have a bad reputation of some kind, but that is not the case.
The market is clearly telling me that there is no need for someone with my credentials on paper. Many people find jobs, even quite easily, and millions of people are employed in the tech industry. But thousands of people in the tech industry are also looking for jobs every day and have a stronger network than I do. Either they are looking for you, or they are looking for someone like you, and in the latter case, there are you and hundreds of others.
Have I really tried everything? No, but I've tried a lot.
I want to make it clear that I was presenting my case in response to a question and that this is not a “poor me” post (in fact, I am anonymous and there are no links to my real identity). I am in a privileged position: I have decent savings and can get by for quite a reasonable time, but it is certainly quite disconcerting, disorienting, frustrating, and, frankly, sometimes humiliating not even to get an interview, or a call back.
Thanks for your frank comments. There are a lot of people in your position for the first time, I think, and many more to come. It sounds quite undeserved and is rather a symptom of our poor system. All I can say is I think it's likely that someone like you (who I read as both cognitively and emotionally intelligent) is likely to adapt and will thrive eventually, both due to your characteristics, and because the system isn't that broken, and will also adapt. Good luck, and don't take it personally.
Could it be that these other candidates work for cheaper? They might be scared of your credentials. It's disheartening that this field has come to a race to the bottom, accelerated by AI. It's not the juniors that are at risk, it's the seniors.
This could be a problem, but only if I had interviews or even just a phone call from a recruiter. But I'm not even getting to that stage. I just get rejection after rejection via email for every type of company and position I apply for.
Dozens of rejections, and you get to a point where it becomes a waste of time to even apply. Also, many of the job postings are clearly fake; companies like Capital One, JP Morgan, or NBC, just to name the first three companies that come to my mind, have been advertising the same positions for months, if not years.
What happens is that you fall out of the loop and become invisible, if not an outcast that no one wants to touch. You reach out to your network and you receive cold indifference; all the "friends" you thought you had are not interested in providing any factual support (e.g., strong referrals). Basically, it comes to a point where you are begging for attention and some support.
What's discouraging is that so many people in leadership positions have terrible leadership skills or competence. Not that others should assume I possess those skills (I'm clearly biased in this case), but they certainly don't.
The world is what it is, and plenty of people get laid off and are able to get interviews and find jobs. I am certainly in part responsible for the situation I am in (not in the sense that I did anything shameful or despicable, in the sense that maybe I should have spent time developing a network different from the one I have), but it is not a fun situation to be in.
Something is obviously very wrong if you're not even getting to the first (zeroth?) stage. It could be something very obvious. Have you tried asking for professional help with your resume / CV?
It may appear so, in the sense that I would think the same if I were the one reading my comments, but, even if I am sure that my resume could be improved (I worked on it multiple times, asked colleagues to have a look, as well as getting some feedback from LLMs), there is nothing obviously wrong with it.
I interviewed dozens of candidates over the years, and I have seen some crazy resumes (10 pages, every technology under the sun listed, dubious certifications). Mine is certainly not one of them.
People talk crap about shareholders on here but in reality, shareholders would hate to know management are rejecting highly qualified candidates for people they can 'manage' better.
Excuse me for making some pretty sharp statements. Twitter is objectively a worse product now. Musk is a deeply uncreative person who doesn't seem to actually like people and attracts people to him who are the same way. This shows in his truly uninspired products. Tesla is way behind the Chinese now. xAI is a copycat. SpaceX seems to be recycling old Soviet ideas. Must I go on?
I have no professional, personal, or parasocial ties to Musk, so you can safely continue without this having any effect on me beyond a normal conversation, even if contentious.
I would limit the conversation to X, as it is the company that started the famous “you can do the same with 5% (or something like that) of the workforce” movement.
I don't think X is objectively a worse product now, in terms of its technical and technological aspects. This is different from saying that users were better/worse before, and the same goes for the algorithm or the type of information that is “pushed” on the platform.
Let's be honest: people and advertisers left X not because their product was unusable, had a bad UX/UI, etc., but for other non-technical reasons.
Autogenic training is a practice that works wonders for your ability to control yourself under pressure, whether in specific situations, in the spotlight, or under more mundane pressures. Only after consistent training (but not gruelling! It doesn't require tedious 10-day meditation retreats) can you finally notice how much mental and physical tension, and fears, real or imagined, are present in your life.
In the same way that we practice motor skills (which are also mental skills) separately (think of preparatory running exercises as pedagogical tools for sprints) and then integrate them into performance (soccer players train with specific drills for ball control and one- or two-touch passing), we should practice mental skills first in isolation and then integrate them into performance. Dave Alred, the famous coach who was once the kicking coach for Wilkinson, the fly-half of the English national rugby team, wrote about this in his book "The Pressure Principle."
Similarly, the autogenic training skills we develop must first be built in isolation and then integrated into the performance itself (though integration begins on day one). That is, it is not enough to be relaxed in bed, even if that relaxation carries over into "real life"; relaxation, which does not mean a state of torpor, far from it, must become part of every activity and challenge.
You don't believe the current version of Claude Code will be able to write complex software on its own.
On the one hand, there is a lot of hype, an incredible amount, actually, but on the other, we have been observing in real time a technological miracle that gets better by the week.
We have no idea what, five years from now, the coding agent will be able to develop.
After two weeks of viral posts, articles, and Mac Mini buying sprees, it kind of disappeared from people's consciousness (and probably from their tooling as well), as has happened so far with every AI product that was not an LLM.
A couple of months ago, Gemini 3 came out and it was "over" for the other LLM providers. "Google did it again!", said many; but after a couple of weeks, it was all "Claude Code is the end of the software engineer".
It could be (and in large part is) an exciting technological development, unprecedented in its speed, but it is also all so tiresome.
Architects went from drawing everything on paper to using CAD, not over a generation, but over a few years, after CAD and computers got good enough.
It therefore depends on where we place the discovery/availability of the product. If we place it at the time of prototype production (in the early 1960s for CAD), it took a generation (20-30 years), since by the early and mid-1990s, all professionals were already using CAD.
But if we place it at the time when CAD and personal computers became available to the general public (e.g., mid-1980s), it took no more than 5-10 years. I attended a technical school in the 1990s, and we started with hand drawing in the first two years and used CAD systems in the remaining three years of school.
The same can be said for AI. If we place the beginning of AI in the mid-1980s, the wider adoption of AI took more than a generation. If we place it at the time OpenAI developed GPT, it took 5-10 years.
I do not doubt that AI and AI-powered and -native applications will become part of the fabric of our personal and professional lives.
What I don't understand is why, outside of "because I can", people need to automate parts of life I didn't know existed.
- Why, outside of edge cases, do people have to automate the payment of bills beyond the automatic cc processing?
- How many times a month do they have to set up their barber appointment?
It seems to me that the applications of Clawd and similar tools either automate trivial stuff or work on actions and circumstances that should not be there.
As an example, the other day I had a doctor's visit, and between filling out forms online, filling out other forms online, confirming three times that I would be there and that I had filled out the online forms, driving to the doctor's office, and waiting, I probably spent 2 hours of my time (the visit took place 2 months after I requested it, by the way).
The visit lasted 5-7 minutes: the doctor did not look at the forms I had filled out beforehand and barely listened to what I was telling him during the visit.
I worry that, since "AI" will do it, there will be more forms to fill out that nobody will read, more forms to fill out to confirm that AI, I, or a guardian filled out the forms, and longer wait times, because AI will bombard our neurons with some entertainment.
But what I want is a visit with a doctor who listens to me, is not in a rush, and solves my problem. If AI helps, great, but I don't want busy work done by AI; I don't want busy work at all, because it is not needed.
I would love an AI to curate my feed, transitioning from "enragement equals engagement" to pure enchantment, feeding me things it decides I would enjoy. And I think that's completely within the abilities of current models. It's just less profitable than driving me into an endless doom-scroll loop of despair.
And that's just off the top of my head. AI is neither good nor evil, but we've made some pretty poor choices deploying it.
While I find the aspiration noble, it seems to me that we don't even know ourselves what we want, or, alternatively, we re-discover every day how our revealed preferences differ from our stated ones. We don't even trick other people, we trick ourselves.
There was also some evolutionary biology/psychology theory developed by Robert Trivers years ago on self-deception and fitness.
We buy a book thinking we are going to like it, and then we don't even open it. Recommender systems give us more of what we interact with (with some quite extreme funnel effects at times, like when we curiously look at one pimple-popper video and for the next ten minutes the algo serves us pimple after pimple), but we find out, in our stated though not our revealed preferences, that we don't want more of what we interact with.
Nobody wants, in theory and as stated, to be constantly enraged by social media, but most of us, since numbers don't lie, are revealed to enjoy getting enraged.
I don't think AI will have a different effect in the near future, as the main problem is that we don't know, broadly speaking, what we want, apart from the obvious, e.g., I want to watch a football game and I am going to turn on the tv and watch it.
My knee-jerk reaction is that outsourcing thinking and writing to an LLM is a defeat of massive proportions, a loss of authenticity in an increasingly less authentic world.
On the other hand, before LLMs came along, didn't we ask a friend or colleague for their opinion on an email we were about to write to our boss about an important professional or personal matter?
I have been asked several times to give advice on the content and tone of emails or messages that some of my friends were about to send. On some occasions, I have written emails on their behalf.
Is it really any different to ask an LLM instead of me? Do I have a better understanding of the situation, the tone, the words, or the content to use?
Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently.
Secondly, I think when a friend is giving advice the responses are more likely to be advice, i.e. more often generalities like "you should emphasize this bit of your resume more strongly" or point fixes to grammar errors, partly because that's less effort and partly because "let me just rewrite this whole thing the way I would have written it" can come across as a bit rude if it wasn't explicitly asked for. Obviously you can prompt the LLM to only provide critique at that level, but it's also really easy to just let it do a lot more of the work.
But if you know you're prone to getting into conflicts in email, an LLM powered filter on outgoing email that flagged up "hey, you're probably going to regret sending that" mails before they went out the door seems like it might be a helpful tool.
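The outgoing-mail filter described above could be sketched roughly as follows. This is a toy illustration of the idea, not a real implementation: the `looks_regrettable` function and its keyword heuristic are hypothetical stand-ins for what would in practice be an LLM call classifying the draft's tone.

```python
# Toy sketch of a "you may regret sending this" filter for outgoing email.
# A real version would replace looks_regrettable with an LLM classification
# call; the keyword heuristic below is only a placeholder for the idea.

HEAT_MARKERS = ("ridiculous", "incompetent", "never again", "!!!", "disrespect")

def looks_regrettable(draft: str) -> bool:
    """Crude stand-in for an LLM classifier: flag obviously heated drafts."""
    lowered = draft.lower()
    return sum(marker in lowered for marker in HEAT_MARKERS) >= 2

def outgoing_filter(draft: str) -> str:
    """Gate a draft before it leaves the outbox."""
    if looks_regrettable(draft):
        return "HOLD: you're probably going to regret sending that."
    return "SEND"

print(outgoing_filter("This is ridiculous and frankly incompetent!!!"))
print(outgoing_filter("Thanks for the update, see you Tuesday."))
```

The point of the design is that the tool only flags and delays; the human still decides, which sidesteps most of the outsourcing-of-thinking concern discussed above.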
"Firstly, when you ask a friend or colleague you're asking a favour that you know will take them some time and effort. So you save it for the important stuff, and the rest of the time you keep putting in the effort yourself. With an LLM it's much easier to lean on the assistance more frequently."
- I find this a point in favor of LLMs, not a flaw. It is a philosophical stance, one in which what does not require effort or time is intrinsically not valuable (see using GLP peptides vs. sucking it up to lose weight). Sure, it requires effort and dedication to clean your house, but given the means (money), wouldn't you prefer to have someone else clean your place?
"Secondly, I think when a friend is giving advice the responses are more likely to be advice"
- You can ask an LLM for advice instead of having it do the writing and accepting the model's output without further reflection.
Here I find parallels with therapy, which in its modern version, does not provide answers, but questions, means of investigation, and tools to better deal with the problems of our lives.
But if you ask people who go to therapy, the vast majority of them would much prefer to receive direct guidance (“Do this/don't do that”).
In the cases in which I wrote a message or email on behalf of someone else, I was asked to do it: "Can you write it for me, please?" I even had to write recommendation letters for myself; I was asked to do that by my PhD supervisor.
I wasn't arguing that getting LLMs to do this is necessarily bad; I just think it really is different from having been able, in the past, to ask other humans for help, so that past experience isn't a reliable guide to whether we might run into unexpected effects of this new technology.
If you are concerned about possible harms in "outsourcing thinking and writing" (whether to an LLM or another human) then I think that the frequency and completeness with which you do that outsourcing matters a lot.
It can become an indispensable asset over time, or a tool that can be used at certain times to solve, for example, mundane problems that we have always found annoying and that we can now outsource, or a coaching companion that can help us understand something we did not understand before. Since humans are naturally lazy, most will default to the first option.
It's a bit like the evolution of driving. Today, only a small percentage of people are able to describe how an internal combustion engine works (<1%?), something that was essential in the early decades after the invention of the car. But I don't think that those who don't understand how an engine works feel that their driving experience is limited in any way.
Certainly, thinking and reasoning are universal tools, and it could be that in the near future we will find ourselves dumber than we were before, unable to do things that were once natural and intuitive.
But LLMs are here to stay, they will improve over time, and it may well be that in a few decades, the human experience will undergo a downgrade (or an upgrade?) and consist mainly of watching short videos, eating foods that are engineered to stimulate our dopamine receptors, and living a predominantly hedonistic life, devoid of meaning and responsibility. Or perhaps I am describing the average human experience of today.
Given a certain number of hours dedicated to passive entertainment, many more people prefer to watch a terrible TV show on Netflix than to read a masterpiece of literature.
It could be because the TV show is more "entertaining" (which is tautological), a desire for social conformity (it is easier to discuss the latest TV show with others than Anna Karenina), or an escape from the cognitive effort required by reading literature, which is almost always greater than that required by watching a movie, a TV show, or a TikTok.
It's not even about quality. I consider films like There Will Be Blood or TV shows like Deadwood to be comparable in quality to the greatest works of world literature. I've also gotten a lot of joy and entertainment out of reading crappy books.
My problem is with statements like "paper is an inferior entertainment platform". To me, this is assuming that these different media are fundamentally providing the same kind of experience, which I disagree with.
I see your point about the cognitive effort of reading, though. I guess it depends on how fluently one can read, which depends on how much exposure to books one got as a kid.
The problem is that you are talking about your experience, and not about the distribution of experience of people, which is why I wrote "at the population level".
For the more intellectually sophisticated person (does not mean "better" person, to be clear), "entertainment type" is not fungible (movies as art, advertisement as investigation into the psychology of the masses, etc.) but for the vast majority of people, it is just a way to spend time.
You are referring to critically acclaimed movies and tv shows, but for the majority of people, leisure time in front of the tv is not spent bouncing between Fellini, Von Trier, PTA, Kubrick, et similia, but binge-watching the latest terrible Netflix tv show.
It is the same with food: we like to think that what prevents the masses from enjoying fine dining is the cost of the experience, but in reality, to many (myself included, most of the time), French fries with mayonnaise, a burger, and some ice cream is just a better proposition.
I myself disagree with the statement that paper is inferior, entertainment-wise, to TV, games, and TikTok (they all overstimulate me; I feel dirty after being on TikTok for 20 minutes and clean as a whistle after reading for 3 hours, in addition to the subtle intellectual stimulation I get from reading), but in terms of the choices people make, books are certainly the losing party.
The comment I am responding to refers to their own experience, which, at the population level, does not appear to be widely shared: at the population level (i.e., people in general, not intellectuals, not academics, all of them), it is evident that people consider TV shows, games, and TikTok superior forms of entertainment (i.e., by revealed preference) to books.
How was it not clear? I would prefer to engage with more substantive comments.
What's not clear to me is how aggregate preferences about entertainment media should affect my choice of entertainment media. TFA is worded to suggest that because "nobody" reads fiction, it should be dismissed when considering what to read.
I'm perfectly willing to accept that most people prefer Netflix to Umberto Eco. However, I don't. And that is one reason I reject the analysis in the article.
Sure, I don't think anybody is forcing you or anybody else to watch Netflix or play GTA instead of reading a mystery novel.
I find those types of articles and the comments following them to be starting points for broader conversations. In this case, broader than "I like to read books, and I will continue to do so".