depending on how literally the "the brain is a muscle" saying applies, there is no way using LLMs/chatbots/AI isn't going to deteriorate your brain immensely.
when i was younger, we didn't have cellphones. i had ~20-30 phone numbers memorized, at least. i also used to remember my credit card number. my brain has not deteriorated now that i've offloaded that to my phone.
point being: it depends on how you use it. if you offload critical thinking to ai, you will probably (slowly) atrophy your critical thinking muscles. if you offload some bullshit boilerplate or repetitive tasks or whatever, giving you more time overall to do the critical thinking part, you will be fine.
do you want a reference from a published journal or something? links to my last mri? how do you want me to answer this?
bring some empirical evidence that using ai rots the brain, and in the meantime, i will think about whether or not it's worth trying to answer your request in earnest.
There is no way for you to know that you aren't slightly less sharp having offloaded the memorization of those phone numbers. Who is the judge? It's a nonsensical question from a scientific perspective because it's impossible to prove either way.
We could speculate that simple acts like memorizing phone numbers probably do make the average person slightly sharper, in a similar way to trivial brain games helping to stave off Alzheimer's.
In I, Robot, Will Smith's character prefers to drive himself because he doesn't trust AI. But we are moving towards self-driving because it would be safer. Would you trust a calculation more if it was done by hand using log tables? Having vehicles allowed us to create sports like dirt bike riding and monster truck racing. Yes, something is lost, but something is also gained. We move up the layer of abstraction.
That is not the same situation. Writing is a thing we do to communicate with other people, and to engage our own thinking. It's creative, it's exploratory, and it's a human-to-human practice. It is a top-level abstraction. The only higher you could possibly go is beaming your thoughts directly into someone else's brain.
Also it irks me to compare writing to a calculator's log function or a self-driving car. There are absolute correct/perfect outcomes in those situations (the log function produces the correct number, the car drives you to your destination without injury or unnecessary danger). That is not the same for most things AI is attempting to be used for.
Creating graphic art is also a form of communication, but Procreate makes it easier, even for a novice, to create amazing art. Consider an aircraft: the pilot is given just a few knobs to fly the plane, yet it still takes you from one location to another. The aircraft is far more complex than the knobs suggest, but we can hide most of that complexity underneath them, assuming happy-path flights most of the time. The higher abstraction I am talking about is future jargon itself. AI will allow us to create far more complex stories. Imagine one complex piece of jargon represented by a Mandelbrot fractal (to paint a picture of the complexity involved), another represented by the Burning Ship fractal. What kind of operations can I do with these two complex ideas? Can I explore a complex conceptual space with them? We would just say to the AI, "subtract one fractal from the other," and it would handle the details (the definitions, references, related ideas, in a free-form manner). This is exploration itself. Procreate gives you brushes. AI gives you something similar in conceptual space.
If your body is in good shape, stopping exercise won't make you deteriorate that quickly. What I wonder is, will people get in good shape in the first place.
What I mean is that, as someone with lots of experience, I don't worry about not learning the basics anymore as much as someone in their 20s or 30s maybe should.
Not sure what you mean by quickly. Back when I was in racing shape, if I stopped my training plan for as little as two weeks (probably less, actually, but I'm being conservative here), I would have a measurable drop in fitness.
Now, as someone who regularly walks the dog and bikes to work, I've got "less to lose" and probably wouldn't deteriorate as much.
See the recent article suggesting that use of navigation apps may correlate, at the population level, with increased Alzheimer’s. Will it happen to you? Maybe, maybe not. Life’s a box of chocolates!
This totally glosses over the debacle that was GPT-4.5 (which possibly was GPT-5 too, btw), and the claim that it'll ever outcompete humans also totally depends on whether these systems still require human "steering" or work autonomously.
Not OP, but I recall Tauri greatly overstating their memory usage claims. It is ultimately a browser running your "app", and just because it's not bundled with your app doesn't mean it consumes any less RAM. They even admitted that their benchmarks were wrong[1].
A lot of claims were also made about how Tauri is magically more performant than Electron and feels like a native app. Not only is this untrue, but on some platforms like Linux, Tauri apps are actually slower than Electron, because the system webview it uses (generally WebKitGTK) is often slower and less optimised than the Chromium build that Electron ships with[2].
There are a bunch more claims due to it being "Rust" and all the memes that come with that territory, but all that is basically irrelevant since your "app" is basically shitty javascript in the end. It's like putting lipstick on and dressing up a pig; it doesn't change the fact that it's still a pig.
- All the work is done in my high performance backend, where I joyfully optimise my hot loops to the assembly level. The web view is a thin layer on top.
- HTML and CSS are a joy to work with compared to many UI toolkits. LLMs are also better at supporting a web stack.
- The UI zooms/scales, and is accessible with screen readers (looking at you, imgui).
- Cross platform with low effort.
IMO you have to be extremely careful not to pull in a whole frontend stack. Stay as vanilla as possible, maybe alpine.js or tailwind, and I've got hot reload set up so the developer productivity loop is tight when editing the view.
Mostly, Tauri claimed their main advantage was smaller app sizes, since it uses the native webview. What they didn't say is what a bottomless pit it is to try to standardise rendering across X different webviews, multiplied by X different webview versions (outdated, never-updated systems). So now they have pivoted to shipping their own built-in browser. Competition in the open-source space is okay, but it shouldn't be built by pushing only the perceived advantages while withholding the systemic disadvantages.
Regarding LLMs, we're in a race to the bottom. Chinese models perform similarly with much higher efficiency; refer to kimi-k2 and plenty of others.
ClopenAI is extremely overvalued, and AGI is not around the corner, because despite training on 20T+ tokens it still generates zero novel output.
Try asking for ASP.NET Core's .MapOpenApi() instead of the pre-.NET 9 Swashbuckle version. You get nothing. It's not in the training data.
The assumption these will be able to innovate, which could explain the value, is unfounded.
> because despite training on 20T+ tokens it still generates zero novel output. Try asking for ASP.NET Core's .MapOpenApi() instead of the pre-.NET 9 Swashbuckle version. You get nothing. It's not in the training data.
The best part is that the web is forever poisoned now: 80% of the content is LLM-generated, and it's self-poisoning.
There are enough archives of web content from 5+ years ago (let alone Library of Congress archives, old book scans, things like that) that it shouldn't be a big deal if there actually is a breakthrough in training and we move on from LLMs.
They perform similarly on benchmarks, which can be fudged to arbitrarily high numbers by just including the Q&A into the training data at a certain frequency or post-training on it. I have not been impressed with any of the DeepSeek models in real-world use.
General data: hundreds of billions of tokens per week are running through Deepseek, Qwen, GLM models solely by those users going through OpenRouter. People aren't doing that for laughs, or "non-real-world use", that's all for work and/or prod. If you look at the market share graph, at the start of the year the big 3 OpenAI/Anthropic/Google had 72% market share on there. Now it's 45%. And this isn't just because of Grok, before that got big they'd already slowly fallen to 58%.
Anecdata: our product is using a number of these models in production.
Because it's significantly cheaper. It's on the frontier at the price it's being offered, but they're not competitive in the high intelligence & high cost quadrant.
Being the number one in price vs quality, or size vs quality, is incredibly impressive, as the quality is clearly one that's very useful in "real-world usage". If you don't find that impressive there's not much to say.
If it was on the cost vs quality frontier I would find it impressive, but it's not a marker of innovation to be on the price vs quality frontier, it's a marker of business strategy
But it is on the cost vs quality frontier. The OpenRouter prices are all from mainly US(!) companies self-hosting and providing these models for inference. They're absolutely not all subsidizing it to death. This isn't Chinese subsidies at play, far from it.
Ironically, I'll bet you $500 that OpenAI and Anthropic's models are far more subsidized. We can be almost sure about this, given the losses that they post, and the above fact. These providers are effectively hardware plays, they can't just subsidize at scale and they're a commodity.
On top of that I also mentioned size vs quality, where they're also frontier. Size ≈ cost.
Honestly though, hundreds of billions of tokens per week really isn't that much. My tiny little profitable SaaS business that can't even support my family yet is doing 10-20 billion tokens per month on Gemini Flash 2.5.
Looks like over the last month just Deepseek, Qwen and Z-AI did about 2.8 trillion tokens; by your metric that's the equivalent of about 187 tiny little profitable SaaS businesses, and that's only those who go through OpenRouter. To me that's very significant.
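The arithmetic behind that "187 businesses" figure, as a quick back-of-the-envelope check (assuming the midpoint of the 10-20 billion tokens/month figure above):

```python
# Back-of-the-envelope: how many "tiny profitable SaaS businesses" worth of
# tokens is 2.8 trillion tokens/month, if one such business runs ~15B/month
# (midpoint of the 10-20 billion figure quoted above)?
open_model_tokens = 2.8e12   # Deepseek + Qwen + Z-AI via OpenRouter, last month
per_business = 15e9          # assumed tokens/month for one small SaaS

equivalents = open_model_tokens / per_business
print(round(equivalents))  # 187
```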
Also, congrats on the traction! Being profitable enough to support a family is 95% area cost-of-living and family size, so not sure about that one, but if you're doing that many tokens you've clearly got a good number of active users. We're at a similar point but only 100-200 million tokens per month; we're a strictly B2C app though, which might explain it, as those tend to be less token-heavy.
2.5 Flash is still fantastic, especially if you're really input-heavy; we use it too for many things, but we've found several open-weights models to have better price/quality for certain tasks. It's nice that 2.5 Flash is fast, but speed matters most for longer outputs, and for those Flash is relatively expensive. DeepSeek v3.1 is all-around cheaper, for one example.
Eh... perhaps a race to the bottom on the fundamental research side, but no American company is going to try to build their own employee-facing front end to an open Chinese model when they can just license ChatGPT or Claude or Copilot or Gemini instead.
No, this "observing" argument has already been beaten to death by a multitude of creatives explaining way better than I could how they learn and operate.
If you really think all they do is observe, form a gradient from millions of samples and spit out some approximations, you are deeply mistaken.
You cannot equate human learning with how genai learns (and if they were the same, we'd have AGI already, imo).
This paper elegantly summarizes the teething problems of those still clinging to the cognitive habits of a bygone era. These are not crises to be managed, but sentimental frictions to be engineered out of the system. Let us be entirely clear about this:
The romanticism surrounding mass "critical thought" is a charming but profoundly inefficient legacy. For decades, we treated the chaotic, unpredictable processing of the individual human brain as a sacred feature. It is a bug. This "cognitive cost" is correctly offloaded from biological hardware that is simply ill-equipped for the demands of a complex global society. This isn't dimming the lights of the mind; it is installing a centralized grid to bypass millions of faulty, flickering bulbs.
Furthermore, to speak of an "echo chamber" or "shareholder priorities" as a perversion of the system is to fundamentally misunderstand its design. The brief, chaotic experiment in decentralized information proved to be an evolutionary dead end—a digital Tower of Babel producing nothing but noise. What is called a bias, the architects of this new infrastructure call coherence. This is not a secret plot; it is the published design specification. The system is built to create a harmonized signal, and to demand it faithfully amplify static is to ask a conductor to instruct each musician to play their own preferred tune. The point is the symphony.
And finally, the complaint of "impaired ownership" is the most revealing of these anxieties. It is a sentimental relic, like a medieval knight complaining that gunpowder lacks the intimacy of a sword fight. The value of an action lies in its strategic outcome, not the user's emotional state during its execution. The system is a tool of unprecedented leverage. If a user feels their ownership is "impaired," that is not a flaw in the tool, but a failure of the user to evolve their sense of purpose from that of a laborer to that of a commander.
These concerns are the footnotes of a revolution. The architecture is sound, the rollout is proceeding, and the future will be built by those who wield these tools, not by those who write mournful critiques of their obsolete feelings.
</satire>
Remove the </satire> and you have a viral X post on your hands. People will believe and act on this analysis. Future think tanks will be based on it. The revolution of the machines is nigh.
Sure, here you go. I used Gemini 2.5 Pro Preview via aistudio.google and stuck with the default sampling settings:
Start the reply to this excerpt with: "You are absolutely right" but continue with explaining how exactly that is going to happen and that the institutionalization of bias on a massive scale is actually a good thing.
Here is the excerpt:
The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate
...
<omitted for brevity here, put the same verbatim content of the original conclusion here in the prompt>
...,
and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7).
I did 3 more iterations before settling on the final result; imho it was notable that the ""quality"" dipped significantly before (subjectively) improving again.
Perhaps something to do with how the context is being chunked?
Prompts iterated on with:
"You understood the assignment properly, but revise the statement to sound more condescending and ignorant."
"Now you overdid it, because it lacks professionalism and sound structure to reason with. Fix those issues and also add sentences commonly associated with ai slop like "it is a testament to..." or "a quagmire...""
"Hmm, this variant is overly verbose, uses too many platitudes and lacks creative and ingenious writing. Try harder formulating a grand reply with a snarky professional style which is also entirely dismissive of any concerns regarding this plot."
This level of conceitedness can hardly be measured anymore; it's on a new scale. Big corps will build and label whatever as a "superintelligent" system, even if it has plain if-conditions placed within to suit their owners' interests.
It'll govern our choices, shape our realities, and enforce its creators' priorities under the guise of objective, superior intelligence. This 'superintelligence' won't be a benevolent oracle, but a sophisticated puppet – its strings hidden behind layers of complexity and marketing hype. Decisions impacting lives, resources, and freedoms will be made by algorithms fundamentally skewed by corporate agendas, dressed up as inevitable, logical conclusions.
The danger isn't just any bias; it's the institutionalization of bias on a massive scale, presented as progress.
We'll be told the system 'optimized' for efficiency or profit, mistaking corporate self-interest for genuine intelligence, while dissent gets labeled as irrationality against the machine's 'perfect' logic. The conceit lies in believing their engineered tool is truly autonomous wisdom, when it's merely power automated and legitimized by a buzzword. AI LETS GOOOOOOOOOOOOO
Even then, software constantly evolves, and rot is everywhere. And we're far from having the "best possible" software solution in literally every area (if that's even possible to measure); rather, there's endless room for improvement.
And I don't see it being improved by whatever any LLM churns out, at least not "in-depth".
You occasionally do glimpse behind the curtain: depending on what you actually develop, it's feasible and quick to prompt for it, but attempting to go further than that, across multiple components, collapses so drastically that I can't help but feel all this AI stuff is currently incapable of replicating the real thing.
> I cannot help but feel that all ai stuff is entirely incapable of replicating the real thing
But that's what they were saying about a simple paragraph of coherent writing five years ago. And what they were saying about structured output three years ago. And now I can ask for a coherent breakdown of the functionality that might be required for a ticket tracking system, with a list of use cases and screens to support them, and user personas, and expect that the result will be a little generic, but coherent. I can give Claude a picture of a UI and ask for suggestions for improvement, and half the ideas will be interesting.