Even the phrasing of the original assumption, "we have more bank tellers now than we had before", feels flawed to me, since it implies that ATMs didn't hurt (or even boosted) the number of bank tellers.
If you look at the graph, the number of bank tellers went from roughly 500k in 1980 to 550k in 2010 (a 10% increase). However, the U.S. population grew from 220M to 305M over the same period (roughly a 40% increase). To me, that indicates that fewer and fewer people were becoming bank tellers after the invention of the ATM. Although, looking at the graph again, the correlation is quite poor anyway.
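As a back-of-the-envelope check on the per-capita trend (the figures are eyeballed from the graph, so treat them as rough):

```python
# Figures read off the graph / census, so rough estimates only.
tellers_1980, tellers_2010 = 500_000, 550_000
pop_1980, pop_2010 = 220_000_000, 305_000_000

per_capita_1980 = tellers_1980 / pop_1980   # tellers per person, 1980
per_capita_2010 = tellers_2010 / pop_2010   # tellers per person, 2010

decline = 1 - per_capita_2010 / per_capita_1980
print(f"per-capita decline: {decline:.0%}")  # roughly a 21% drop
```

So even though the headcount rose, the share of the population working as tellers fell by about a fifth.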
Yes, about the traffic light pattern: there are 5 traffic lights along the way, each with a ~90s wait, versus zero waits or a single wait and then riding the "green wave". So I'm not overstating the savings either.
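To put numbers on it (the ~90s wait per light is my own estimate from the route, not a measurement):

```python
# Worst case (hit every red) vs. riding the "green wave" (at most one wait).
lights = 5
wait_per_light_s = 90  # estimated average wait per red light, in seconds

worst_case = lights * wait_per_light_s          # 450 s if every light is red
green_wave = wait_per_light_s                   # one wait, then cruise through

savings_min = (worst_case - green_wave) / 60
print(f"up to {savings_min:.0f} min saved")     # about 6 minutes per trip
```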
Every scientist I ever met (and myself included) has a backlog of papers to read that never seems to shrink. It really is not trivial to stay up to date on research, even in niche fields, considering the huge volume of research that is being produced.
It is not uncommon for me to read a recently published review and find 2-3 interesting papers in the lot. Plus the daily Google Scholar alerts. It can definitely be beneficial to have an LLM summarize a paper. Of course, at this point, one should decide "is this worth reading more carefully?" and actually read at least some parts if needed.
Some jobs I interviewed for replied with an automated email saying that, if I wanted, I could ask for feedback. I always did, and none of them replied... This somehow feels even more insulting.
I tried it for 2 days and honestly don't see the usefulness either. Although, the big reason is that I paired it with Claude, which only offers per-token billing. Here are the few improvements over plain Claude usage:
- As you mentioned, the message bot thing was kind of cool.
- It can browse the internet and act (like posting on MoltBook, which I tried).
- It has a permanent "memory" (loads of .md files, so nothing fancy).
- It can be scheduled via cron jobs.
Overall, nothing really impressive. It is very gimmicky and it felt very unsafe the whole time (I had already read about the security issues, but sometimes you gotta live dangerously). The most annoying part was the huge token consumption (conversations start at 20k+ tokens because of all the .md files), and it cost me roughly $12 for a few hours of testing.
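To see why the bill adds up so fast: every turn re-sends the whole context. The prices below are assumptions (roughly Sonnet-class per-token pricing at the time I tested), not quoted rates, and this ignores output tokens and context growth entirely:

```python
# Rough input-side cost model. Price is an ASSUMED per-token rate,
# not an official figure; outputs and growing history are ignored.
input_price_per_mtok = 3.00     # USD per million input tokens (assumed)
context_tokens = 20_000         # the .md files re-sent on every turn

def input_cost(turns: int) -> float:
    """Input-side cost alone for `turns` conversation turns."""
    return turns * context_tokens * input_price_per_mtok / 1_000_000

print(f"${input_cost(100):.2f}")  # 100 turns -> about $6 before any output
```

With output tokens (billed several times higher per token) on top, a ~$12 bill for a few hours of back-and-forth is entirely plausible.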
But chess models aren't trained the same way LLMs are trained. If I am not mistaken, they are trained directly from chess moves using pure reinforcement learning, and it's definitely not trivial: AlphaZero, for instance, took 64 TPUs to train.
Modern LLMs often start with "imitation learning" pre-training on web-scale data and continue with RLVR for specific verifiable tasks like coding. You can pre-train a chess transformer on human or engine games in the same "imitation learning" mode, and then add RL against other engines or via self-play to iron out the deficiencies and improve performance.
This has been used for a few game engines in practice. It's probably not worth it for chess unless you explicitly want humanlike moves, but games with larger state spaces and things like incomplete information benefit from the early "imitation learning" regime getting the policy into a reasonable envelope fast.
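To make the two-stage recipe concrete, here's a toy sketch: single-pile Nim stands in for chess, and a tabular policy stands in for a transformer. Everything here (names, update rules, constants) is made up for illustration, not anyone's actual training setup:

```python
import random

random.seed(0)  # for reproducibility of this toy run

ACTIONS = (1, 2, 3)  # take 1-3 stones; taking the last stone wins

def expert_move(pile):
    """Optimal Nim play: leave the opponent a multiple of 4 when possible."""
    return pile % 4 if pile % 4 else random.choice(ACTIONS)

# Tabular "policy": a preference weight per (pile, action) pair.
weights = {(p, a): 1.0 for p in range(1, 21) for a in ACTIONS}

def sample_move(pile):
    legal = [a for a in ACTIONS if a <= pile]
    return random.choices(legal, [weights[(pile, a)] for a in legal])[0]

# Stage 1: imitation learning -- bump weights toward the expert's choices.
for _ in range(2000):
    pile = random.randint(1, 20)
    weights[(pile, expert_move(pile))] += 1.0

# Stage 2: self-play RL -- reinforce the winner's moves, dampen the loser's.
for _ in range(2000):
    pile, history, player = random.randint(1, 20), [], 0
    while pile > 0:
        a = sample_move(pile)
        history.append((player, pile, a))
        pile -= a
        player ^= 1
    winner = history[-1][0]  # whoever took the last stone won
    for who, p, a in history:
        weights[(p, a)] = max(weights[(p, a)] + (0.5 if who == winner else -0.1), 0.01)
```

The imitation stage gets the policy roughly right quickly; self-play then only has to fine-tune. With a real game and a neural policy the updates would be gradient-based, but the division of labor between the two stages is the same.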
I meant trivial in the sense that it's a solved problem; I'm sure it still costs a non-negligible amount of money to train. See for example the chess transformer built by DeepMind a couple of years ago, which I referred to in a sibling comment [1].
I admit, my knowledge of reinforcement learning is a bit outdated so it seemed to me that it was unattainable for a non-specialized model to train efficiently on something like chess, which has a huge state space.
I was also under the impression that query costs were mostly meaningless, but it seems that's only true for fresh sessions and short queries. I have to say, the result is less dramatic than I expected, but still significant for heavy users (such as myself).