> how OpenAI is not-so-subtly adopting a social-network-esque model, in how it's fine-tuned its chat system to always suggest another question that the user might want to ask.
There’s that, but it could also be adaptation to the fact users… just don’t know what to do with it.
Just like the prompt suggestions they added for new conversations a little time after releasing the first app. Those seem to be mostly gone now, at least on mobile.
Oh, so many things. I guess that’s both the blessing and the curse of agentic AI today.
The most fun is a simple Claude Code in a loop, Boucle, which builds and iterates on its own framework[0][1].
The first thing it built was a persistent memory. Now it has finally built itself a "self-observation engine" after countless nudging attempts. Exploring, probing, and trying to push back the limits of these models is pure chaos, immensely frustrating, but also fun.
Aside from that, some sort of agent harness I guess we call them? Putting together a "system" / "process" with automated reviews to both steer agents, ground them (drift is a huge pain), and somehow ensure consistency while giving them enough leeway to exploit their full capabilities. Nothing ready to share yet, but I feel that without it I’ll just keep teetering on the edge of burnout.
I’ve been running a Claude Code "thing" in a loop for a few days, and that has been extremely frustrating.
But after tons of nudging it has started developing a sort of "improvement engine", as it calls it, for itself to help address that.
It goes through its own logs and sessions, documents and tracks patterns, signals, and associated strategies, then regularly evaluates their impact independently of the agent itself, and feeds all of that back to it on each loop.
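The loop is easier to picture in code. This is only a minimal sketch of the shape described, not the actual thing: `run_agent` is a hypothetical stand-in for whatever invokes Claude Code, and the log/pattern format is invented for illustration.

```python
import json
from pathlib import Path

MEMORY = Path("improvement_engine.json")  # hypothetical persistent store

def load_memory() -> dict:
    # Patterns and strategies gathered from earlier sessions.
    if MEMORY.exists():
        return json.loads(MEMORY.read_text())
    return {"patterns": []}

def extract_signals(log: str) -> list[str]:
    # Stand-in for the real log analysis: flag recurring failure markers.
    return [line for line in log.splitlines() if "ERROR" in line or "retry" in line]

def loop_once(run_agent, task: str) -> str:
    memory = load_memory()
    # Feed accumulated patterns back into the prompt on every iteration.
    prompt = task + "\n\nKnown pitfalls:\n" + "\n".join(memory["patterns"])
    log = run_agent(prompt)
    # Record any new signals for the next pass.
    memory["patterns"].extend(
        s for s in extract_signals(log) if s not in memory["patterns"]
    )
    MEMORY.write_text(json.dumps(memory, indent=2))
    return log
```

The key property is that the evaluation step lives outside the agent: the harness, not the model, decides which signals survive into the next iteration.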
I got it to build a stereoscopic Metal raytracing renderer of a tesseract for the Vision Pro in less than half a day.
It surprisingly went at it progressively, starting with a basic CPU renderer and working all the way up to a special-purpose Metal shader. Now it’s cutting its teeth on adding passthrough support. YMMV.
It’s funny. The whole "review intent" / "learning from past mistakes" thing is exactly what my current setup does too. For free. Using .md files said agents generate as they go.
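The .md-file version of this can be about as simple as it sounds. A hypothetical sketch (file name and format made up; the agents write the notes, the harness just prepends them to each new prompt):

```python
from pathlib import Path

NOTES = Path("LESSONS.md")  # hypothetical notes file the agents maintain

def record_lesson(note: str) -> None:
    # An agent appends a bullet after each task; plain markdown, no schema.
    with NOTES.open("a") as f:
        f.write(f"- {note}\n")

def build_prompt(task: str) -> str:
    # Prepend accumulated lessons so the next run "remembers" past mistakes.
    lessons = NOTES.read_text() if NOTES.exists() else ""
    return f"Lessons from previous runs:\n{lessons}\nTask: {task}"
```

No evaluation pass here, which is the trade-off versus the looped setup above: the notes only help if the agents write good ones.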