I have been very curious to try Ohm. I'm currently using a hand-rolled parser combinator library, but Ohm looks slick. The online editor is nice too https://ohmjs.org/editor/
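For anyone curious what it looks like in practice, here's a rough sketch from memory of the ohm-js docs (the toy grammar and the 'eval' operation name are my own, so treat it as illustrative rather than exact):

    import * as ohm from 'ohm-js';

    // Toy grammar: sums of integers, e.g. "1 + 2 + 3".
    const grammar = ohm.grammar(`
      Sum {
        Exp    = Exp "+" number  -- plus
               | number
        number = digit+
      }
    `);

    // An operation that walks the parse tree and evaluates it.
    const semantics = grammar.createSemantics().addOperation('eval', {
      Exp_plus(left, _plus, right) {
        return left.eval() + right.eval();
      },
      number(_digits) {
        return parseInt(this.sourceString, 10);
      },
    });

    const match = grammar.match('1 + 2 + 3');
    if (match.succeeded()) {
      console.log(semantics(match).eval());  // -> 6
    }

The appeal over hand-rolled combinators, as I understand it, is that the grammar stays declarative while the actions live in a separate semantics object, so you can attach multiple operations to the same grammar.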
> The human is left doing whatever the machine can't, often a narrower slice of the original role
I haven't seen anyone talk about AI and its impact on flow yet. It's pretty easy for me to achieve a flow state while coding without AI, but with AI, I'm not so sure. I spend my time managing multiple Claude instances as they work on different tasks, and there's no time to go really deep into anything.
Flow was such a productivity boost for me. Even though Claude definitely helps me finish tasks quicker, I've started wondering how much quicker it actually is compared to just getting into flow.
I’ve tried having one “big” task that I’m focusing on with active back and forth, while letting other Claude instances handle easier back-burner type tasks that they can effectively one-shot. But I’ve noticed that often turns into me spending more time/focus than I’d want on tasks that aren’t actually that impactful. I still think I get more done than I would otherwise, but I haven’t found the best management strategy yet.
I’ve seen people share the same experience here on HN. I’m also in the same boat: I find LLMs uncomfortably useful but quite tiring to work with. To maintain flow I spend more time crafting a complete and clear prompt, akin to programming in natural language, and avoid the back and forth when possible.
I feel like you can get into a different sort of flow - a low-key flow where you're managing a bunch of different streams as interrupts come in. Different kind of focus, much more big-picture, kinda like playing an RTS.
Hardware will continue to improve, and eventually you'll have the choice of reaching a flow state with 2026 models, or using frontier models at our current level of performance.
In a sense, that is almost exactly the vision of the future shown in Accelerando. The user can and does send tons of specialized agents out into the world. I am still not certain I buy the premise of the article, but then my company is too cheap to let me play with Claude.
At this point, if I see "Made with {whatever_service_you_outsourced_thinking_to}" on a PR description and you didn't even put in the effort to remove it, I'm going in with the assumption that you didn't bother to do or check a lot of other things.
This is very cool. Based on the title, I thought this would end with an example of using this font for code. I'd still be interested in seeing an example of that.
Good idea, but as several comments here suggest, the time when this sort of thing could be taken as satire is gone. I promise you there are multiple people here thinking that this is a good idea. I predict that within a year we will see a service that does exactly this.
Criticisms aside (sigh), according to Wikipedia, the term was introduced mostly by Googlers, with the original paper [0] submitted in 2018. To quote:
"""In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information."""
To me, "model card" makes sense for something like this: https://x.com/OpenAI/status/2029620619743219811. For a "sheet"/"brief"/"primer" it is indeed a bit annoying. I like to see the compiled results front and center before digging into a dossier.
I feel like they are often unwanted discards... but we have one and we try to put interesting books in there. Recently gave away my entire Harry Potter series.