i haven't used cursor or codex or any system that says "agentic coding experience"
i speak in thoughts in my head and it is better to just translate those thoughts to code directly.
putting them into a language an LLM can make sense of, and then making sense of its output, is oof... too much overhead. and yeah, the micromanagement, correcting mistakes, the miscommunications, it's shit
i just code like the old days and if i need any assistance, i use chatgpt
I built this as a weekend experiment to see how far you can push a basic LZ-style compressor using LLM-guided code mutations. No fancy ML models here—just a simple loop: mutate, evaluate, keep what works.
The LLM (GPT-4.1) suggests small code changes to improve compression ratio. Mutations are applied and tested on a real input file (big.txt). If the round-trip decompress fails, it's discarded. Everything is logged in a local SQLite DB.
Selection is dumb but effective: top 3 elites + 2 random survivors per generation. Each spawns 4 children. Repeat for N generations or until stagnation.
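The loop above can be sketched in a few lines. This is a hypothetical toy, not the actual code: `mutate`, `fitness`, and `round_trip_ok` are stand-ins for the LLM edit step, the measured compression ratio, and the compress/decompress verification, but the selection scheme (top 3 elites + 2 random survivors, 4 children each, failed round-trips discarded) matches the description:

```python
import random

random.seed(0)

def mutate(code):
    # stand-in for an LLM-suggested code change
    return code + "*"

def fitness(code):
    # stand-in for the measured compression ratio on big.txt
    return len(code)

def round_trip_ok(code):
    # stand-in for "compress then decompress must reproduce the input"
    return True

def next_generation(population, n_elites=3, n_random=2, n_children=4):
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:n_elites]
    # elites plus a couple of random lucky survivors
    survivors = elites + random.sample(ranked[n_elites:], n_random)
    # each survivor spawns n_children mutants
    children = [mutate(p) for p in survivors for _ in range(n_children)]
    # mutants that fail the round-trip check are discarded
    return survivors + [c for c in children if round_trip_ok(c)]

pop = ["v%d" % i for i in range(10)]
for gen in range(5):
    pop = next_generation(pop)
```

In a real run you'd also log each candidate and its score to the SQLite DB inside `next_generation`, and stop early when the best fitness stagnates for a few generations.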
At around 30 generations, I hit a compression ratio of 1.85×. Decent, considering the starting baseline.
It's not a framework, it's not Pareto, and there's no multi-objective fluff. Just a tiny search loop hacking away at code. Curious if others have tried something similar with code-evolving setups.
idk if this is only me or if it happened to others as well: apart from the glaze, the model also became a lot more confident. it didn't use the web search tool when asked about something outside its training data, it straight up hallucinated multiple times.
i'd been talking to chatgpt about rl, and grpo especially, across about 10-12 chats. opened a new chat, and suddenly it starts hallucinating (it said grpo is "generalized relativistic policy optimization", when i'd been talking to it about group relative policy optimization).
reran the same prompt with web search enabled, and it then said "goods receipt purchase order".
absolute close-the-laptop-and-throw-it-out-the-window moment.
Retrieval-Augmented Generation. Fancy way of saying: retrieve the chunks from your document corpus most similar to your input, using a similarity measure (usually cosine) between the embedding vectors of the chunks and of the input, then inject those relevant chunks into your prompt to the LLM. Useful for document intelligence.
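A minimal sketch of that retrieval step. The `embed` function here is a toy bag-of-words counter purely so the example is self-contained; a real pipeline would call an actual embedding model, and all the names and sample chunks below are made up:

```python
import math
import re
from collections import Counter

def embed(text):
    # toy "embedding": word counts; a real setup calls an embedding model
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # rank chunks by similarity to the query, keep the top k
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "invoices must be approved within 30 days",
    "the cafeteria menu changes weekly",
    "purchase orders require a goods receipt before payment",
]
query = "when must invoices be approved?"
top = retrieve(query, chunks)
# inject the retrieved chunks into the prompt
prompt = "Context:\n" + "\n".join(top) + "\n\nQuestion: " + query
```

The structure is the whole trick: chunk, embed, rank by cosine, paste the winners into the prompt. Everything else (chunking strategy, embedding model, reranking) is tuning.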