- In your project's CLAUDE.md file, put "Read `docs/agents/handoff/*.md` for context."
Usage:
- Whenever you've finished a feature, completed a coherent "thing", or otherwise want to document what's in your current session, type /handoff. It generates a file named e.g. docs/agents/handoff/2026-03-30-001-whatever-you-did.md, then asks whether you like the name. You can answer "yes", or "yes, and make sure you go into detail about X", or anything else you want the handoff to specifically cover.
- Optionally, type "/rename 2026-03-30-001-whatever-you-did" into claude, followed by "/exit", then "claude" to open a fresh session. (You can resume the renamed session with "claude 2026-03-30-001-whatever-you-did". That said, I've never actually needed to resume a previous session, so you could skip this step entirely: just /exit, then type claude.)
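For anyone who wants to replicate this: Claude Code picks up project-level slash commands from markdown files in `.claude/commands/`, where the file body becomes the prompt. Here's a minimal sketch of what a /handoff command file might look like; the wording is my own guess at the prompt, not the commenter's actual command:

```shell
# Create a hypothetical .claude/commands/handoff.md so that typing /handoff
# in a Claude Code session runs the prompt below. Prompt text is illustrative.
mkdir -p .claude/commands
cat > .claude/commands/handoff.md <<'EOF'
Summarize this session as a handoff document for a future session.
Cover: what was built or changed, key decisions and why, open questions,
and suggested next steps. Propose a filename of the form
docs/agents/handoff/YYYY-MM-DD-NNN-short-slug.md and ask me to confirm
(or amend) the name before writing the file.
EOF
```

After that, /handoff shows up alongside the built-in slash commands in any session opened in the project.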
Here's an example so you can see why I like the system. I was working on a little blockchain visualizer. At the end of the session I typed /handoff, and this was the result:
The filename convention stuff was just personal preference. You can tell it to store the docs however you want to. I just like date-prefixed names because it gives a nice history of what I've done. https://github.com/user-attachments/assets/5a79b929-49ee-461...
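If you want the same date-plus-sequence convention outside of Claude, the next filename is easy to compute by hand. A small sketch, assuming the docs/agents/handoff/ layout from above (the "whatever-you-did" slug is a placeholder):

```shell
# Hypothetical helper: print the next date-prefixed handoff filename,
# e.g. docs/agents/handoff/2026-03-30-001-whatever-you-did.md
dir="docs/agents/handoff"
mkdir -p "$dir"
today=$(date +%F)                          # YYYY-MM-DD
# Count today's existing handoffs to get the next sequence number.
n=$(ls "$dir"/"$today"-*.md 2>/dev/null | wc -l)
seq=$(printf '%03d' $((n + 1)))
slug="whatever-you-did"                    # placeholder slug
echo "$dir/$today-$seq-$slug.md"
```

Because the prefix sorts lexicographically, a plain `ls` of the directory doubles as a chronological history.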
Try to do a /handoff before your conversation gets compacted, not after. The whole point is a permanent record of the key decisions from your session. Claude's compaction theoretically preserves these details, so /handoff will still work after a compaction, but it may be less detailed than it otherwise would have been.
Hardware will continue to improve, and eventually you'll have the choice of reaching a flow state with 2026 models, or using frontier models at our current level of performance.
In a sense, that is almost exactly the vision of the future shown in Accelerando. Users can and do send tons of specialized agents out into the world. I'm still not sure I buy the article's premise, but then my company is too cheap to let me play with Claude.
I have GitHub Copilot Pro. I don't believe I signed up for it. I neither use it nor want it.
1. A lot of settings are 'Enabled' with no option to opt out. What can I do?
2. How do I opt out of data collection? I see the message informing me to opt out, but 'Allow GitHub to use my data for AI model training' is already disabled for my account.
Hey David - if you want to send me (martinwoodward at github.com) details of your GitHub account I can take a look. At a guess I suspect you are one of the many folks who qualified for GitHub Copilot Pro for free as a maintainer of a popular open source project.
Sounds like you are already opted out because you'd previously opted out of the setting allowing GitHub to collect this data for product improvements. But I can check that.
Note, it's only _usage_ data from using Copilot that is trained on, so if you are not using Copilot there is no usage data. We do not train on private data at rest in your repos, etc.
> Nothing (reasonable) can protect against direct lightning strikes
Belkin make a number of surge protectors which offer a connected equipment warranty in the UK. Admittedly: financial protection, not data protection, but I felt it was worthwhile for the peace of mind.
Have they ever paid out on one of those? Or is it like CAs that offer liability protection for their certificates, carefully structured so that they never actually have to pay out?
In the past week (besides the constant slop), models have misattributed the copyright of new files to me and stripped my copyright from existing files. It's sapping time, energy, and motivation.