How about marriage-based green cards when residing outside the US? Any indication of how long this takes and how complicated the process is? My wife and daughter are both US citizens and we live in Germany. I am a German citizen and have no type of visa in the US.
OP’s Qwen3.6 27B Q6 seems to weigh in north of 20GB on Hugging Face, and should run on an Apple Silicon Mac with 32GB of RAM. Smaller models work unreasonably well even on my M1 MacBook with 64GB.
I am getting 10tok/sec on a 27B of Qwen3.5 (thinking, Q4, 18GB) on an M4/32GB Mac Mini. It’s slow.
For a 9B (much smaller, non-thinking) I am getting 30tok/sec, which is fast enough for regular use if you need something from the training data (like how to use grep, or Hemingway's favorite cocktail).
I’m using LM Studio, which is very easy and free (as in beer).
Not who you asked, but I've got a Framework desktop (Strix Halo) with 128GB RAM. On Linux, up to about 112GB of it can be allocated to the GPU. I can run Qwen3.5-122B (4-bit quant) quite easily on this box. I find qwen3-coder-next (80B param, MoE) runs quite well at about 36tok/sec. Qwen3.5-27b is a bit slower at about 24tok/sec, but that's a dense model.
Yeah, taking the spice list as the starting point works much better, imo. I also prepopulate the CLAUDE.md file with some information like the pinout/pinmux of the MCU; otherwise Claude might run in circles targeting the wrong pin (to be fair, that also happens to me, lol).
Spicelib really just makes calls to the selected spice engine (in my case ngspice). In this setup spicelib's main job is to parse the raw spice data and provide a unified interface regardless of which spice engine is selected. But to answer the question: the path to the spice model currently must be set explicitly.
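To sketch the pattern being described (this is not spicelib's actual API — the `SpiceRunner` class, its methods, and the engine flags beyond ngspice's real `-b`/`-r` batch-mode options are all illustrative): one wrapper builds the right command line for whichever engine is selected, so callers never deal with engine-specific flags directly, and the model path is passed in explicitly.

```python
# Illustrative sketch, not spicelib's real interface.
import subprocess

class SpiceRunner:
    # argv builders per engine; ngspice's "-b -r <raw>" runs batch mode
    # and writes a raw file, which is what gets parsed afterwards
    ENGINES = {
        "ngspice": lambda netlist, raw: ["ngspice", "-b", "-r", raw, netlist],
    }

    def __init__(self, engine="ngspice", model_path=None):
        if engine not in self.ENGINES:
            raise ValueError(f"unsupported engine: {engine}")
        self.engine = engine
        # as noted above: the device-model path has to be given explicitly
        self.model_path = model_path

    def command(self, netlist, raw_out):
        """Engine-specific argv, built behind a uniform interface."""
        return self.ENGINES[self.engine](netlist, raw_out)

    def run(self, netlist, raw_out):
        subprocess.run(self.command(netlist, raw_out), check=True)
```

The point is just that swapping engines changes only the `ENGINES` entry, not the calling code.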
Oh, I remember seeing Jumperless a while ago, but completely forgot about it. Combining this with something like Jumperless does sound interesting. What does your setup look like? Does Claude tell you: "try a 1k resistor in parallel here"?
I haven't tried it with Codex yet. But my approach is currently a little bit different. I draw the circuit myself, which I am usually faster at than describing the circuit in plain English. Then I give Claude the spice netlist as my prompt. The biggest help for me is that I (and Claude) can very quickly verify that my spice model and my hardware are doing the same thing. And for embedded programming, Claude automatically gets feedback from the scope and can correct itself. I do want to try out other models. But it is true, Claude does like to congratulate itself ;)
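For anyone unfamiliar with the workflow: this is roughly the kind of netlist one might hand over as a prompt (a made-up RC low-pass with arbitrary values, just to show the format — not from the actual project):

```
* RC low-pass filter (hypothetical example)
V1 in 0 SIN(0 1 1k)
R1 in out 1k
C1 out 0 100n
.tran 10u 5m
.end
```

A netlist like this is compact and unambiguous, which is exactly why it works better as a prompt than describing the circuit in prose.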
Claude can absolutely correct itself, change the source code on the MCU, and adapt. However, it also makes mistakes, such as claiming it matched the simulation when it obviously didn't. Or it might make dubious decisions, e.g. bit-banging a pin instead of using the dedicated UART subsystem. So I don't let it build completely by itself.
I have a feature request: I built an MCP server, but now it has over 60 tools. In most sessions I really don't need most of them. I suppose I could split this into several servers. But it would maybe be nice to give the user more power here, like letting me choose the tools that should be loaded, or letting me build servers that group tools together, where groups can be loaded on demand. Not sure if that makes sense …
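A sketch of what that grouping could look like, independent of any MCP SDK (the `ToolRegistry` class, group names, and tools below are all hypothetical — this just shows the opt-in mechanism being requested):

```python
# Hypothetical sketch: tools registered under named groups, and only
# the groups a session opts into get exposed to the client.
from collections import defaultdict

class ToolRegistry:
    def __init__(self):
        self.groups = defaultdict(dict)

    def register(self, group, name, fn):
        self.groups[group][name] = fn

    def exposed(self, enabled_groups):
        """Flat name -> fn map containing only the opted-in groups."""
        tools = {}
        for g in enabled_groups:
            tools.update(self.groups.get(g, {}))
        return tools

reg = ToolRegistry()
reg.register("files", "read_file", lambda p: open(p).read())
reg.register("git", "git_status", lambda: "clean")

# a session that only needs git tooling loads just that group,
# instead of all 60+ tools
session_tools = reg.exposed(["git"])
```

The same effect can be had today by splitting into several servers, but group-level opt-in inside one server would avoid the duplication.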
Can agents not check out different branches and then work on them? That's what people do, too. I have a hard time understanding what problem is even being solved here.
To be entirely fair, while git is getting better, the tooling UI/UX is still designed with the expectation that someone has read the Git book and understood exactly how it works.
Which should be a basic skill for anyone dealing with code, but Git hasn't been just a programmer's tool for a long time now, so a better UI is welcome.
Claude can use worktrees. So if you have a system with, say, 10 agents, each one can use a worktree per session. No need to clone the repo 10 times or work on branches. Worktrees.
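For anyone who hasn't used them, the mechanics are a few git commands (the repo path and the agent/branch names here are made up for illustration):

```shell
# Minimal sketch: a throwaway repo, then one worktree per agent session.
repo="$(mktemp -d)/demo"
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# each agent gets its own directory and branch, all sharing one object store
git worktree add -q ../agent-1 -b agent-1-session
git worktree add -q ../agent-2 -b agent-2-session
git worktree list

# tear down a finished session
git worktree remove ../agent-1
git branch -q -d agent-1-session
```

Each worktree is a full checkout, but objects and refs live once in the main repo, which is exactly why it beats cloning per agent.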