Hacker News | new | past | comments | ask | show | jobs | submit | jandoze's comments

Fair pushback on the framing. "Dogfooding," to me, means: does Anthropic rely on its own product under real-world conditions enough to feel these pain points and prioritize fixing them? It's less about support and more about product-reliability signals.

On the credentials — you're right, it reads like I'm fishing for VIP treatment. That wasn't the intent; the point was about how enterprise AI adoption actually works (practitioners test, validate, then advocate up the chain). I probably led with it too hard.

And on the $200 plan: I'm running multi-agent workflows via Claude Code that come close to saturating even the Max-tier limits — the $25 option isn't a realistic fit for that workload, and I was hitting my limits frequently. Partly this is to build a couple of personal projects, but it's also a genuine test of how the product would hold up under enterprise use rather than just side projects. No doubt there's room for me to optimize as I learn more; hopefully I won't need to spend $200/month once I'm more skilled with my prompts, use of projects, etc. That itself is an opportunity: an agentic framework that helps users with adoption (which might also help manage Anthropic's scaling challenges).

