I’ve never had any problems with MiniMax. I wouldn’t call the speed fast exactly, but it’s faster than GLM and seems similar to Opus.
It’s been fast enough that I’ve been using it as my main model (M2.7 and before that, M2.5). Opus still does better at tasks, but MiniMax is so much cheaper. I’ve used their cheaper plan and I’ve never been rate limited.
Anthropic have done a lot of things that would give me pause about trusting them in a professional context. They are anything but transparent, for example about the quota limits. Their vibe-coded Claude Code CLI releases are a buggy mess, too. There's also the model-quality inconsistency: in the week or two before a new model release, the previous model turns to garbage.
A/B testing is fine in itself (you need to learn about improvements somehow), but this looks like A/B testing of cost-saving optimisations rather than of a better user experience. Less transparency is rarely good.
This isn’t what I want from a professional tool. For business, we need consistency and reliability.
I’m a huge user of AI coding tools but I feel like there has been some kind of a zeitgeist shift in what is acceptable to release across the industry. Obviously it’s a time of incredibly rapid change and competition, but man there is some absolute garbage coming out of companies that I’d expect could do better without much effort. I find myself asking, like, did anyone even do 5 minutes of QA on this thing?? How has this major bug been around for so long?
“It’s kind of broken, maybe they will fix it at some point,” has become a common theme across products from all different players, from both a software defect and service reliability point of view.
I mean, they don't even really need agentic AI or whatever; they could literally just employ devs and it wouldn't make a difference
like, they'll drop $100 billion on compute, but when it comes to the devs who make their products, all of a sudden they must desperately cut costs and hire as little as possible
to me it makes no sense from a business perspective. Same with Google: YouTube is utterly broken, slow, and laggy, but since you're basically forced to use it, I guess it doesn't matter. Still, if you're sitting on huge stockpiles of money, why not deploy some of it to improve things? The cost wouldn't matter anyway; it's pure upside
I don’t think they’re even saving much by vibe coding it, given how many tokens they claim they’re using. I know the token cost to them is much, much lower than the token cost to us, but it still has a real cost in terms of GPUs running.
Plus it’s not something we can replicate since we don’t have access to infinite tokens, so it’s not even a good dogfooding case study.
Almost every 3D game in the past 20 years uses procedural foliage generation (e.g. SpeedTree and similar tools). Many use procedural terrain painting. Many use tools like Houdini.
So procedural generation is extremely prevalent in most AAA games and has been for a long time.
It’s BRUTAL but offers solutions.