Eswatini (fka Swaziland) is the only African country that officially recognizes Taiwan. But it's also a tiny little place, bordered almost entirely by South Africa with a short border with Mozambique.
Eswatini is also an absolute monarchy which bans the formation of political parties, does not allow women to own property, and has the highest rate of HIV/AIDS infection of any country in the world. So, probably not a great place to hold a conference.
Technically not, but the devil is in the details. Having to reinstall the app every 7 days and being limited to one app doesn't even pass the bare minimum.
Jolla has a prelaunch campaign with decent phones for 200€. I might just as well grab one. I'm sick of having a phone that costs more than my laptop but that I can barely use.
Wait, I can download and run iOS on my own hardware? Not that I have tried, but I always thought Apple's whole schtick was that you were only allowed to run their software on the latest X revisions of their hardware?
Composer-2 is based on Kimi K2.5, but with extensive RL. Cursor estimated 3x more compute on their RL than the original K2.5 training run (some details in https://cursor.com/blog/composer-2-technical-report).
I used to hate on Composer 2 but I'm coming around to it. Opus for the big stuff and multi-file operations, Composer for all the small day-to-day IDE tasks works pretty well for me.
I'm going to be brutally honest: I have not found Kimi to be useful at all. It simply can't compete with what the closed models, Codex and Claude, offer. I don't want to risk using a model outside that ecosystem and introducing variables, as most of my workflow is baked into two or three large-company models.
That's interesting. Kimi K2.5 used through KimiCode was comparable to Sonnet in my tests, and it's an excellent alternative to Anthropic models.
That being said, I noticed that Kimi served through OpenRouter providers was trash. Whatever they do on the backend to optimize for throughput really compromises the model's intelligence. You have to work with Kimi directly if you want the best results, and that's probably also why they released a test suite to verify the intelligence of their new models.
Anthropic shouldn't have released it. The gains are marginal at best. This release feels more like Opus 4.6 with better agentic capabilities.
Mythos is what I expected Opus 4.7 to be. Are users going to be charged more with this release, for such marginal gains?
It could set a bad precedent.
Looks like a lightweight article.
But memory usage went from 316 MB -> 502 MB when I hit refresh.
Not sure why. Anyone have any ideas? Why does it need half a gig of RAM in the first place?