For completeness, there is also Peirce's arrow, a.k.a. the NOR operation, which is functionally complete. As a fun application, iirc the VMProtect copy-protection system has an internal VM based on NOR.
That's Boolean functional completeness, which is kind of a trivial result (NAND, NOR). It mirrors this one insofar as the EDL operator is also a combination of a computation and a negation in the widest sense.
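To make the functional-completeness claim concrete, here's a minimal sketch (in Python, with hypothetical helper names) showing how NOT, OR, and AND can all be built from NOR alone:

```python
def nor(a: bool, b: bool) -> bool:
    """Peirce's arrow: true only when both inputs are false."""
    return not (a or b)

def not_(a: bool) -> bool:
    # NOT x == x NOR x
    return nor(a, a)

def or_(a: bool, b: bool) -> bool:
    # x OR y == NOT (x NOR y)
    return nor(nor(a, b), nor(a, b))

def and_(a: bool, b: bool) -> bool:
    # x AND y == (NOT x) NOR (NOT y)
    return nor(nor(a, a), nor(b, b))

# Exhaustive check over the full truth table
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert or_(a, b) == (a or b)
        assert and_(a, b) == (a and b)
```

Since {NOT, AND, OR} can express any Boolean function, NOR alone suffices; the same construction works for NAND.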
Big nod. I've been trying to register our company to sell customized products. It's been quite an ordeal with document rejections etc., and in the end they just said the rejection is final. No support, no appeals, no transparency. Yet those ALLCAPS companies seem to have no trouble.
We battled https://learn.microsoft.com/en-us/answers/questions/1331370/... for over a year, and finally decided to move off since there was no resolution. Unfortunately our API servers were still behind AFD, so they were affected by today's stuff...
Impressive, but I can't believe we went from fixing bugs to coffee-grounds-divination-prompt-guessing-and-tweaking when things don't actually go well /s
It would be great if they provided at least some guidance on how to keep this thing on topic. Even the official demo https://chatkit.world/ is not restricted; it happily chats about whatever.
See that screenshot. It certainly shows you when your 5-hour session is set to refresh; in my understanding it also attempts to show how you're doing with other limits via a projection.
It's not exactly the same thing, but imagine my complete surprise when, in the middle of a discussion with Copilot and without warning, it announced that the conversation had reached its length limit and I had to start a new one with absolutely no context from the current one. Copilot has many, many usability quirks, but that was the first that actually made me mad.
ChatGPT and Claude do the same. And I have noticed that model performance can often degrade a lot before such a hard limit. So even when not hitting the hard limit, splitting out to a new session can be useful.
Context management is the new prompt engineering...
One problem I find is that a lot of educational content has moved into YouTube and videos (monetization be damned). I have no time to watch 10 minutes of rambling and ads for a quick tip; LLMs are great at distilling the info. Otherwise, I agree: deep knowledge building only happens through doing stuff…
A quick Google search brings up https://github.com/pr701/nor_vm_core, which has the basic idea.