The only ways that comprehending emotions wouldn't belong in its own category of intelligence would be if everyone were equally capable of deducing the emotional state of others, if performing such deduction were not something intellectual, or if such deduction were strictly a consequence of existing intellectual categories.
>The only ways that comprehending emotions wouldn't belong in its own category of intelligence would be if everyone were equally capable of deducing the emotional state of others
Not every skill gets a whole category of intelligence.
>that such deduction is strictly a consequence of existing intellectual categories
>But why this matters: is there a challenge judging intelligence across cultures?
I don't know for sure, but my own anecdotal experience is that yes, there most certainly are challenges when a person from one culture assesses the intelligence of someone else from another culture.
It would be nice to know whether this is supported by scientific evidence, or whether this is simply my own personal bias at play.
I just looked into this a bit because I thought he still had some kind of role at Microsoft even after stepping down as CEO and chairman, but it turns out that in 2020 he left all positions at Microsoft while it was investigating an inappropriate relationship he had with a Microsoft employee.
Before that he had a role as a technical advisor and sat on the board of directors.
I also found it interesting that Steve Ballmer owns considerably more of Microsoft than Bill Gates (4% for Steve Ballmer while Bill Gates owns less than 1%).
Without a significant amount of needed context that quote just sounds like some awkward rambling.
Also, almost every feature added to C++ adds a great deal of complexity: modules, concepts, ranges, coroutines... It's been six years since these were standardized, and all the main compilers still have major bugs and quality-of-implementation issues.
I can hardly think of any major feature added to the language that didn't introduce a great many footguns, unintended consequences, or significant compilation-performance issues... singling out contracts is unusual, to say the least.
Because Disney's deal was specifically and exclusively related to Sora, OpenAI's bizarre attempt at a TikTok-like social networking site built around AI-generated videos.
It was not a deal that allowed the use of Disney's characters for general-purpose AI-generated content using OpenAI tools.
Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
I have not found this to be the case. My company has some proprietary DSLs, and we can provide the spec of the language with examples, and it manages to pick it up and use it in a very idiomatic manner. The total context needed is 41k tokens. That's not trivial, but it's also not that much, especially with ChatGPT Codex and Gemini now providing context lengths of 1 million tokens. Claude Code is very likely to soon offer 1 million tokens as well, and by this time next year I wouldn't be surprised if we reach context windows 2-4x that size.
The vast majority of tokens are not used for documentation or reference material but for reasoning/thinking. Unless you somehow design a programming language so drastically different from anything that currently exists, you can safely bet that LLMs will pick it up with relative ease.
> Claude Code is very likely to soon offer 1 million tokens as well
You can do it today if you are willing to pay (API or on top of your subscription) [0]
> The 1M context window is currently in beta. Features, pricing, and availability may change.
> Extended context is available for:
> API and pay-as-you-go users: full access to 1M context
> Pro, Max, Teams, and Enterprise subscribers: available with extra usage enabled
> Selecting a 1M model does not immediately change billing. Your session uses standard rates until it exceeds 200K tokens of context. Beyond 200K tokens, requests are charged at long-context pricing with dedicated rate limits. For subscribers, tokens beyond 200K are billed as extra usage rather than through the subscription.
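The tiered billing in the quote can be sketched as a small cost calculator. The 200K threshold comes from the quote; the per-million-token rates below are hypothetical placeholders, and the assumption that the long-context rate applies to the whole request (rather than only the tokens past the threshold) is my reading of the quote, not a confirmed detail:

```python
def request_cost(input_tokens: int,
                 standard_rate: float,
                 long_context_rate: float,
                 threshold: int = 200_000) -> float:
    """Estimate input cost for one request under the tiered scheme quoted
    above: standard rates until the request exceeds the threshold, then
    long-context pricing. Rates are dollars per million tokens.
    Assumption: the long-context rate applies to the entire request."""
    rate = standard_rate if input_tokens <= threshold else long_context_rate
    return input_tokens / 1_000_000 * rate

# With hypothetical rates of $3/M standard and $6/M long-context:
# a 100k-token request stays on the standard rate, while a 300k-token
# request crosses the 200k threshold and is billed at the higher rate.
print(request_cost(100_000, 3.0, 6.0))
print(request_cost(300_000, 3.0, 6.0))
```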
I wouldn't say, strictly speaking, that I've written no code, but the amount of code I've written since "committing" to Claude Code in February is absolutely minuscule.
I prefer having Claude make even small changes at this point, since every change it makes teaches it something about my coding conventions, standards, interpretations, etc. It picks up on these little corrections and commits them to memory, so in the long run you end up not having to make any little changes yourself.
And to drive this point further, even prior to using LLMs, if I review someone's work and see even a single typo or something minor that I could probably just fix in a second, I still insist that the author is the one to fix it. It's something my mentor at Google did with me which at the time I kind of felt was a bit annoying, but I've come to understand their reason for it and appreciate it.
If you know the change you want to make, why wouldn't you just make it yourself?
It seems like people who concede control to an AI are mostly people who didn't feel in control in the first place, and for whom keeping every detail intentional is no longer a priority.
Sort of... Claude Code writes to a memory.md file that it uses to store important information across conversations. If I review mine, it has plenty of details about things like coding conventions, structure, and the overall architecture of the application it's working on.
The second thing Claude Code does: when it reaches the end of its context window, it runs /compact on the session, which takes a summary of the current session, dumps it into a file, and then starts a new session from that summary. It also retains logs of all previous sessions that it can search through.
Looking over my Claude Code session, out of the 256k tokens available, about 50k are used for "memory" and session summaries, leaving 200k tokens to work with. The reality is that the vast majority of tokens Claude Code uses are for its own internal reasoning, as opposed to being "front-end" facing, so to speak.
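The compact-and-restart behavior described above can be sketched roughly. Everything here is a simplified model: `count_tokens` is a crude 4-characters-per-token stand-in for a real tokenizer, and `summarize` is a hypothetical placeholder for what is, in Claude Code, itself an LLM call:

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def summarize(messages: list[str]) -> str:
    # Hypothetical summarizer; in Claude Code this is an LLM call whose
    # output is dumped to a file and used to seed the next session.
    return "SUMMARY: " + " | ".join(m[:20] for m in messages)

def append_with_compaction(session: list[str], message: str,
                           budget: int = 200_000) -> list[str]:
    """Sketch of /compact: when adding a message would push the session
    past its token budget, replace the session with a summary plus the
    new message, i.e. start a fresh session seeded by that summary."""
    used = sum(count_tokens(m) for m in session) + count_tokens(message)
    if used > budget:
        return [summarize(session), message]
    return session + [message]
```

The point of the sketch is just the shape of the tradeoff: the summary is much cheaper than the raw history, but it is lossy, which is why Claude Code also keeps the full logs of prior sessions around for searching.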
Additionally given that ChatGPT Codex just increased its context length from 256k to 1 million tokens, I expect Anthropic will release an update within a month or so to catch up with their own 1 million token model.
1. The closer the context gets to full the worse it performs.
2. The more context it has the less it weights individual items.
That is, Claude might learn that you hate long functions and add a line about preferring short functions. When that is the only thing in the context, it is likely to follow it very closely. But when it's one piece of a much longer context, it is much more likely to ignore it.
3. Tokens cost money, even if you are currently being subsidized.
4. You have no idea how new models and new system prompts will perform with your current memory.md file.
5. Unlike learning something yourself, anything you teach Claude is likely to start being controlled by your employer. They might not let you take it with you when you go.
Caching has so many caveats. The cache-expiration window is short; if you change a document in the context it clears the cache, and if you change anything in the prompt prefix it clears the cache. And there's no reason to think Anthropic will keep charging dramatically less for cached tokens in the future, once they start trying to make a profit.
Yeah, of course they do, because it saves them more money than they are passing on to you. That doesn't mean they are magically able to overcome the tradeoffs inherent to caching. All of the issues I mentioned will still invalidate your cache.
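Why any upstream edit invalidates the cache falls out of how prefix caching has to work: each cached entry is keyed by everything that came before it. A toy model of that keying, using a running hash (the hashing scheme here is illustrative, not Anthropic's actual implementation):

```python
import hashlib

def prefix_keys(blocks: list[str]) -> list[str]:
    """Toy model of prefix-based prompt caching: each cache entry is
    keyed by a hash of everything up to and including that block, so
    editing any earlier block changes every key after it - a cache
    miss for the rest of the prompt."""
    keys, h = [], hashlib.sha256()
    for block in blocks:
        h.update(block.encode())
        keys.append(h.hexdigest())
    return keys

# Editing the middle block leaves the first key intact but changes
# every key from that point on, so only the untouched prefix is reusable.
original = prefix_keys(["system prompt", "big document", "question"])
edited = prefix_keys(["system prompt", "big document v2", "question"])
```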