How much better is an LLM at mass surveillance? Obviously RAG with everyone's details in it is useful, but it's also likely prone to hallucinations. I'm not sure LLMs are the right kind of AI for even finding patterns in such data. As for letting LLMs autonomously kill people, they clearly won't be ready for that any time soon.
Does the administration really believe these AIs are like digital humans?
I've now moved to Claude and it's much better, actually. If, like me, you hate their font (Anthropic Sans), select System fonts in Claude's preferences, and you can use this snippet in Safari's Settings -> Advanced -> Stylesheet to make everything use your default system font:
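A minimal sketch of such a user stylesheet (the selectors here are an assumption, and note that Safari applies a user stylesheet to every site, not just claude.ai):

```css
/* Assumed sketch: force the system font stack everywhere.
   Safari user stylesheets are global, so this affects all sites. */
* {
  font-family: -apple-system, system-ui, sans-serif !important;
}

/* Keep code readable with the system monospace font. */
code, pre {
  font-family: ui-monospace, monospace !important;
}
```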
They won't have a decent response; this is the Internet, after all. I really enjoyed it, thanks for writing it, and I'll take a lot of it on board. I think in the future everyone will have their own software stack and AIs designed perfectly for them to do their work.
I imagine this will be popular in other countries too. Such an incredible product for the price. Does anyone have benchmarks comparing the A18 to, say, an M1?
I work at a tech-adjacent company with no middle management and no, it sucks even more. The work doesn't disappear; it simply gets divided and spread out over a lot more people, many of them with no real executive power. I don't even think this saves my employer any money in the long run.
Yeah. There's always such a lack of realpolitik in these discussions. They turn into endless bikeshedding about what a manager is supposed to do according to some ideology of management, rather than the reality of the decisions managers actually control and their tangible outputs.
It is simply marketing nonsense. What they really mean (I think) is that they support matrix multiplication (matmul) at the hardware level; given that AI workloads are mostly matrix multiplications, you'll get much faster inference (and some speedup in training too) on this new hardware. I'm looking forward to seeing how fast a local 96GB+ LLM is on the M5 Max with 128GB of RAM.
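To see why "AI is mostly matmuls" holds, here's a toy transformer layer in NumPy (sizes are illustrative, not tied to any real model): every heavy operation is a matrix multiply, which is exactly what dedicated matmul hardware accelerates.

```python
import numpy as np

# Toy single transformer layer: every expensive step below is a matmul.
# Dimensions are illustrative assumptions, not any real model's.
d, seq = 512, 128
x = np.random.randn(seq, d).astype(np.float32)

Wq, Wk, Wv, Wo = (np.random.randn(d, d).astype(np.float32) for _ in range(4))
W1 = np.random.randn(d, 4 * d).astype(np.float32)
W2 = np.random.randn(4 * d, d).astype(np.float32)

q, k, v = x @ Wq, x @ Wk, x @ Wv              # projections: three matmuls
att = (q @ k.T) / np.sqrt(d)                  # attention scores: matmul
att = np.exp(att - att.max(-1, keepdims=True))
att /= att.sum(-1, keepdims=True)             # softmax: cheap, elementwise
h = (att @ v) @ Wo                            # two more matmuls
out = np.maximum(h @ W1, 0) @ W2              # MLP: two more matmuls

print(out.shape)  # (128, 512)
```

Everything except the softmax and the ReLU is an `@`, so throughput on this workload tracks matmul throughput almost directly.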
We've already established in this thread that memory bandwidth isn't much greater than the M4 Max's - about 12%? However, I wonder whether batched inference will benefit greatly from the vastly improved compute. My guess is that parallel use of the same model will be a few times faster. So single-"threaded" use won't be much better, but if you want to run a lot of batch jobs, it'd be way faster.
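A back-of-envelope sketch of that intuition (all numbers below are assumptions, not measured M5 figures): single-stream decode is bound by reading the weights from memory once per token, while a batch amortizes that read across many streams until compute becomes the ceiling.

```python
# Roofline-style sketch: bandwidth-bound vs compute-bound decoding.
# All hardware and model numbers are illustrative assumptions.
params = 70e9          # hypothetical 70B-parameter model
bytes_per_param = 1.0  # ~8-bit quantized weights
flops = 2 * params     # ~one multiply-add per weight per generated token

bandwidth = 550e9      # assumed memory bandwidth, bytes/s (M4 Max class)
compute = 30e12        # assumed sustained matmul FLOP/s

def tokens_per_s(batch):
    # Weights are streamed once per decode step regardless of batch size,
    # but compute grows linearly with the batch.
    step_time = max(params * bytes_per_param / bandwidth,
                    batch * flops / compute)
    return batch / step_time

for b in (1, 8, 32):
    print(b, round(tokens_per_s(b)))
```

Under these made-up numbers, batch 1 is fully bandwidth-bound, and total throughput grows nearly linearly with batch size until the compute roof takes over - which is why a big compute bump helps batched workloads far more than single-stream chat.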
Better example, then: I assume the TV series 24 will be banned for its glorified depictions of torture and violence - real actors portraying torture as an effective investigation method in high-stakes anti-terror investigations, despite it being illegal both in the setting and in real life.
Tom and Jerry depicts two beings assaulting each other, which is bad.
Porn in this case depicts near-incest, which is also bad.
You could say Tom and Jerry is worse because it's aimed at kids, whereas porn is aimed at adults, whom we tend to trust to tell fantasy from reality.
The comparison was then moved forward to AI porn.
That has no real people.
So is that bad?
It's a legitimate question. This isn't about reality; it's about depiction. Real people depicting a thing crosses the line. A cartoon depiction does not. Where does AI stand?
But as we all know, the line is that one is violence whereas the other is sex, and you appear to think it's worse to depict a normal consensual act than to depict violence or murder.