1. The AI is not the "attorney," and it's the student's father, not the student, who is using ChatGPT to mount a legal case. So the title is wrong on both counts.
2. The university had already stated its reason clearly. Suing them will not work out, and it certainly won't get your kid admitted.
3. Is it racial discrimination if you happen to be a particular race and things don't go your way? If they were really biased, they wouldn't admit any students of Asian ethnicity to their programs, which is clearly not the case.
4. Saying ChatGPT+Gemini is equal to the US's top lawyers is absolutely insane.
If you compare bare-bones infra for the same amount of money, the difference is insanely high. We didn't want to sound arrogant by saying your IOPS go anywhere from 10x to 20x, or that you get 10x the dedicated CPU for the same price, etc., so we chose a very humble number: 5x. https://cloudexit.pro/why-exit.html has some actual real-world numbers.
> With Rust, the actual compiler did the babysitting. The agent could be sloppy and the language wouldn’t let the sloppiness ship.
If you really don't know Rust or its best design patterns (which the author says they don't), your agent can absolutely write garbage code that compiles and still fails. Rust doesn't make writing bad code impossible, nor is it some magical solution among programming languages. What a nonsense article.
> shouldn't it sit at the OS level, watch everything you do, and figure out what to automate by itself?
Read that again and really ask yourself if you want a private company to have access to all that and the ability to do whatever it wants with your system at the OS level.
On a smartphone, you're trusting Apple or Google to make the OS. They already can do anything they want with your system. Do you read every line of code in every security update?
> Krafton said in a statement that it disagreed with the ruling and was evaluating its options and remained focused on delivering the best possible game for fans.
This time they will use Claude or at least the pro version of ChatGPT.
Interesting that you mention non-coding AI apps, because this seems pretty trivial to do with any harness (keep a master file, update it across sessions, take snapshots, etc.).
Most non-coding AI tools are meant for general consumers who normally don't care if they have to do a new search each session, and the hacky memory features try to tackle this over the long term. Also, you can always supply the updated file with each prompt and ask the model to return a revised version of it (if you really want to do this with something like ChatGPT).
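A minimal sketch of that master-file harness, assuming a generic `call_model` function stands in for whatever chat API you use (everything here is my own hypothetical illustration, not any specific product's API):

```python
# Sketch of a "master file" harness: state lives in one file, each
# session sends it to the model and persists the updated version,
# with a timestamped snapshot for rollback. `call_model` is a stub.
from pathlib import Path
import shutil
import time

STATE = Path("master.md")
SNAPSHOTS = Path("snapshots")

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call; echoes the state section back
    so the sketch stays self-contained and runnable."""
    return prompt.split("=== STATE ===\n", 1)[1]

def run_session(user_request: str) -> str:
    state = STATE.read_text() if STATE.exists() else ""
    if STATE.exists():  # snapshot before mutating, so sessions can be rolled back
        SNAPSHOTS.mkdir(exist_ok=True)
        shutil.copy(STATE, SNAPSHOTS / f"{int(time.time())}.md")
    prompt = (f"{user_request}\n"
              "Update the state below and return the full revised file.\n"
              f"=== STATE ===\n{state}")
    updated = call_model(prompt)
    STATE.write_text(updated)  # carries over to the next session
    return updated
```

The point is just that persistence across sessions is a few lines of glue, not a deep capability the model vendor has to ship.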
And I think it's a bit hyperbolic to extrapolate from this to "software philosophy is changing." Don't most apps still work on documents/data? Not sure what you meant there.
This one-shot generated content is low-quality and unfocused. I hate to use the word, but: garbage. That's why sometimes when you see a picture, a piece of video, or even some text, you can smell that it's AI. And the AI smell stinks.
Genuinely useful AI content generation (in this case, toward the financial goal) is a long-term effort, and current AI agents are not good at it. That's why the author created this platform (or app) to help people get there.
I think the reason AI-generated content is garbage is that the people who use AI to create it have no general content-creation skills, and bad taste besides. I know what good and bad content look like, so I can work with an AI to rapidly iterate and strip out all the bad parts it generates.
But most people who one-shot content barely bother to read it, let alone edit or polish it. That's why it stinks. I doubt that better AI agents will solve this. Writing powerful content is actually a pretty hard and subjective task; writing boilerplate marketing copy is very easy and mostly objective.
If the idea is that people who don't understand content will use AI to generate content, then it's logical to conclude they won't understand why their content stinks either, and will keep publishing it.
I feel like almost 100% of published content (even in high-tier publications like The Guardian) has AI involved in its creation, but strong editorial processes prevent shit like "It's not LIKE THIS BUT LIKE THIS!" from appearing 100 times in a 500-word article.
People need to understand that writing is a skill and an art. AI can emulate it, but so far no model seems to have genuine creativity or skill in its writing unless you wrestle with it (and at that point it's YOUR skill that's making the AI better).
> Most non-coding AI tools are meant for general consumers who normally don't care if they have to do a new search each session + the hacky memory features try to tackle this over the long term.
They do care. The vendors don't. Or rather, they're not prioritizing it.
Technically, two of the major players own office suite software into which they integrate AI, and it's kind of starting to become usable for something. But it's still a bit ridiculous that mainstream tools offer nothing between single-shot document output and full Computer Use in Claude Desktop. Multi-document editing wouldn't be that hard to build as an extension of the "canvas" mode in these tools.
> Like most apps still work on documents/data? Not sure what you meant there
I mentioned mobile. Most apps don't work on document files; they work on private databases, and collaborate with the OS to keep you from accessing your data directly.
It is absolutely a philosophical change. The core unit of computer use in desktop era was a file. Data belonged in files, which were owned by users. Applications were used to operate on those files. In the mobile era, the core unit of computing is an app, which owns the data, and may or may not graciously let other apps access a partial, non-canonical copy by means of "Share" menu.
> MoffettNathanson runs the numbers and comes to the conclusion that YouTube’s estimated $62 billion in 2025 will have allowed it to pass The Walt Disney Co.’s media business, which generated $60.9 billion last year (excluding Disney’s lucrative experiences division).
Just for reference, in fiscal 2025 the experiences division generated a casual $36 billion, at a pretty high profit margin.
This really doesn't seem like an apples-to-apples comparison. YouTube is nothing like Disney fundamentally.
Also, I'm pretty sure Disney generates more revenue from merchandise and from licensing its media for derivative products than it does from the media itself.
Update — ran a real bench on the live cluster (59 memories: 8 canonical facts × 3-4 paraphrases + 6 seeded contradictions + 20 distractors). Numbers:
duplicates per query (top-10): 0.9 → 0.0
top-result correct: 75% → 87.5%
11 consolidations in 80ms
conflicts detected: 0 of 6 seeded ← this one matters
Turns out conflict detection runs on graph edges, and /v1/remember doesn't auto-extract entities — so contradictions sit there invisibly until you explicitly call relate. That's a UX gap, not a missing feature, but it breaks the "drop memories in, get contradictions out" mental model. Filed as issues #1 and #2. Dataset + script + raw results: https://gist.github.com/spranab/49c618d3625dc131308227103af5.... Honest benches surface the kind of thing demos hide; thanks for pushing.
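For what it's worth, a "duplicates per query (top-10)" number like the one above can be computed with something along these lines (my own sketch, assuming retrieved hits have already been mapped to canonical memory IDs; this is not the benchmark script itself):

```python
# Average count of redundant hits per query: the same canonical memory
# showing up more than once inside a query's top-k results.
from collections import Counter

def dupes_per_query(results: dict[str, list[str]], k: int = 10) -> float:
    """`results` maps each query to the canonical memory IDs of its
    top-k retrieved hits; returns the mean number of duplicate hits."""
    total = 0
    for hits in results.values():
        counts = Counter(hits[:k])
        total += sum(c - 1 for c in counts.values())  # extras beyond the first
    return total / len(results)
```

Going from 0.9 to 0.0 on this metric means consolidation collapsed each cluster of paraphrases down to a single surviving hit.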
I started my home server journey with Plex, but it just kept getting worse, forcing me to switch to Jellyfin, which IMO works just as well and hasn't yet fallen into the whole "pay us to stream your own media" business practice. Paywalling such a core feature was pretty harsh.
It may seem "harsh", but this is simply the reality of using proprietary software. You don't have any control over it, and unless you stick with a particular version, it can change at any time (sometimes called a "rugpull"). And with anything internet-connected, it's not usually a good idea to stick with an old version because of security issues.
With open-source software, this just isn't a problem. Even if the company behind it decides to turn evil, the community can fork it and continue on. Just look at Emby for example: it did a rugpull and changed to a proprietary license, so the community forked it and made Jellyfin.
To say the purpose of a market is to reveal insider information is how you say insider trading is a good thing without saying insider trading is a good thing.
There's a ton of scholarly debate about it, and at least most of the early stuff was pretty earnest. But rather than the debate becoming more refined and nuanced over time it seems to have bifurcated along partisan (or partisan adjacent) lines like everything else, similar to the Keynesian/Misesian divide.
The proof that free markets are efficient (even in the narrow sense economists use this word) relies on an assumption of perfect information. This has been known at least since Akerlof.
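For anyone who hasn't seen it, Akerlof's "market for lemons" argument is easy to make concrete. A toy version with standard textbook parameters (nothing from this thread): quality q is uniform on [0,1], a seller accepts price p only if q ≤ p, and buyers value quality q at 1.5q, so buyers bid 1.5·E[q | q ≤ p] = 0.75p.

```python
# Toy "market for lemons": iterating the buyers' best response
# p -> 0.75 * p shows price, and with it trade, collapsing to zero,
# even though with perfect information every car would trade.
def unravel(p: float = 1.0, rounds: int = 50) -> float:
    for _ in range(rounds):
        avg_quality = p / 2    # E[q | q <= p] under Uniform[0, 1]
        p = 1.5 * avg_quality  # buyers' willingness to pay
    return p
```

That collapse, not any moral claim, is what "inefficient under imperfect information" means here.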
The Misesian folks are a lost cause, IMHO. They're hardcore rationalists, self-indulging in circular moral arguments from assumptions that don't apply in the real world.
That's what makes the insider trading argument so tantalizing--it's arguing that it helps move the market closer to perfect information. But, of course, the world is complicated and dynamic, and it tacitly depends on all kinds of assumptions and beliefs about the resulting costs and benefits. It would be nice if the debate shifted to pinning down those assumptions, quantifying them as best as possible, and then iteratively tweaking and adjusting regulatory models. But that's true of just about everything and probably too unrealistic an ask, especially at a time when one side is convinced markets are just a mechanism for unjust exploitation, and the other side is convinced regulation is what sustains inequity (to the extent inequity is something even worth caring about).