Hacker News | qrian's comments

yeah same here


My understanding was that Christopher Alexander called the quality without a name "wholeness" later in his life. Does that mean something different from the "resonance" in this article?


itertools.count is probably what OP is looking for
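For anyone unfamiliar, a minimal sketch of what `itertools.count` gives you (an unbounded arithmetic counter, handy where `range` would need an end point):

```python
import itertools

# count(start, step) yields start, start+step, start+2*step, ... forever;
# consume it lazily with next() or islice, never with list().
counter = itertools.count(start=10, step=2)
first_five = [next(counter) for _ in range(5)]
print(first_five)  # [10, 12, 14, 16, 18]
```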


This sounds like the concept of ‘normal science’ in paradigm theory.


Bedrock has a batch mode, but only for Claude 3.5, which is about a year old at this point, so it isn't very useful.


I'd rather have teachers assume stupidity than malice when it comes to students using AI to essentially skip their learning.


I can totally believe that they deployed it because internal metrics looked good.


I see a lot of people getting confused: the contention here is not that ChatGPT helped prepare for martial law in any way, but that someone knew about it before it happened. Not really related to ChatGPT IMO.


Direct link to the Google translate for anyone else who can't read Korean [0]. This comment is correct, and the English headline is confusing, especially for English-speaking readers of HN who don't have context for who actually made the decision and why it would be controversial that the head of the guard knew about it ahead of time.

> At 8:20 p.m. on December 3rd last year, when Chief Lee searched for the word, the State Council members had not yet arrived at the Presidential Office. The first State Council member to arrive, Minister of Justice Park Sung-jae, arrived at 8:30 p.m. It is being raised that Chief Lee may have been aware of the martial law plan before them. Martial law was declared at 10:30 p.m. that night.

[0] https://www-hani-co-kr.translate.goog/arti/society/society_g...


So why is it significant?


No idea; we'd need someone from Korea to clarify what the expectations are here. The news story just assumes that you know why it would matter.


It sounds like the person in question, the head of the presidential guard(?), had previously claimed that he only learned about Yoon's martial law declaration when it was proclaimed on TV. But if he was asking ChatGPT about it even before the cabinet meeting that decided on it, that means he was lying.

Considering that the whole affair is now considered treason, and that we know of memos talking about "collecting persons of interest, put them in a ship and explode it" (no, seriously), there's a very good chance that the inner cabal who planned the coup will get life sentences or worse.

(I'm not sure how important the person mentioned in the article was - there are just too many bastards. It does seem like a random article to show up on HN.)


So one person who claimed to be outside the plot was caught being on the inside, right?


Yeah


He turned to ChatGPT to find out what to do if martial law was declared. Of course, this isn't ChatGPT's fault - it's just a black comedy. Lol


The relation is the trust people place in things like ChatGPT.

That’s the dangerous part.


If there were no ChatGPT we'd be reading about a Google search here instead (or more likely we wouldn't, because it wouldn't be interesting enough to get traction among non-Koreans on HN). If the quotes in TFA are accurate he wasn't having a conversation with ChatGPT about it, he appears to have just entered some keywords and been done with it (and if he had had a conversation, it sure seems like that would come out!).

We can't infer any amount of trust from this episode except the trust to put the data into ChatGPT in the first place, and let's be honest: that ship sailed long ago and has nothing to do with ChatGPT.


Tbh I often use it to get a starting point. If you ask it about, say, martial law, it'd likely mention the main pieces of legislation that cover it, which you can then turn to.


and then it hallucinates


Even if it does, at that early stage you'd just land on unrelated legislation. You'd notice pretty quickly that it's about a whole different topic.

The reason I do it in combination with normal search is that normal search often gets clogged up by third-party websites and at best leads you to only the main legislation. The LLM is likely to name the main legislation for you, so you can search for it directly by name, and to mention other major related pieces as well.


Is it much worse than trusting Wikipedia or another encyclopedia? Maybe it is easier to make ChatGPT give you bad advice while encyclopedias are quite dry?


ChatGPT can just send you something that is completely wrong, and you have no way of knowing. That's why it's bad. On Wikipedia, for example, there is page history, there are page discussions, rules about sourcing, actual sources, and you can see who wrote what. Additionally, it's likely someone knowledgeable has looked at the EXACT text you're reading, with all implied and not implied nuances.

ChatGPT doesn't get nuances. It doesn't get subtle differences. It also gets large amounts of information wrong.


> ChatGPT can just send you something that is completely wrong, and you have no way of knowing.

This is true, if you decide to take a ChatGPT answer at face value without any further work. Personally I find it useful sometimes to ask an LLM a question, get an answer and the verify that answer for myself. Doing web searches and pulling together relevant information to get the answer for a question can be harder than getting an answer and then looking to verify it. Perhaps something like that was going on here, impossible to know of course.


ChatGPT rarely gives you sources for anything outside of writing software and doing homework


Here's an example: when asked about path buffer length in a programming context, ChatGPT 4o claimed today that 256 bytes is sufficient for *most systems*. That's an entirely false claim, like, completely invalid. It only says this because that's the tone that is expected of it. You can clearly tell that the info it wanted to convey was "256 is sufficient [here]", but it LOVES making things sound more general than they are.

You aren't gonna look up whether that little detail is right; you're gonna slowly absorb more and more subtly false info.
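For the curious, here's a quick way to sanity-check that claim on a POSIX system (a Python sketch; on typical Linux systems the limit is 4096 bytes, far above 256):

```python
import os

# Ask the OS for its actual path-length limit at the filesystem root.
# On typical Linux systems this reports 4096 (PATH_MAX in limits.h),
# so a 256-byte buffer is nowhere near "sufficient for most systems".
path_max = os.pathconf('/', 'PC_PATH_MAX')
print(path_max)
```

And that's before considering that many filesystems and APIs impose different limits of their own, which is exactly the kind of nuance that gets flattened.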


The point of it is that I don't have to check. Otherwise I've just added an extra round of typing and validation.

Plus, now I've been biased by the immediate response. If it says "these CVEs don't have vulnerabilities", then I'm thinking they're probably okay and just need validation, instead of starting from zero and doing due diligence. This will lead to confirmation bias or laziness.


Everyone sees the same Wikipedia. What if ChatGPT or Grok gave a different answer to constitutional questions if the user's IP were, say, from a DoD network? Nobody would know.


I do not have the same trust in Wikipedia. My experience as an editor is that for each page there are a few people who think they own the page, and they remove any edit that affects their text.

Actually, there is an incentive to remove edits in Wikipedia if you want to be part of the ego-fueled bureaucracy that considers WP as their property.


Humans share the same faults.


With some humans, you can at least rely on their humility and ability to say "I don't know". This is a positive trait in people and I would rely on such honest people much more than on anyone who has all the answers to everything.

The machine seems to be unable to say or even detect that it does not know. At the same time, it communicates in flawless English (or whatever the current setting is), which is a trait we tend to associate with highly educated people from the real world. This short-circuits our bullshit detectors a bit.


> With some humans, you can at least rely on their humility and ability to say "I don't know". This is a positive trait in people and I would rely on such honest people much more than on anyone who has all the answers to everything.

You might, and I try to. Humanity as a whole? In practice, highly confident people who are totally sure but wrong, still get listened to over people who are humble and aware of their limits.

Humans also short-circuit each other's BS detectors.


The bias to assume that computers are going to produce correct answers is extremely strong.

People intuit that Wikipedia is written by people, so they can apply that knowledge appropriately.

For some reason, most people have a knee jerk reaction to a fully synthetic statement that biases them strongly towards the assumption of veracity.

I always think of LLMs as "my functioning alcoholic veteran friend Bob, who has several PhDs and was blown up a couple of times in Iraq". That seems to be a good framework for intuiting the usefulness of LLM-generated output.


"The bias to assume that computers are going to produce correct answers is extremely strong."

This. We know that computers are very good at actual computation, and we don't expect them to go completely haywire in conversations either.

Though this is beginning to change, with the observation of just how blatant some of the hallucinations are, accusing random people of serious crimes etc. But the pro-computer bias is still strong.

There was an awful case of a system in the UK which accused postal officers of fraud. The software malfunctioned, but people were indicted and punished by courts relying on the infallibility of computers, and some of the innocent victims committed suicide out of shame.


They don't.

1. LLMs are put in a position where everything they say sounds like it is based on encyclopedic knowledge of absolutely everything

2. LLMs try to use language that is very general, helpful and friendly, and as a result end up not properly portraying nuances, like "sometimes", "in this case", "not always", etc.

3. Humans are capable of saying "I don't know", or "I think XYZ but I'm not sure"

4. Humans convey that they aren't sure by lack of nonverbal confidence

These are differing sets of skills and issues. LLMs don't behave like humans, they don't solve things like humans, and people take what they say at face value by default.


Have you used ChatGPT to investigate something you're knowledgeable about?

ChatGPT is consistently lying (hallucinating), sometimes in small ways and sometimes in not so small ways.


Yes it’s much worse. With Wikipedia we all see the same output and can review it together.


Yes. It's much much worse.


Another dangerous part is how people find out what other people do on their computers.


I thought the problem was that he didn’t use Claude. Clearly he doesn’t pass the vibe test.


Very interesting... I see a lot of parallels with my higher-ed startup - regulatory moat, fragmented market almost like consulting, and super long sales cycles...

Maybe I should be more mindful of integration cost than I am now.


For context, Keygen allegedly had $195.4K revenue and 100 customers in 2024.[1]

[1]: https://getlatka.com/companies/keygen


Keygen had 100 paying customers 6 years ago. I don't report Keygen's revenue publicly.

