People don't know when they don't know and often inflate their knowledge unknowingly. I'm not saying we can't do it at all.
I'm saying we're not great at it. There's research showing we can't even be trusted to accurately say why we made certain decisions or performed certain actions; it's post-hoc rationalization. If you convince someone they made a different decision than they actually did, they'll invent a justification for it on the fly.
When humans say "I've made a guess and this is how likely it is to be true", the graph is closer to the right than the left.
You are still talking about a different concept entirely. For example, if I take this test, every single answer I give is a guess. I am 100% certain of this.
This test is explicitly asking people things they don’t know.
>You are still talking about a different concept entirely.
I am not.
>For example, if I take this test, every single answer I give is a guess.
Just look at the graph, man. Many answers are given with 100% confidence and then turn out to be wrong. If you give a 100% confidence response, you don't think you're guessing.
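The graph being argued over is a calibration plot. Here's a minimal sketch of what such a plot computes, with entirely made-up `(confidence, correct)` pairs for illustration: bucket answers by stated confidence, then compare each bucket's stated confidence to its actual accuracy.

```python
# Hypothetical (stated confidence, was the answer correct?) pairs from a
# trivia test. The numbers are invented to illustrate overconfidence.
answers = [
    (0.5, True), (0.5, False),                             # 50% bucket: 1/2 correct
    (0.7, True), (0.7, False), (0.7, False),               # 70% bucket: 1/3 correct
    (1.0, True), (1.0, True), (1.0, False), (1.0, False),  # "certain" bucket: 2/4 correct
]

def calibration(pairs):
    """Group answers by stated confidence and return actual accuracy per bucket."""
    buckets = {}
    for conf, correct in pairs:
        buckets.setdefault(conf, []).append(correct)
    return {conf: sum(outcomes) / len(outcomes)
            for conf, outcomes in sorted(buckets.items())}

print(calibration(answers))
```

A perfectly calibrated respondent would show accuracy equal to stated confidence in every bucket. The 100%-confidence bucket landing at 0.5 accuracy is exactly the "certain but wrong" case under discussion.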
>I am 100% certain of this.
You are wrong. Thank you for illustrating my point perfectly.
I don’t get how you’re failing to see the difference between knowing that you have uncertainty at all and being precise about uncertainty when making a guess.
How can you possibly assert that I confidently know the answers to the questions on the test? That makes zero sense. I don’t know the answers. I might be able to guess correctly. That doesn’t mean I know them. It is decisively a guess.
What’s your mom’s name? Observe how your answer is not a guess, hopefully.
>I don’t get how you’re failing to see the difference between knowing that you have uncertainty at all and being precise about uncertainty when making a guess.
I'm not failing to see that. I'm saying that humans can be wrong about whether some of their assertions are guesses or not. They're not always wrong, but they're not always right either.
If you make an assertion and you say you have 100% confidence in that assertion...that is not a guess from your point of view. I can say with 100% confidence that my mother's name is x. Great.
So what happens when I make an assertion with 100% confidence...and turn out to be wrong?
Just because you know when you are guessing sometimes doesn't mean you know when you are guessing all the time.
Another example:
Humans often unknowingly rationalize the reasons for their decisions after the fact. They believe those stated reasons are true, not rationalizations.
They can be completely confident about a memory that never happened.
You are constantly making guesses you don't think are guesses.
Making an assertion while being wrong does not mean you were guessing. You were simply wrong. Yet the vast majority of the time, when we are not guessing, we are correct. And when we are guessing, we can convey the ambiguity we feel. Guessing is not defined by the guarantee of accuracy.
LLMs struggle to convey uncertainty. Fine-tuning has made them more aggressive about pointing out gaps, but they don’t really know what they know, even if the probabilities vary under the hood. Further, ask one if it’s sure about something and it’ll frequently assume it was wrong, even if it proceeds to spit out the same answer.
If you asked most of the participants in this paper, they'd tell you straight-faced, fully believing it, that decision x was the better choice, and give elaborate reasons why.
The clincher in this paper (and similar others) isn't that a person makes a decision and doesn't know why. It's that they have no idea why they made the decision, and don't realize they don't know. They believe their own rationalization.
I'm not the person you're arguing with, but going back to the original meta-point of this thread, I too think you're vastly over-estimating people's introspective power on their internal states, including states of knowing.
The distinction you're drawing between "guessing" and "being sure of something but being wrong about it" is hazy at best, from a cognitive science point of view, and the fact that it doesn't _feel_ hazy to a person's conscious experience is exactly why this is interesting and maybe even philosophically important.
More briefly, people are just horseshit at knowing themselves, their motivations, their state of knowledge, the origins of their knowledge. We see some of these 'failures' in LLMs, but we (as a general rule, the 'royal we') are abysmal at seeing it in ourselves.
To be fair we don't know what we know, either. Epistemology is the bedrock that all of philosophy ultimately rests on. If it were a solved problem nobody would talk about it or study it anymore. It's not.
One of the most interesting things about current ML research is that thousands of years of philosophical navel-gazing is suddenly relevant. These tools are going to teach us a lot about ourselves.