> In my experience, taking into account the opinions of such people has been the worst mistake of my life. I'm still working on the means to fix its consequences, as much as they are fixable at all.
This sounds very cryptic. Can you give an example?
My experience is that it tries to look at your situation in an objective way, and tries to help you analyse your thoughts and actions. It comes across as very empathetic though, so there can be a danger if you are easily persuaded into seeing it as a friend.
Hmmm, I didn't know that... so a machine is not human, is your point? Look, I know it doesn't try, just like a sorting algo does not try to sort, an article does not try to convey an opinion, and a law does not try to make society more organized.
That analysis is so reductive that it is almost worthless. Technically true, but very unhelpful in terms of actually using an LLM.
It is a first principle though, so it helps to “stir the context window's pot” by having it pull in research and other shit from the web that will help ground it, so it doesn't just tell you exactly what you prompt it to say.
It is much easier to share personal feelings with an LLM, I found. It also tries to keep me happy to keep the conversation going, but to me its advice feels mostly 'objective', or at least the most socially acceptable, e.g. keeping a good relationship is more important than trying a new one with someone else just because you 'feel something' around them. With me, it tried to work out together the sources or causes of that feeling, e.g. you recognize parts of yourself in someone else, or in the past you had very good or very bad experiences around such an encounter.
Weird, I am using Copilot and it steers me mostly towards self-reflection and tries to look at things objectively. It is very friendly and comes across as empathetic, so as not to hurt your feelings; that is probably baked in to keep the conversation going...
the notion that "contains —" ~= "AI generated" is a really dumb popular misconception: dashes have existed for hundreds of years. just because many people use them incorrectly or treat the hyphen as if it's some universal dash doesn't change that.
Strunk & White taught me to use em dashes in something like elementary or middle school [1] — it's not hard to understand how to use them or type them... I'm baffled as to why people act like it is.
I've been using a reasonable gamut of Unicode punctuation in English for I think the majority of my life now as well—including this very comment, https://news.ycombinator.com/item?id=19365079 from 2019, and the above comment where I typed a horizontal ellipsis. I tend to attribute it to taking my language usage from relatively formal sources and being a desktop Linux user with a Compose key. I used to constrain myself to ASCII for email and source code, though, and would use TeX-like “--” and “---” and such instead; sometimes I would also just do that when temporarily on some setup where accessing the real stuff was harder.
But then, people have also been asking me whether I'm an AI for over twenty years, so…
Like it or not, current LLMs really like em-dashes, so usage of them is quite a lot of Bayesian evidence in favor of the author being an LLM. It's unfortunate for the humans who use em-dashes, but that's how it is.
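The "Bayesian evidence" framing can be made concrete with Bayes' rule. A minimal sketch follows; every probability here is an invented assumption purely for illustration, not a measured rate:

```python
# Illustrative Bayes-rule sketch: how much does seeing an em-dash
# shift the odds that a comment was LLM-written?
# All three probabilities below are made-up assumptions.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Update P(H) after observing evidence E, via Bayes' rule:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

prior_llm = 0.10    # assumed prior: 10% of comments are LLM-written
p_dash_llm = 0.60   # assumed: LLMs emit em-dashes often
p_dash_human = 0.05 # assumed: few human commenters type them

# With these made-up numbers, one em-dash raises P(LLM) from 10% to ~57%.
print(posterior(prior_llm, p_dash_llm, p_dash_human))
```

The point of the sketch: evidence that is weak in absolute terms can still shift the posterior a lot when the likelihood ratio between the two hypotheses is large.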
There's some truth in there: in a serious emergency you mostly need competence over empathy. But, to me, it's always love, empathy, and connectedness that win. Why are you commenting on HN? Is it to bring value in return for money or services or material goods? I don't think so. It is about human connection. In its absence, life or death makes not much difference anymore.
Love, empathy, and connectedness are things you have to put effort (lots of effort, in some cases) and skill (which has to be learned and honed) into expressing to the recipient in a way that gets through to them.
Your mention of "return for money or services or material goods" shows you didn't get the point of the article; the author plainly explains why it isn't about money in #4.
Well, I wrote 'services' as well. But anyway, since you were so kind as to take the effort of pointing out the point to me, I took the effort of watching half of Baldwin's monologue and reading the text below it in more detail. Very interesting; it makes me understand better why the article is still referred to today. The douchebags for whom some pretty women, knowingly or subconsciously, fall often have a skill beyond just their slick douchebag appearance and Schwung. So just being Mr. Nice Guy doesn't cut it. I can't help being a nice guy. Thanks for giving me food for thought!
In fact, entropy is relative to what we define as chaos versus what we define as order. When we can't explain the order, we call it chaotic, and for convenience we model it statistically instead of in absolute terms. I learned that a few months ago from an article posted on HN.
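The "entropy is relative to your description" idea can be illustrated with Shannon entropy as a stand-in for the thermodynamic kind: the same string looks maximally chaotic or perfectly ordered depending on how you partition it into symbols. A minimal sketch; the string and partitions are arbitrary choices for illustration:

```python
# Shannon entropy of the same data under two different "descriptions".
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """Shannon entropy in bits per symbol of an observed sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

data = "ABABABAB"

# Read one character at a time, the string is maximally "chaotic"
# for a 2-symbol alphabet: 1 bit per symbol.
per_char = shannon_entropy(list(data))

# Read in 2-character blocks, it is perfectly "ordered": every block
# is "AB", so the entropy is 0 bits per symbol.
per_pair = shannon_entropy([data[i:i + 2] for i in range(0, len(data), 2)])

print(per_char, per_pair)  # 1.0 0.0
```

Same data, two entropies: the number depends on the partition you chose, which is the relativity the comment is pointing at.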