Hacker News: layla5alive's comments

Funny, didn't iOS have that like 15 years ago, before they probably removed it?

I had the same interpretation - Maduro was a bad guy, but when the approach taken is akin to the "Wild West," it's hard to claim moral superiority - it devolves into different factions of goons with guns stealing from each other and murdering with impunity, "might makes right."

This stands in contrast to the ideals of a society based on laws and rules, where corruption is a notable exception.

We stand on the precipice of abandoning what the world worked so hard for decades to build...


Try watching the videos instead of Fox News or OANN.

Pretti tried to help a woman who was pushed down by masked agents; they then attacked and executed him.

Good tried to turn AWAY from the man with the gun and get out of the situation; he stepped in front of her and executed her, shooting even after she'd driven past him without hitting him, despite him having put himself into harm's way.


This is such a bad decision - it's infuriating. An incredible overreach of state power. This decision laughs at values such as liberty and freedom.


This has also been my experience - asking it to play devil's advocate for the other side of the argument, and asking it to assume the persona of "X relevant, highly rational expert with deep knowledge in the field," both have a lot of utility. You can do this in more than one dimension, too.
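The pattern described above can be captured as a reusable prompt template. This is only an illustrative sketch: the function name, the persona string, and the message format are my own assumptions (modeled on the common role/content chat-message shape), not any specific product's API.

```python
# Hypothetical sketch of devil's-advocate / persona prompting.
# Everything here (function name, persona wording) is illustrative.

def build_devils_advocate_prompt(claim: str, persona: str) -> list[dict]:
    """Build a chat-style message list asking a model to argue the other side."""
    system = (
        f"Assume the persona of {persona}. "
        "Play devil's advocate: steelman the strongest case AGAINST the claim "
        "below, then point out the weakest links in that counter-case."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Claim: {claim}"},
    ]

# Example usage - vary the persona to probe a claim from multiple dimensions.
messages = build_devils_advocate_prompt(
    claim="Microservices are always better than monoliths.",
    persona="a highly rational distributed-systems engineer",
)
print(messages[0]["role"])  # the system message carrying the persona
```

The "more than one dimension" point falls out naturally: call the builder several times with different personas (an economist, a security auditor, a skeptical end user) and compare the counter-arguments.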


It's not that LLMs are stochastic parrots and humans are not. It's that many humans often sail through conversations stochastic-parroting because they're mentally tired and "phoning it in" - so there are times when talking to the LLM, which has a higher level of knowledge, feels more fruitful on a topic than talking to a human who doesn't have the bandwidth to give you their full attention, and who also lacks the depth and breadth of knowledge. I can go deep on many topics with LLMs that most humans can't or won't keep up on. In the end, I'm really only talking to myself most of the time in either case, but the LLM is a more capable echo, and it doesn't tire of talking about any topic - it can dive deep into complex details, and catching its hallucinations is an exercise in itself.


Many other humans are... not very available - certainly many shut down when conversations reach a certain level of depth or require great focus or introspection.


Depth? Introspection?

I'd say these days the norm is to not simply shut down, but to become irrevocably and insidiously hostile, the moment someone hints at the existence of such a thing as "ground truth", "subjective interpretation", "being right or wrong" - or any of the bits and bobs that might lead one to discover the proper scary notion, "consensus reality".

"What do you mean social reality is constructed by the consensus of the participants? Reality is what has been drilled into my head under threat of starvation! How dare you exist!", et cetera. You've heard it translated into Business English countless times.

They are deathly afraid of becoming aware of their own conditioned state of teleological illiteracy - i.e. how they are trained to know what they are doing, but never why they are doing it. It's especially bad with the guys who cosplay US STEM gang.

One is not permitted a position of significance in this world without receiving this conditioning, and I figure it's precisely this global state of cognitive disavowal which props up the value of the US dollar - and all sorts of other standees you might've recently interacted with as if they're not 2D cutouts (metaphorical ones! metaphorical!).

PSA: Look up "locus of control" and "double bind". Between those two, you might be able to get a glimpse of what's going on - but have some sort of non-addictive sedative handy in case you do.


You had me on the first three paragraphs, but the last two veer so far off course that I've no idea what you're trying to say. Mind clarifying?


Yes


I think you will enjoy Guy Debord and Raoul Vaneigem.


Just like a certain defense minister was shown to enjoy D&G; after which the latter were never heard of again. Where'd they go, eh?

+1 for Vaneigem, he has a nice cryptohistory of Nälkä; and you might also want to check out Vilém Flusser.


> when conversations reach a certain level of depth or require great focus or introspection.

I mean... if the alternative is an LLM... you realise that the LLM isn't doing any focusing or introspection, right?


Any more context you're willing to share?


We really do love dirty laundry, don't we? I'm sure whatever the context is, it is deeply personal. Do you also have your popcorn ready?


Thank you. Yes, I'm going to refrain from airing out my dirty laundry. I made a bad decision, now I'm living with it, and more context doesn't actually change the intent behind my message: these tools are dangerous. Getting better, but still dangerous.


> Yes, I’m going to refrain from airing out my dirty laundry. I made a bad decision, now I’m living with it, and more context doesn’t actually change the intent behind my message

That’s not entirely true, as it’s currently impossible to actually gauge the severity of what the LLM seemingly enabled you into doing. There’s a difference between “I uncritically accepted everything it told me because it lined up with what I was hoping to hear” and “it subtly nudged me towards a course of action that was going to be obviously unwise after some consideration, but managed to convince me to skip this”; and also between that and “I took a risk, which I knew to be a risk, and which I knew to potentially expect to go bad, and the LLM convinced me to take it where I otherwise wouldn’t have”, and ALSO between that and “I took a risk, which I knew to be a risk, and which I knew to potentially expect to go bad, and if I’m perfectly honest, I might’ve taken it anyway without the LLM”.

Without any indication as to how your situation maps to any of these (or more), the warning is, functionally, not particularly useful.


Exactly all of this - I didn't have my popcorn out, I was genuinely curious about the nature of the risk being discussed. I find the post basically worthless without context - a wood stove is dangerous if you place your hand on top of it while it's hot, but not in the same way a grenade is dangerous if you accidentally remove the pin without understanding the consequences.


Yeah, my first thought (admittedly an absurd one) went to something along the lines of:

"I flipped a coin and the LLM called heads. I should have gone with tails..."


Was it a blatantly bad idea or was it some risk that triggered that would have been beyond your typical risk threshold otherwise?


If it’s too personal to share, maybe don’t mention it in the first place? People doing this online and IRL are attention-seeking.


This comment is Dunning-Kruger. Some overweight people are very unhealthy. Some thin people are very unhealthy. Some overweight people have genetics that prefer to store fat subcutaneously, where it's not very harmful. Some thin people have genetics which preferentially store fat in and around organs or muscles, which is incredibly damaging and leads to chronic inflammation and eventually T2D and atherosclerosis, among other conditions. Let's just say you can't judge a book by its cover, and biology is complex. Unless you know a person, keep your mouth shut and your mind open!

There are sedentary thin people who live on doritos and active heavy people who eat salads. There are ALL KINDS!

Over 3 gigabases is a lot of room for genetic diversity, don't you think?


Are you being cute impersonating an LLM, or are you an LLM posting?

