Hacker News | Freebytes's comments

I am often asked "What do you do?" when I meet someone new. I know they are asking about my job, but I throw off the expectations by saying, "Oh, I like to play video games and watch movies primarily." This is usually followed by, "Sorry, I meant what do you do for a living?" I will then, of course, tell them what they expect to hear; however, even the question "What do you do for a living?" implies that we live to work. I play video games and watch movies for living. I work merely to survive and buy the things that allow me to live my life the way I want.


maybe it implies some amount of "live to work" etymologically, but the word "living" in that context specifically refers to living in the financial sense (the act of living wouldn't really make sense in that sentence)

you live to play video games and watch movies, but you make a living by working


Using AI to write content is seen so harshly because it violates the previously held social contract that it takes more effort to write messages than to read messages. If a person goes through the trouble of thinking out and writing an argument or message, then reading is a sufficient donation of time.

However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.


This is very well put, and captures my feelings on it. I take it as disrespect that someone would have any expectation for me to read something they can’t be bothered to write. LinkedIn is a great example - my entire professional network is just spamming at this point, which drowns out others that DO put in any effort.


If it takes longer to read, it's not an AI problem, but the author failing to catch that the comment is too drawn out. I don't see how it is a problem to have AI write a comment if you agree with the content. If it is bad content, it will eventually reflect badly on the author anyway.


I skim 100 comments here every day. Good comments, bad comments, overly long comments, whatever; the time to read is low. I assume all those authors have a strong opinion or expertise on the subject that urged them to take the time to write that comment, which makes skimming Hacker News to keep a pulse on the world (imho) a valuable task. If, instead, most of those comments are composed by molt-bots, then I'm not getting a "real" view of the world. I don't care how good and concise the comments are; I'd be wasting my time reading about news that may not matter to anyone and opinions that may not exist.


When I have AI write things for me, I'm spending a good amount of time on it - certainly longer than it takes to read. I'm also usually editing it quite a bit. Maybe I'm an outlier, but I still don't think it's appropriate to make a blanket statement about using AI to write content violating this social contract you described.


Where does the line fall? I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought. Is that disrespectful? It doesn't feel so.


> I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought.

Better to post your stream of thought.

Using LLMs to turn streams of thought into prose is mostly just adding fluff and expanding the text to make it look more like thoughtful prose. What you get looks nice to the creator because they agree with what it's saying, but it wastes other readers' time as they have to dissect the extra LLM prose to get back to the author's stream of thought.

Just post what you're thinking, even if it's not elegant prose. Don't have an LLM wrap it in structures and cliches that disguise it as something else.


I strive to be understood, and my streams of thought are often weird and generally intractable. Nobody really wants to read that; nobody wants the deep threads required to explain it.

I value reading novel and interesting thoughts and ideas. I don't feel "tricked" when I read something of substance or thought provoking, even if LLM generated and decorated with the platitudes and common forms for dull readers.


Something I try very hard to impress on my PhD students is that the process of writing is part of the process of thinking. We often have cool things in our head that don't sound right when we write them down, and that's usually because the thing in our head was more amorphous than we realized. The time you put in getting the written expression of it to work is actually helping you crystallize what you're thinking in the first place.


I guarantee you that I would endlessly rather read your streams of thought about amateur boat building than read another AI-generated Hacker News comment ever again. Don't sell yourself short.


Thank you for that.


I get that feeling, and I’ll echo my sibling comment: I’d much rather read your stream of thought and get on that brain train with you than see some fluffed up and sterilized version.

I also think that having that authentic voice, while it does open us up to criticism and maybe being misunderstood, also gives us a way to receive actionable feedback to improve.

I think we all want to be understood, and for me part of that understanding is seeing the person. How you write is a part of who you are, and I hope you don’t feel like you need to suppress that.


Feel bad for the people who used to do that for you. Many people have difficulty expressing what they're thinking in words. Those people always feel happy when they see someone else say what they're thinking. If AI can do that now then you don't need them. No point in coming onto Hacker News and using AI to participate in playing that role when you can just talk to the AI. If too many people do this then Hacker News won't even be able to play a vestigial role.


Is it really that dire?

Is it more awful to expect every reader to decipher my rambling, disjoint thoughts? Yes, it is. And, it undervalues the substance of what I'm trying to say because the willing audience dwindles to triviality.


You're being self-deprecating. You might believe the way you think and formulate ideas isn't good enough but it's at least you. The more you filter your thoughts through AI the more that signal is lost. If I'm not talking to you then I might as well talk to the robot myself, and honestly, that's what I spend most of my time doing these days. So when I come to Hacker News hoping for human connection the last thing I want is to talk to the robot even more. You should also show more respect for your peers whose writing talents you envy. People who are good at writing prose are usually good at deciphering it too.


I sucked at writing myself. It's been my experience that practicing to become a better writer helped me structure my thoughts into something cohesive on the page, and I got better over time.


Sorry, but I prefer original human streams of thought. I now have a pretty darn good filter for ignoring AI gen text just like a filter for skipping over page ads.


> Where does the line fall?

For now I would argue when ai edits for you instead of helping you edit. Take a look at the examples that Dang posted if you have not yet: https://news.ycombinator.com/item?id=47342616

The first 5 I looked at were pretty egregious and not subtle.


Yes, I have also done the search and found that the beta on "LLM!" objections is very high; the accusations often seem as likely to be wrong as right.


As of this comment which ones are you finding wrong? 5 of the first 7 are confessed ai users, the other 2 look like ai to me too.


When I said "I have also done the search" I meant this simple one: https://hn.algolia.com/?dateRange=all&page=1&prefix=false&qu...


Dang's search is much more clear cut, and I think it is going to be a better guide to what the enforcement will look like.

Looking at your search, though, I think we have to exclude today, or at least this thread, to get a fair look at how "llm generated" is thrown around or not: https://hn.algolia.com/?dateEnd=1773187200&dateRange=custom&...

Most of the comments I saw on the first page are not accusations, but there are some. Two of the three I looked at were pretty clear cut, while the third was poorly written hype that looks like LLM output, though I have seen similar from humans before; in either case it was flagged appropriately.


> Is that disrespectful?

It is, by way of being extremely dishonest in at least two ways:

- there's no way you would do this if you were required to disclose that you used an LLM to write your comment.

- therefore, if your primary goal isn't communication, then you must be doing it to look smart and "win" the conversation

Same reason people desperately post links to scientific papers they don't understand in a frantic attempt to stay on top of some imaginary debate.


I guess, in theory, this can eventually be countered by people using LLM browser integrations to tell them whether comments are worth reading (and maybe to summarize long comments). Is anyone currently working on that? It might be interesting to see.


I don't believe that delegating reading comprehension to an LLM is really any better than delegating writing ability. In fact I'd argue it's worse to have an automation advising on what's worth reading or not.

There are a lot of people who have no time for something like Infinite Jest and even getting through the first few chapters is an effort. But at least they tried. An LLM excluding the possibility of reading this book because it is 1000 pages of postmodern absurdity effectively optimises away the fringes of human creativity and leaves only the average stuff behind.

AI slop detectors already exist and are no better than snake oil, because a person can have an LLM-smelling writing style without actually using AI. After all, LLMs were originally trained on human input.


First we would run into the spam-filter problem no different to email. Then we have to choose: do we concede to viewing the world through a lens of WhatEverAI, or train it locally on our own thoughts/views on the world, and hope that AI model is never compromised.


Well just have an AI read it for you then!

That reminds me of the Gmail LLM usage, where AI can write your emails for you and also summarize incoming ones. Maybe we lost the thread somewhere...


It's not just about the increase in volume, it's about the delta between the prompt and the generation.

If the generation merely restates the prompt (possibly in prettier, cleaner language), then usually it's the case that the prompt is shorter and more direct, though possibly less "correct" from a formal language perspective. I've seen friends send me LLM-generated stuff and when I asked to see the prompt, the prompts were honestly better. So why bother with the LLM?

But if you're using the LLM to generate information that goes beyond the prompt, then it's likely that you don't know what you're talking about. Because if you really did, you'd probably be comfortable with a brief note and instructions to go look the rest up on one's own. The desire to generate more comes from either laziness or else a desire to inflate one's own appearance. In either case, the LLM generation isn't terribly useful since anyone could get the same result from the prompt (again).

So I think LLMs contribute not just to a drowning out of human conversation but to semantic drift, because they encourage those of us who are less self-assured to lean into things without really understanding them. A danger in any time but certainly one that is more acute at the moment.


This reads as an AI comment to me. Anybody else?


AI has not been used to write any comment that I have ever posted on Hacker News. You can observe my previous comments over the years, even prior to the adoption of modern LLMs, which demonstrate how I communicate.

(While the patterns may be similar, I have a tendency to be more loquacious due to my larger token limit! %)


Just goes to show I'm a poor judge of what is written by AI.


On 4chan, a long time ago, comments like these would invariably get the reply "not ur personal army"

Think about that for a minute. 4chan would make fun of the comment you just made.


<https://news.ycombinator.com/item?id=46832601>

Email mods instead: hn@ycombinator.com


This is such an impossible to solve problem that every advanced nation on Earth has already solved it, except the United States.


I thought this was a post about graphics.


There needs to be a way to see how much it is being used then and not simply the life of the Sprite.


You can. There’s a usage dashboard


Where is it?


Merely choosing lines to copy and paste from one file of your own code to another is a learning experience for your brain. AI is excellent for removing a lot of grunt work, but that type of work also reinforces your brain even if you think you are learning nothing. Something can still be lost even if AI is merely providing templates or scaffolding. The same can be said of using Google to find examples, though. You should try to come up with the correct function name or parameter list yourself in your head before using a search engine or AI. And that is for the simplest examples, e.g. "SQL table creation example". These should be things we know off the top of our heads, so we should first try to type it out before we go to look for an answer.
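To make the comment's "SQL table creation example" concrete, here is a minimal sketch of the kind of snippet worth typing from memory before reaching for a search engine or AI. The table and column names are hypothetical, and sqlite3 from Python's standard library stands in for any database:

```python
import sqlite3

# An in-memory database keeps the example self-contained.
conn = sqlite3.connect(":memory:")

# The statement worth knowing off the top of your head:
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

# Quick sanity check that the table behaves as expected.
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
row = conn.execute("SELECT id, name FROM users").fetchone()
print(row)  # (1, 'alice')
```

The point of the exercise is recall, not correctness on the first try: write it out, then verify against the docs.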


Some models of the motorcycle are available right now; they can charge to 80% in 10 minutes and go ~350 miles. Unless it is a scam and you will not get your motorcycle... However, this seems legitimate.


The models with the new battery are preorder only, and with a price tag of $35k.

The tech needs to go to another company that can produce something more people are able and willing to buy, and that's going to take a few years before it has a meaningful impact on the market.


This reminds me of the way the Internet was in the past. And the random sites to which this site links. (If you have not seen Neocities, it is another similar place: a spiritual successor to GeoCities, which Yahoo! bought and killed.)


This article talks about martinis about as much as it talks about the careers of lawyers being threatened by AI. The article provides no real justification for its claims beyond anecdotal opinions. The only value of this article is that it spawns a discussion in the comments section that provides actual credence to the claims.


Like so many of these articles about how "AI will/won't do X" it just feels like everyone is speculating.

The only thing I feel confident about is that people are bad at predicting the future. Why can't we just wait and see without all this overconfident guessing?


I thought they were talking about redesigning hardware from the ground up. There will always be history and baggage if you are working with the same computer instruction sets. From the very beginning at the level of assembly, there is history and baggage. This is not ambitious enough.

