Hacker News | thornewolf's comments

I noticed this article is written by AI. Have you considered adding a disclosure?

this article comes back 100% ai (for the opening). i will not reveal the tell that made me churn


You mean the cadence where everything is presented in threes? Yeah I hate it too, but beats paying some blogger to do it


> You mean the cadence where everything is presented in threes? Yeah I hate it too, but beats paying some blogger to do it

You could also use your own words instead of an LLM. Would have been more interesting.


I think LLMs are incredible tools that I will continue to use unapologetically, but I’m also very particular and not going to be putting my name to AI slop. Those are my genuine thoughts on the matter, they just happened to be cleaned up by an automated stochastic parrot.


just disclose please. i churn on non-disclosed ai generated content as a matter of principle. i don't necessarily churn on other variants


Fair enough


I have switched to Gemini over ChatGPT for casual conversation and question answering. There are some missing features in the app, but I get faster and more intelligent responses.


For the comment from "Is Show HN dead? No, but it's drowning", the article author editorializes

"One of the great benefits of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.

One of the great drawbacks of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge."

into

"One of the great benefits of AI tools is they allow anyone to build stuff, even if they have no ideas or knowledge. One of the great drawbacks is they allow anyone to build stuff."

which removes the rhetorical effectiveness of the comment (and also breaks the promise of a quotation). I recommend that OP represent the source exactly.

____

I now see that this article contains multiple GPT-isms


It would be better if the article editorialized it thusly:

> One of the great benefits as well as one of the great drawbacks of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.

I'd mark it as a paraphrase with "said that", since the quotation marks present it as verbatim.


Oh good catch, fixed.


dispel the hate from your heart


Questioning things is not 'hate', Mr Wolf.


it's because this argument of "what is $0.01 more?" can be extended forever, implying you are willing to pay an infinite amount of money for anything. since we know this is silly, we try to understand what our "real" maximum is. this is difficult to do for exactly the reasons you mention in your comment! surely $0.01 is negligible! there is a tension here.

and so, absolute max price is not a fantasy - the world would be absurd if it were - but instead it's a real and difficult-to-construct value


It can be extended forever in theory, and sure, that is an interesting philosophical discussion, but it isn't in practice. We're discussing sniping. That means you make the choice once: do I send in a last-second bid that's $.01 more than my "max price", or do I not?


You're just trolling at this point.

Imagine someone wanting to pay $3.50 in an auction and rounding up to $4 to account for cent sniping. You're saying they should bid $4.01, but the bid already includes fifty one-cent increments beyond the price to avoid cent sniping.

You're saying it's only one cent out of 50 cents. Then you're saying it's only one cent out of 51 cents so you should keep bidding more.

The infinite budget of one cent increments that you're dreaming of is actually finite and probably easier to quantify than the absolute price itself, so you're taking a problem where the hard part has been solved and are now obsessed with the easy part that almost nobody bothers paying attention to.

Edit:

Maybe the context isn't obvious, but eBay has an automated bidding system with coarse-grained increments for automatic bidding, like 25 cents. This means there is a finite number of increments that can be meaningfully cent-sniped before getting into the next coarse-grained increment. You can't actually win an auction unfairly by placing a one-cent-higher bid at the last minute. Sniping on eBay isn't about winning the item; it's about running a sealed-bid auction where others can't see your price and nibble it up, since the automated bidding system performs the snipe for you at the last nanosecond if you entered a higher bid. There is no meaningful situation where a cent or two is standing between you and the item.
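A rough sketch of how that kind of proxy-bidding settlement behaves (the 25-cent increment and the function shape are illustrative assumptions, not eBay's actual schedule or API):

```python
def settle_proxy_auction(max_bids, increment=0.25):
    """Settle a simplified proxy auction: the highest max bid wins,
    paying roughly the second-highest max bid plus one increment,
    capped at the winner's own max. Increment value is illustrative."""
    ranked = sorted(max_bids, reverse=True)
    winner_max, runner_up = ranked[0], ranked[1]
    price = min(winner_max, round(runner_up + increment, 2))
    return winner_max, price

# A $4.00 proxy against a $3.50 proxy settles at $3.75 - bidding
# $4.01 instead of $4.00 would change nothing here.
print(settle_proxy_auction([4.00, 3.50]))

# The cent snipe only matters when two proxies land inside the same
# increment: $4.01 against $4.00 wins, capped at $4.01.
print(settle_proxy_auction([4.01, 4.00]))
```

The point of the sketch is that a one-cent edge only decides the outcome when both hidden maximums fall within a single increment of each other, which is the narrow case the thread is arguing about.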


If your time has no value for you, sure, keep glued to your machine sending countless counterbids $0.01 higher than the latest bid.



as someone who worked at the company, i understood the meaning behind the tweet without the additional clarification. i think she assumed too much shared context when making the tweet


A principal engineer at Google made a public post on the World Wide Web and assumed some shared Google/Claude-context. Do you hear yourself?


Working in a large scale org gets you accustomed to general problems in decision making that aren’t that obvious. Like I totally understood what she means and in my head nodded with “yeah that tracks”.


Maybe it helps them sleep at night.


People make mistakes; it's not that deep. The correct incentive to encourage is admitting mistakes, and understanding and forgiving when necessary, because you don't want to encourage people to hide mistakes out of shame. That only makes things worse.

Especially considering forgetting the delta between yours and someone else's shared context is extremely common. And the least egregious mistake you can make when writing an untargeted promo post.


My bad. I will be more mindful tomorrow when someone at a big tech company yet again makes a mistake in the same direction of AI hyping. Maybe with a later addendum. Like journalists who write about a Fatal Storm In Houston and you read down to the eighth paragraph and it turns out the fatalities were among pigeons.

> when writing an untargeted promo post.

lol.


> My bad. I will be more mindful tomorrow when someone at a big tech company yet again makes a mistake in the same direction of AI hyping.

Are you mad at them for playing the game, or mad that that's the game they have to play to advance at their company?

> Like journalists who write about a Fatal Storm In Houston and you read down to the eighth paragraph and it turns out the fatalities were among pigeons.

I don't know; I guess I hold people who post on twitter so they can self-promo, or who have attention because they work at $company, to a slightly different standard than I would hold a journalist writing a news article?


> I don't know; I guess I hold people who post on twitter so they can self-promo, or who have attention because they work at $company, to a slightly different standard than I would hold a journalist writing a news article?

I know. One of them has a higher salary.


Do you think people who work at Google are perfect?


while we can't trust their word as absolute truth, they did specifically say in the article that they still do not do this


They aren't going to do this right now, but they almost certainly will in the medium term. It would be legitimately shocking if they didn't continue to follow the same path as Google, Facebook, and pretty much every other big tech company. In OpenAI's case they have even more incentive to abuse their users, since they collect so much detailed personal data and have ways to make ads unblockable by including them in outputs and skewing model weights. I've seen absolutely nothing from the company, its CEO, or its investors that makes me think they won't do the normal thing of gradually making the product worse in order to wring more value out of their users.


Oh, you sweet summer child. Promises like these are made to be broken [0][1][2]. They would need a mechanism for contractual or regulatory enforcement for these words to carry any weight at all. What makes you think we should give these promises any more weight than promises that OpenAI already[3][4][5] broke?

0: "Every ad on Google is clearly marked and set apart from the actual search results." https://archive.md/fiK4E#selection-219.13-219.95

1: "Every Google result now looks like an ad" (which means every ad looks like a search result) https://news.ycombinator.com/item?id=22107823

2: "Google breaks 2005 promise never to show banner ads on search results" https://news.ycombinator.com/item?id=6605312

3: (2024) "OpenAI is developing Media Manager, a tool that will enable creators and content owners to tell us what they own and specify how they want their works to be included or excluded from machine learning research and training." https://openai.com/index/approach-to-data-and-ai/

4: (2023) "OpenAI promised 20% of its computing power to combat existential risks from AI — but never delivered" https://fortune.com/2024/05/21/openai-superalignment-20-comp...

5: (2025) "REPORT: The OpenAI Files Document Broken Promises" https://techoversight.org/2025/06/18/openai-files-report/


I estimated that I was 1.2x when we only had tab-completion models. 1.5x would be too modest. I've done plenty of ~6-8 hour tasks in ~1-2 hours using LLMs.


Indeed. I just did a 4-6 month refactor + migration project in less than 3 weeks.


some forms of meditation can be. it's a very general term

