I think LLMs are incredible tools that I will continue to use unapologetically, but I’m also very particular and not going to be putting my name to AI slop. Those are my genuine thoughts on the matter, they just happened to be cleaned up by an automated stochastic parrot.
I have swapped to using Gemini over ChatGPT for casual conversation and question answering. The app lacks some features, but I get faster and more intelligent responses.
For the comment from "Is Show HN dead? No, but it's drowning", the article author editorializes
"One of the great benefits of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.
One of the great drawbacks of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge."
into
"One of the great benefits of AI tools is they allow anyone to build stuff, even if they have no ideas or knowledge. One of the great drawbacks is they allow anyone to build stuff."
which removes the rhetorical effectiveness of the comment (and also breaks the promise of a quotation). I recommend that OP represent the source exactly.
____
I now see that this article contains multiple GPT-isms
It would be better if the article editorialized it thusly:
> One of the great benefits as well as one of the great drawbacks of AI tools, is they allow anyone to build stuff... even if they have no ideas or knowledge.
I'd paraphrase it with "said that", since the quotation marks present it as though verbatim.
It's because this argument of "what is $0.01 more?" can be extended forever, implying you are willing to pay an infinite amount of money for anything. Since we know this is silly, we try to understand what our "real" maximum is. This is difficult to do for exactly the reasons you mention in your comment! Surely $0.01 is negligible! There is a tension here.
And so, an absolute max price is not a fantasy - the world would be absurd if it were - but it's a real and difficult-to-construct value.
It can be extended forever in theory, and sure, that is an interesting philosophical discussion, but it isn't in practice. We're discussing sniping. That means you make the choice once: do I send in a last-second bid that's $.01 more than my "max price", or do I not?
Imagine someone who wants to pay $3.50 in an auction rounding up to $4 to account for cent sniping. You're saying they should bid $4.01, but that bid already includes fifty one-cent increments beyond their price to avoid cent sniping.
You're saying it's only one cent out of 50 cents. Then you're saying it's only one cent out of 51 cents so you should keep bidding more.
The infinite budget of one-cent increments that you're dreaming of is actually finite, and probably easier to quantify than the absolute max price itself. So you're taking a problem whose hard part has been solved and obsessing over the easy part that almost nobody bothers paying attention to.
Edit:
Maybe the context isn't obvious, but eBay has an automated bidding system with coarse-grained increments, like 25 cents. This means there is a finite number of one-cent bids that can be meaningfully sniped before reaching the next coarse increment. You can't actually win an auction unfairly by placing a one-cent-higher bid at the last minute. Sniping on eBay isn't about winning the item; it's about running a sealed-bid auction where others can't see your price and bid it up, since the automated bidding system performs the snipe for you at the last moment if you entered a higher bid. There is no meaningful situation where a cent or two is standing between you and the item.
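The proxy-bidding mechanics described above can be sketched as a toy simulation. This is a simplified, hypothetical model (eBay's real increment table varies by price band; a flat 25-cent increment is assumed here), but it shows why a one-cent-higher max doesn't buy you a one-cent win: the winner pays the runner-up's max plus one increment, capped at their own max.

```python
def proxy_winner(max_bids, increment=0.25):
    """Simplified proxy-bidding model (hypothetical, not eBay's exact rules):
    the highest max wins, paying the runner-up's max plus one increment,
    capped at the winner's own max."""
    ranked = sorted(max_bids, reverse=True)
    top, runner_up = ranked[0], ranked[1]
    price = min(top, round(runner_up + increment, 2))
    return top, price

# A "cent snipe": bidding $4.01 against someone whose max is $4.00.
winner_max, price = proxy_winner([4.00, 4.01])
# The sniper wins, but the runner-up's $4.00 plus the $0.25 increment
# exceeds the sniper's $4.01 max, so the sniper pays their full $4.01.
```

Under this model, the extra cent only matters when the two maxes land within one coarse increment of each other, which is exactly the finite window described above.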
As someone who worked at the company, I understood the meaning behind the tweet without the additional clarification. I think she assumed too much shared context when making the tweet.
Working in a large-scale org gets you accustomed to general problems in decision-making that aren't that obvious. I totally understood what she meant, and in my head I nodded with "yeah, that tracks".
People make mistakes; it's not that deep. The correct incentives to encourage are admitting, understanding, and forgiving when necessary, because you don't want to encourage people to hide mistakes out of shame. That only makes things worse.
Especially considering that forgetting the delta between your shared context and someone else's is extremely common, and is the least egregious mistake you can make when writing an untargeted promo post.
My bad. I will be more mindful tomorrow when someone at a big tech company yet again makes a mistake in the same direction of AI hyping. Maybe with a later addendum. Like journalists who write about a Fatal Storm In Houston, and you read down to the eighth paragraph and it turns out the fatalities were among pigeons.
> My bad. I will be more mindful tomorrow when someone at a big tech company yet again makes a mistake in the same direction of AI hyping.
Are you mad at them for playing the game, or mad that that's the game they have to play to advance at their company?
> Like journalists who write about a Fatal Storm In Houston, and you read down to the eighth paragraph and it turns out the fatalities were among pigeons.
I don't know; I guess I hold people who post on twitter so they can self-promo, or who have attention because they work at $company, to a slightly different standard than I would hold a journalist writing a news article?
> I don't know; I guess I hold people who post on twitter so they can self-promo, or who have attention because they work at $company, to a slightly different standard than I would hold a journalist writing a news article?
They aren't going to do this right now, but they almost certainly will in the medium term. It would be legitimately shocking if they didn't continue to follow the same path as Google, Facebook, and pretty much every other big tech company. In OpenAI's case, they have even more incentive to abuse their users, since they collect so much detailed personal data and have ways to make ads unblockable by including them in outputs and skewing model weights. I've seen absolutely nothing from the company, its CEO, or its investors that makes me think they won't do the normal thing of gradually making the product worse in order to wring more value out of their users.
Oh, you sweet summer child. Promises like these are made to be broken [0][1][2]. They would need a mechanism for contractual or regulatory enforcement for these words to carry any weight at all. What makes you think we should give these promises any more weight than promises that OpenAI already[3][4][5] broke?
3: (2024) "OpenAI is developing Media Manager, a tool that will enable creators and content owners to tell us what they own and specify how they want their works to be included or excluded from machine learning research and training." https://openai.com/index/approach-to-data-and-ai/
I estimated that I was 1.2x when we only had tab-completion models. 1.5x would be too modest. I've done plenty of ~6-8 hour tasks in ~1-2 hours using LLMs.