legostormtroopr's comments | Hacker News

The author themselves disagrees with you.

https://www.washingtonpost.com/opinions/2021/10/29/schools-a...

So does the National Council of Teachers of English. https://ncte.org/teaching-maia-kobabe/

Just because you can't believe that people would promote a comic with explicit text and sexual imagery to children doesn't mean it doesn't happen.


I realize I'm coming into a back-and-forth that grew organically, but... how does this intent tie back to a "what justifies censorship" argument? It sounds like:

1. If I think what they say is bad for youths

2. And it seems the original author thought it would influence youths in a way I don't like

3. Then it can be censored

Is that it? Because if so... well, I've got some bad news about the Bible, and that's not even getting into the trustworthiness of the agency making determinations 1 and 2.


Firstly - you are going to get sued by Salesforce. You can't just make a Slack clone, use their trademarked name, and add "Open" to the front.

Secondly, this is pure AI slop. There are 2 commits - "Initial commit" and "Trim feature table" where you changed the README.

Why would anyone use this? Has anyone read any of the code? Do you even understand how (or if) it even works?


But if AI can maintain code bases so easily, why does it matter if there are 3? People use electron to quickly deploy non-native apps across different systems.

Surely, it would be a flex to show that your AI agents are so good they make electron redundant.

But they don’t. So it’s reasonable to ask why that is.


No, it is completely unreasonable to ask why a company is not putting three times the resources into solving a problem instead of one times the resources.


1. Anthropic has no problem burning tens of thousands of dollars of tokens on things that have zero real-world value, such as implementing a C compiler that, as far as I can tell, they don't intend to be used in the real world. For example, they announced it on Feb 5, promising "Over the coming days, I’ll continue having Claude push new changes if you want to follow along with Claude’s continued attempts at addressing these limitations", but there have been zero code commits since Feb 5 (the day they announced it). Wouldn't it make far more sense for a company to invest tokens in their own product than to burn them on something that may be abandoned within hours of launching, with zero ongoing value to their company or their customers?

2. Why do you think it requires "three times the resources"? Wouldn't it normally be an incremental amount of work to support additional targets, rather than an additional 100% of work for each additional target?


What resources? It's supposedly a solved problem. Anthropic just needs to spend tokens.


Are tokens not resources?


Not to Anthropic


How so?


But the one times the resources didn't solve the problem, clearly, since we are talking about it. And they claim that AI makes this sort of port trivial, so it would not be 3x the resources.


> But if AI can maintain code bases so easily, why does it matter if there are 3? People use electron to quickly deploy non-native apps across different systems.

Because then their competition would work faster than they could and any amount of slop/issues/imperfections would be amplified threefold.

Also there would inevitably be some feature drift - I got SourceTree for my Mac and was surprised to discover that it's actually somewhat different from the Windows version; that was a bit jarring.

I hope that in the next decade we get something like the LCL (https://en.wikipedia.org/wiki/Lazarus_Component_Library), but for all OSes and with bindings for all common languages - so we don't have to rely on the web platform for local software. Until then, developing native apps is a hard sell.


More than anything, I would love to know the software this is built in.

This is a gorgeous way of presenting a book, and what looks like subscriber-only chapter previews.


Why is it xenophobic to be concerned that non-registered drivers in one country are being allowed to drive remotely in a different country?


As far as I understand it, they aren't being allowed to drive. They are doing the equivalent of "ignore that, it's not a real obstacle" or "try to go around this way", and then the car takes that input into account and does the actual driving (steering, control of throttle/brake) on its own as usual.


You're saying they don't interpret road signs/markings/etc.? Or need to know if e.g. a right or left turn on red is legal in a given intersection?


I don't need, legally, to demonstrate any knowledge of this to drive on US roads currently (or even, strictly speaking, to know what side of the road I should drive on).


It's been quite a while, but I'm pretty sure there was a written part back when I did the driving test for my first license.


Yeah (at least, that's probably the case in some parts of the US), but I didn't pass my test in the US.


No, I'm saying that no one should be "concerned that non-registered drivers in one country are being allowed to drive remotely in a different country" because they aren't driving.


It might be for non road code level issues, like physics / crowd ambiguity, where a normal human could fill the missing gaps, US citizen or not.


Unless I am wildly misreading this, this is actually worse than both GUIs and LLMs combined.

LLMs offer a level of flexibility and non-determinism that allow them to adapt to different situations.

GUIs offer precision and predictability - they are the same every time. Which means people can learn them and navigate them quickly. If you've ever seen a bank teller or rental car agent navigate a GUI or TUI, they tab through and type so quickly because they have expert familiarity.

But with this - a non-deterministic user interface generated by AI - every time a user engages with the UI it's different. So they get a more rigid UI but also a non-deterministic set of options every time. Which means instead of memorising what is in every drop-down and tabbing through quickly, they need to re-learn the interface every time.


I don't think you have to use this if it's not working in your case. I think the idea is to try to anticipate the next few turns of the conversation, so you can pick the tree you want to go down in a fast way. If the prediction is accurate, I could see that being effective.


It’s intended for conversations that are probably different every time too. It’s like a more expressive form of what Claude Code already does with the “AskUserQuestion” interface.


> GUIs offer precision and predictability - they are the same every time.

Except after an update everything is in a different place.


Yep - I'm looking at you, MS Office ribbon. Just as I learnt where things are, some update decides to move stuff around.

The people responsible for stuff like this should be put in stocks in public squares and pelted with tomatoes ;-)


> If you are correct, that implies to me that LLMs are not intelligent and just are exceptionally well tuned to echo back their training data.

Yes.

This is exactly how LLMs work. For a given input, an LLM will output a non-deterministic response that approximates its training data.

LLMs aren’t intelligent. And it isn’t just that they don’t learn; they literally cannot learn from their experience in real time.
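
A minimal sketch of what that means in practice (toy values, not any real model or API): at inference the "knowledge" is a frozen set of weights, and the only run-to-run variation comes from sampling the next-token distribution, e.g. with a temperature. The model can answer differently every time without ever updating anything it has stored.

    import math
    import random

    # Toy sketch, not any real model or API: a "frozen" next-token
    # distribution baked in at training time. Nothing below ever updates it.
    FROZEN_LOGITS = {"cat": 2.1, "dog": 1.9, "pizza": 0.3}  # made-up values

    def sample_next_token(logits, temperature=0.8):
        # Softmax with temperature, then a weighted random draw.
        scaled = [(tok, v / temperature) for tok, v in logits.items()]
        m = max(v for _, v in scaled)
        weights = [(tok, math.exp(v - m)) for tok, v in scaled]
        total = sum(w for _, w in weights)
        r = random.uniform(0, total)
        for tok, w in weights:
            r -= w
            if r <= 0:
                return tok
        return weights[-1][0]

    # Same input every time, yet the output varies between calls:
    # the non-determinism comes purely from sampling, not from any learning.
    print([sample_next_token(FROZEN_LOGITS) for _ in range(5)])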


There is some intelligence. It can figure stuff out and solve problems. It isn't copy-paste. But I agree with your point. They are not intelligent enough to learn during inference. Which is the main point here.


Wolves (and all dogs) could be vegetarians as they aren't obligate carnivores - and in certain conditions where prey is sparse they do eat berries to survive. Cats, on the other hand, are obligate carnivores and can't produce the amino acid taurine, so they have to eat meat to survive.


We can chemically synthesize taurine just fine.


Are you a wolf (or a dog)?


I think the implication is that cats could eat veggies laced with synthetic taurine...?


I thought the implication is that people should feed themselves to cats?

Consent removes a bunch of ethical issues.

What's eating you, Earthman: https://m.youtube.com/watch?v=5HLy27bK-wU


Current state AI doesn’t have hands. How can it possibly be better at installing electrics than anyone?

Your post reads like AI precisely because, while the grammar is fine, it lacks context - like someone prompted “reply that AI is better than average”.


An electrician with total knowledge/understanding, but only the average dexterity of a non-professional would still be very useful.


Finally, I can give up this life of a programmer and live my dream of being a moisture farmer.

