Hacker News | wnmurphy's comments

I agree with your categories. The majority of the usage for me is (1) and (3).

(1) LLMs are basically Stack Overflow on steroids. No need to go look up examples or read the documentation in most cases; they spit out a mostly working starting point.

(3) Learning. Ramping up on an unfamiliar project by asking Antigravity questions is really useful.

I do think it makes devs faster, in that it takes less time to do these two things. But you're running into the 80% of the job that does not involve writing code, especially at a larger company.

In theory, this should allow a company to do more with fewer devs, but in reality it just means that these two activities become easier, and the 80% is still the bottleneck.


> LLMs are basically Stack Overflow on steroids

That, and I've never had to beg an LLM for an answer, or waste 5 minutes of my life typing up a paragraph to pre-empt the XY Problem. Also never had it close my question as a duplicate of an unrelated question.

The accuracy tends to be somewhat lower than SO, but IMO this is a fair tradeoff to avoid having to potentially fight for an answer.


Tangential, but you used to be able to use custom instructions for ChatGPT to respond only in zalgotext and it would have insane results in voice mode. Each voice was a different kind of insane. I was able to get some voices to curse or spit out Mint Mobile commercials.

Then they changed the architecture so voice mode bypasses custom instructions entirely, which was really unfortunate. I had to unsubscribe, because walking and talking was the killer feature and now it's like you're speaking to a Gen Z influencer or something.


If you're a coder then it sounds like you could use the API to get around that and once again utilize your custom prompt with their tech.


I do it sometimes (even just through the OpenAI playground on platform.openai.com) because the experience is incredible, but it's expensive. One hour of chatting costs around $20-30.


I think the subscriptions tend to be a significant discount over paying for tokens yourself.


Did you record this? Sounds deranged enough to be amusing.


...voice mode bypasses custom instructions? But why? Without a custom prompt it's both unreliable and obnoxious.


Not saying that the stock isn't a meme stock, but my car literally drives itself everywhere. Tesla has many business models.


I heard he used to be _The_ Whitney Brown.


Our understanding of the world is overfit to the macro level, where we project concepts onto experience to create the illusion of discrete objects, which is evolutionarily beneficial.

However, at the quantum level, identity is not bound to space or time. When you split a photon into an entangled pair, those "two" photons are still identical. It's a bit like slicing a flatworm into two parts, which then yields (we think) two separate new flatworms... but they're actually still the same flatworm.

Experiments like this are surprising precisely because they break our assumption that identity is bound to a discrete object, which is located at a single space, at a single time.


Depends on your interpretation of quantum mechanics. In Bohmian Mechanics, there is a discrete particle guided by a wave described by the wave function. Also, macro discrete objects are not illusions, they're the result of decoherence. The superposition is suppressed from view, assuming the wave function isn't collapsed or just a mathematical prediction tool.


> Essentially all intelligent life is a pachinko machine that takes a bunch of sensory inputs, bounces electricity around a number of neurons, and eventually lands them as actions, which further affect sensory inputs.

This metaphor of the pachinko machine (or Plinko game) is exactly how I explain LLMs/ML to laypersons. The process of training is the act of discovering through trial and error the right settings for each peg on the board, in order to consistently get the ball to land in the right spot-ish.
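The pachinko metaphor can be made concrete with a toy sketch. This is purely illustrative: real LLM training uses gradient descent, not random trial and error, and the `train` function, peg count, and hill-climbing loop below are all invented for the analogy. Each "peg" nudges the ball left or right, and training jiggles one peg at a time, keeping only the changes that move the ball closer to the target slot.

```python
import random

random.seed(0)

def land(pegs):
    # The ball's final position is just the sum of each peg's nudge.
    return sum(pegs)

def train(target, n_pegs=10, steps=2000):
    """Trial-and-error 'training': jiggle one peg, keep it if it helps."""
    pegs = [0.0] * n_pegs
    best_err = abs(land(pegs) - target)
    for _ in range(steps):
        i = random.randrange(n_pegs)
        old = pegs[i]
        pegs[i] += random.uniform(-0.5, 0.5)  # jiggle one peg
        err = abs(land(pegs) - target)
        if err < best_err:
            best_err = err                    # keep the improvement
        else:
            pegs[i] = old                     # undo the change
    return pegs, best_err

pegs, err = train(target=3.0)
```

After enough trials the ball lands very close to the target slot, even though no single step "understood" anything; that's the spot-ish part of the analogy.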


I recognized the word "glymphatic" from recent articles about the discovery of the brain's self-cleaning system, and then understood from the headline that these authors identified that the mechanism by which this occurs is driven by norepinephrine.


> I know how to plug this black box into this other black box and return the result as JSON!

To be fair, most of software engineering is this.


Tbf- most of [any] engineering is like this.


But most 'engineering' is not engineering.


Care to explain your perspective? "Engineering" can be a bit of a fuzzy definition. To some it means "building something". To others, it requires the application (and understanding!) of mathematical and scientific principles to build something.

I would disagree that most engineering is not involved in building something...whether most engineers understand the math/science behind it is debatable.


Okay, now take a slightly imbalanced stance: What is most software engineering?


I don't know, but if I say it's about working with things you don't fully understand people seem to trust me.


At some point, we will have no idea that the majority of the commenters we're interacting with are actually just generative AI.

Related: I've found that the internet becomes significantly better when I use a Chrome extension to hide all comment sections. Comments are by far the most significant source of toxicity.


> Introduce intermediate variables with meaningful names

Abstracting chunks of compound conditionals into easy-to-read variables is one of my favorite techniques. Underrated.

> isValid = val > someConstant

> isAllowed = condition2 || condition3

> isSecure = condition4 && !condition5

> if isValid && isAllowed && isSecure { //...
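The quoted pseudocode could look something like this as a runnable sketch (the function name, fields, and thresholds are all made up for illustration):

```python
# Hypothetical guard check; names and thresholds are illustrative.
MAX_RETRIES = 3

def can_submit(value: int, retries: int, is_admin: bool, is_banned: bool) -> bool:
    # Each intermediate variable names one idea from the compound conditional,
    # so the final `if` reads like a sentence.
    is_valid = value > 0
    is_allowed = is_admin or retries < MAX_RETRIES
    is_secure = not is_banned

    return is_valid and is_allowed and is_secure
```

A nice side effect is that each named condition is also visible individually in a debugger, instead of one opaque boolean expression.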


I treat it a lot like English. Run-on sentences, too much technical jargon, and too many fragmented short sentences all make it harder to read. There are analogies to writing code.


> Abstracting chunks of compound conditionals into easy-to-read variables is one of my favorite techniques. Underrated.

Same. Or alternatively I will just put the English version in comments above each portion of the conditional, which makes it easy to read and understand.


So you like procedural code.

I mean, at least Western people seem to think in recipes, todo lists, or numbered instructions, which is what procedural code is.

Dogma will chop those blocks up into sometimes a dozen functions, and that's in languages like Java; functional style is even worse for "function misdirection / short-term memory overload".

I don't really mind the hundred line method if that thing is doing the real meat of the work. I find stepping through code to be helpful, and those types of methods/functions/code are easy to track. Lots of functions? You have to set breakpoints or step into the functions, and who knows if you are stepping into a library function or a code-relevant function.

Of course a thousand line method may be a bit much too, but the dogma for a long time was "more than ten lines? subdivide into more functions" which was always weird to me.

