Hacker News | code51's comments

They were not joking but... joking? When big tech is all rolled into a giant ball, wrapped in money and duct tape, it's hard to parse these jokes.


The weird thing is, Gemini hasn't cracked speech-to-speech yet. They have a product, but both Anthropic and Google are visibly lagging behind OpenAI's voice mode in speech-to-speech.


Actually Sourcegraph's "Amp Code" is testing out a free ad-supported coding agent. Here is a video showing how it works: https://ampcode.com/news/amp-free

"Supported by ads from developer tool partners we’ve carefully chosen"

It's not trying to secretly insert tools into LLM output; it directly presents the product offering inside the agent area.

At some point, I suspect Cursor will test this out as well, probably more covertly, by modifying tool-use paths. Once the industry catches on to tool-use ads, we're toast.


I'm surprised these pockets of job security still exist.

Know this: someone is coming after this already.

One day someone from management will hear a cost-saving story at a dinner table; the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation".


> Know this: someone is coming after this already.

Yesterday, GitHub Copilot declared that my less-AI-wary friend's new Laravel project was following all industry best practices for database design, as it stored entities as denormalized JSON blobs in a MySQL 8.x database with no FKs, indexes, or constraints, all-NULL columns (and using root@mysql as the login, of course); meanwhile, all the Laravel controller actions' DB queries were RBAR loops that loaded every row into memory and did JSON deserialisation just to filter rows.
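The antipattern described here can be sketched in a few lines. This is a hypothetical minimal reproduction in Python with sqlite3 (not the actual Laravel code), contrasting filter-in-application-memory against pushing the predicate into SQL:

```python
import json
import sqlite3

# Hypothetical schema mirroring the antipattern: entities stored as
# denormalized JSON blobs, no constraints or useful indexes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, blob TEXT)")
conn.executemany(
    "INSERT INTO entities (blob) VALUES (?)",
    [(json.dumps({"name": n, "active": n != "bob"}),)
     for n in ("alice", "bob", "carol")],
)

# RBAR-style: load ALL rows, deserialize each, then filter in memory.
rows = conn.execute("SELECT blob FROM entities").fetchall()
active = [e for (b,) in rows if (e := json.loads(b))["active"]]
print([e["name"] for e in active])  # ['alice', 'carol']

# What the database could do instead: push the predicate into SQL
# (SQLite's json_extract), so only matching rows come back.
better = conn.execute(
    "SELECT json_extract(blob, '$.name') FROM entities "
    "WHERE json_extract(blob, '$.active')"
).fetchall()
print([name for (name,) in better])  # ['alice', 'carol']
```

With three rows the difference is invisible; with millions, the first version deserializes the whole table on every request.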

I can’t reconcile your attitude with my own personal lived experience of LLMs being utterly wrong 40% of the time; while 50% of the time being no better or faster than if I did things myself; another 5% of the time it gets stuck in a loop debating the existence of the seahorse emoji; and the last 5% of the time genuinely utterly scaring me with a profoundly accurate answer or solution that it produced instantly.

Also, LLMs have yet to demonstrate an ability to tackle other real-world DBA problems… like physically installing a new SSD into the SAN unit in the rack.


Lowballing contracts is nothing new. It has never ever worked out.

You can throw all the AI you want at it, but at the end of the day you get what you pay for.


Exactly and this is hell for programming.

You don't know whose style the LLM will pick for that particular prompt and project. You might end up with Carmack, or maybe that buggy, test-failing piece of junk project on GitHub.


You can tell it whose style to copy; it's actually decent at following instructions like that.


It's not bad at following my own style. I have longstanding quirks like naming any string that will end up in a DB query with a "q_" in front of the variable name, and shockingly Claude picks up on those and mimics them. Wouldn't trust it to write anything without thorough review, but it's great at syntax.


This isn't shocking: they are very good at repeating patterns in the immediate context; they're just not very good at anything else. Your quirk is part of the immediate pattern.


My first experiments with LLM chat were to ask it to produce text mimicking the style of a distinct, well-known author. It was also quite good at producing hybrid fusions of unique fictional styles, A + B = AB.


Can you just tell it it’s Carmack? :P


I doubt he's letting LLMs creep into his decision-making in 2025, aside from fun side projects (vibes). We never come across Karpathy going to an LLM, or expressing that an LLM helped, in any of his YouTube videos about building LLMs.

He's just test driving LLMs, nothing more.

Nobody's asking this core question in podcasts. "How much and how exactly are you using LLMs in your daily flow?"

I'm guessing it's like actors not wanting to watch their own movies.


Karpathy talking for 2 hours about how he uses LLMs:

https://www.youtube.com/watch?v=EWvNQjAaOHw


Vibing, not firing at his ML problems.

He's doing a capability check in this video (for a general audience, which is good of course), not attacking a hard problem in the ML domain.

Despite this tweet: https://x.com/karpathy/status/1964020416139448359 , I've never seen him cite an LLM helping him out in ML work.


You're free to believe whatever fantasy you wish, but as someone who frequently consults an LLM alongside other resources when thinking about complex and abstract problems, there is no way in hell that Karpathy intentionally limits his options by excluding LLMs when seeking knowledge or understanding.

If he did not believe in the capability of these models, he would be doing something else with his time.


One can believe in the capability of a technology but on principle refuse to use implementations of it built on ethically flawed approaches (e.g., violating GPL licensing and/or copyright, thus harming the open source ecosystem).


AI is more important than copyright law. Any fight between them will not go well for the latter.

Truth be told, a whole lot of things are more important than copyright law.


Important for whom, the copyright holders? Being fed is more important than supermarkets, so feel free to raid them?


Conflating natural law -- our need to eat -- with something we pulled out of our asses a couple hundred years ago to control the dissemination of ideas on paper is certainly one way to think about the question.

A pretty terrible way, but... certainly one way.


I am sure it had nothing to do with the amount of innovation that has been happening since, including the entire foundation that gave us LLMs themselves.

It would be crazy to think the protections of IP laws and the ability to claim original work as your own and have a degree of control over it as an author fostered creativity in science and arts.


Innovation? Patents are designed to protect innovation. Copyright is designed to make sure Disney gets a buck every time someone shares a picture of Mickey Mouse.

The human race has produced an extremely rich body of work long before US copyright law and the DMCA existed. Instead of creating new financial models which embrace freedoms while still ensuring incentives to create new art, we have contorted outdated financial models, various modes of rent-seeking and gatekeeping, to remain viable via artificial and arbitrary restriction of freedom.


Patents and copyright are both IP. Feel free to replace “copyright” with “IP” in my comment. Do you not agree that IP laws are related to the explosion of innovation and creativity in the last few hundred years in the Western world?

Furthermore, claiming “X is not natural” is never a valid argument. Humans are part of nature, whatever we do is as well by extension. The line between natural and unnatural inevitably ends up being the line between what you like and what you don’t like.

The need to eat is as much a natural law as higher human needs—unless you believe we should abandon all progress and revert to pre-civilization times.

IP laws ensure that you have a say in the future of the product of your work, can possibly monetise it, etc., which means a creative 1) can fulfil their need to eat (individual benefit), and 2) has an incentive to create in the first place (societal benefit).

In the last few hundred years intellectual property, not physical property, is increasingly the product of our work and creative activities. Believing that physical artifacts we create deserve protection against theft while intellectual property we create doesn’t needs a lot of explanation.


What you see as copyright violation, I see as liberation. I have open models running locally on my machine that would have felled kingdoms in the past.


I personally see no issue with training and running open local models by individuals. When corporations run scrapers and expropriate IP at an industrial scale, then charge for using them, it is different.


What about Meta and the commercially licensed family of Llama open-weight models?


I have not researched closely enough but I think it falls under what corporations do. They are commercially licensed, you cannot use them freely, and crucially they were trained using data scraped at an industrial scale, contributing to degradation of the Web for humans.


Since Llama 2, the models have been commercially licensed under an acceptable use policy.

So you're able to use them commercially as you see fit, but you can't use them freely in the most absolute sense; then again, this is a thread about restricting the freedoms of organizations in the name of a 25-year-old law that has been a disgrace from the start.

> contributing to degradation of the Web for humans

I'll be the first to say that Meta did this with Facebook and Instagram, along with other companies such as Reddit.

However, we don't yet know what the web is going to look like post-AI, and it's silly to blame any one company for what clearly is an inevitable evolution in technology. The post-AI web was always coming, what's important is how we plan to steward these technologies.


The models are either commercial or not. They are, and as such they monetise the work of original authors without consent or compensation, and often in violation of copyleft licensing.

> The post-AI web was always coming

“The third world war was always coming.”

These things are not a force of nature, they are products of human effort, which can be ill-intentioned. Referring to them as “always coming” is 1) objectively false and 2) defeatist.


> Continuing the journey of optimal LLM-assisted coding experience. In particular, I find that instead of narrowing in on a perfect one thing my usage is increasingly diversifying

https://x.com/karpathy/status/1959703967694545296



Junior jobs will come back when the blitz-pricing of AI coding products ends. Current bosses think $200/mo prices for "leave it running and auto-code for the whole month, day and night" will stay like this. Of course they won't.

A typical startup play, but at massive scale. Junior jobs might come back, but not in bulk: still selective, and very slowly.


If that’s your thesis, $200/mo has a lot of room for price increases before you start reaching a junior dev's salary per month.


Anthropic is actually a good case to focus on, since Claude is strong proof that it's not just about scaling. We are not quite there yet, but it seems we are "programming" models through how we shape and filter the training data. With time, we'll understand these methods of representation better.

The current situation doesn't sound too good for the "scaling hypothesis" itself.


> Current situation doesn't sound too good for "scaling hypothesis" itself.

But the “scaling hypothesis” is the easiest, fastest story for raising money. So it will be leveraged until it is conclusively broken by the next advancement.


The underlying assumption is that language and symbols are enough to represent phenomena. Maybe we are falling for this one in our own heads as well.

Understanding may not be a static symbolic representation. The contexts of the world are infinite and continuously redefined. We believed we could represent all the contexts tied to information, but that's a tall order.

Yes, we can approximate. No, we can't claim to represent every essential context at all times.

Some things might not be representable at all by their very chaotic nature.


I do think that human mental modeling of the world is also quite rough and often inaccurate. I don't see why AI can't become human-like in its abilities, but accurately modeling all the relativistic quarks in an atom is a bit beyond anything just now.


There's a high probability your v2 voice will break with this.

