Hacker News | bobbruno's comments

One could argue that Nvidia's advantage comes from a similar vision epiphany, one that led them to develop CUDA years before it was viable. The result is similar.

I'm tempted to call that pure luck. As far as they knew, crypto would be the killer app.

However, if you start with the assumption that at some point, people are going to need a lot of fast parallel compute for something, you could rationally justify their long-term strategy. They skated where the proverbial puck was going. They couldn't see the puck, but they were pretty sure there was one. In hindsight that really does look like a safe bet.


Nvidia subsidized machine learning research for years (with CUDA, with hardware donations, and by developing what was then a very niche product line just for researchers) before deep learning became big, much less the advent of LLMs.

Certainly Jensen seemed to take an extremely long view of the burgeoning machine learning market in the early 2010s.


It didn't hurt that two companies named Intel and Microsoft completely missed the boat where GPUs and mobile computing were concerned; those are the very markets that now underpin the top two companies in tech by market cap.

CUDA came out of the need to program the parallel cores in their GPUs. This is not luck, it's product evolution. They did it first, they did it best, and they are reaping the benefits. The alternative would have been not having CUDA and continuing to write sub-optimal code for GPUs.

People were (ab)using OpenGL to run compute on GPUs in 2004-2006, doing stuff like rendering 2 triangles covering the whole screen and then doing the actual compute in the pixel shaders, getting 10x speedups over CPUs for some problems.

NVIDIA just had their eyes open to an obvious market demand and made it easier by creating CUDA.
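For flavor, here's what that trick looked like in practice: a minimal C/GLUT sketch (names, sizes and values are all illustrative, error checking is omitted) that "adds two arrays" by uploading them as textures, drawing two triangles over the viewport, and reading the framebuffer back. Era-accurate code rendered into floating-point pbuffers/FBOs rather than a clamped 8-bit window buffer, so treat this as a sketch of the idea, not production GPGPU code.

```c
/* gpgpu2005.c: add two arrays on the GPU, 2005-style (no CUDA).
 * Hedged sketch: real code of that era rendered into float
 * pbuffers/FBOs; a window framebuffer clamps to [0,1] at 8 bits,
 * which is why the input values below stay small.
 * Build (Linux): gcc gpgpu2005.c -lGLEW -lglut -lGL -lm -o gpgpu2005 */
#include <stdio.h>
#include <GL/glew.h>
#include <GL/glut.h>

#define N 256  /* one pixel per output element */

/* The "kernel": a GLSL 1.10 fragment shader, run once per pixel. */
static const char *frag_src =
    "uniform sampler2D a, b;\n"
    "void main() {\n"
    "  gl_FragColor = texture2D(a, gl_TexCoord[0].st)\n"
    "               + texture2D(b, gl_TexCoord[0].st);\n"
    "}\n";

static GLuint upload(const float *data) {  /* input array -> texture */
    GLuint t;
    glGenTextures(1, &t);
    glBindTexture(GL_TEXTURE_2D, t);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, N, N, 0,
                 GL_RGBA, GL_FLOAT, data);
    return t;
}

int main(int argc, char **argv) {
    static float a[N*N*4], b[N*N*4], out[N*N*4];
    int i;
    for (i = 0; i < N*N*4; i++) { a[i] = 0.25f; b[i] = 0.5f; }

    glutInit(&argc, argv);
    glutInitWindowSize(N, N);
    glutCreateWindow("gpgpu");  /* only needed to get a GL context */
    glewInit();

    /* Compile and bind the fragment shader. */
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &frag_src, NULL);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glUseProgram(prog);

    /* Upload the "input arrays" as textures on units 0 and 1. */
    glActiveTexture(GL_TEXTURE0); upload(a);
    glActiveTexture(GL_TEXTURE1); upload(b);
    glUniform1i(glGetUniformLocation(prog, "a"), 0);
    glUniform1i(glGetUniformLocation(prog, "b"), 1);

    /* The trick itself: two triangles covering the viewport, so the
       rasterizer invokes the "kernel" once per output element. */
    glViewport(0, 0, N, N);
    glBegin(GL_TRIANGLES);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glFinish();

    /* Read the "computed" array back from the framebuffer. */
    glReadPixels(0, 0, N, N, GL_RGBA, GL_FLOAT, out);
    printf("out[0] = %.3f (expected 0.750)\n", out[0]);
    return 0;
}
```

CUDA's contribution was essentially removing all of this ceremony: no windowing system to get a context, no textures standing in for arrays, no rasterizer standing in for a thread scheduler.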


I played with NNs in the late '80s/early '90s, with little more than a copy of Hinton's paper, a PC and a C compiler. Obviously, I got no practical results. But I got an intuition for how they worked and what they could potentially do.

Cut to 2008-09, and I started to see smartphones, grid (then cloud) computing and social networks emerging. My MBA dissertation, finished in 2011, was about how that would change the world, because the requirements for meaningful AI were coming along: data and compute. The theory was already there: Hinton, LeCun, Schmidhuber, etc.

That got me back into the Data Science field, after years working in Data Engineering. Too bad I lived in Brazil back then and couldn't find a way to join the emerging scene in California and other top places. I'd be rich now...


There's a saying in Germany: if ten people are at a table, a Nazi sits down with them, and they don't leave, then eleven Nazis are sitting at the table.

Therefore, constant vigilance and effort against it are required.


I don't know. Many of these ideas sound like "give me more of the same", reinforcing my current tastes and beliefs. The thing about going out there and interacting with stuff you don't know is that it has a chance of pushing your boundaries. If these agents are "good" as defined in the article, everyone ends up in an echo chamber.

Also, it may sound great for someone transitioning from a world before these agents were created, but how should the new generations coming in be handled? What is the starting state? Who decides that? Social media was not that bad when it started, but iterations on the algorithms and the new natives coming into them have had devastating effects in a very short time. Do we really understand the consequences of living in a world where everything is curated for you?

I don't know that I want my life made so easy, that I want something to remove the need for choosing, thinking, criticizing and exposing myself to stuff outside my comfort/interest zone.


> society as a whole is in agreement that minors are better off without access to pornography

Once a significant part of said society can't (or won't) differentiate sexual education and intimacy from pornography, I don't think your statement holds true anymore.


It's not a matter of life and death, I agree - to some extent. Startups have very limited resources, and ignoring inconclusive results in the long term means you're spending those resources without achieving any bottom-line results. If you do that too much or for too long, you'll run out of funding and the startup will die.

The author didn't go into why companies do this (ignoring or misreading test results). Putting lack of understanding aside, my anecdotal experience from the time I worked as a data scientist boils down to a few major reasons:

- Wanting to be right. Being a founder requires high self-confidence, that feeling of "I know I'm right". But feeling right doesn't make one right, and there's plenty of evidence that people will ignore evidence against their beliefs, even rationalize the denial (and yes, the irony of that statement is not lost on me);

- Pressure to show work: doing the umpteenth UI redesign is better than just saying "it's irrelevant" in your performance evaluation. If the result is inconclusive, the harm is smaller than not having anything to show - you are stalling the conclusion that your work is irrelevant by doing whatever. So you keep pushing the tests and reframing the results into some BS interpretation just to get some more time.

Another thing that is not discussed enough is what all these inconclusive results would mean if properly interpreted. A long sequence of inconclusive UI redesign experiments should trigger a hypothesis like "does the UI even matter?". But again, those are existentially threatening questions for the people in the best position to raise them. If any company out there were serious about being data-driven and scientific, they'd require tests everywhere, have external controls on their quality and rigour, and use them to make strategic decisions on where to invest and divest. At the very least, they'd take them as a serious part of their strategy input.
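To make "inconclusive" concrete, here is a toy two-proportion z-test in C; every count and rate below is hypothetical, not from any real experiment. A redesign that moves conversion from 5.0% to 5.3% on 10,000 visitors per arm lands well inside the noise.

```c
/* Toy two-proportion z-test: did variant B really convert better
 * than variant A? All numbers below are made up for illustration. */
#include <stdio.h>
#include <math.h>

/* z statistic for H0: pA == pB, given x conversions out of n visitors. */
static double two_prop_z(int xa, int na, int xb, int nb) {
    double pa = (double)xa / na, pb = (double)xb / nb;
    double p  = (double)(xa + xb) / (na + nb);           /* pooled rate */
    double se = sqrt(p * (1.0 - p) * (1.0/na + 1.0/nb)); /* std. error */
    return (pb - pa) / se;
}

int main(void) {
    /* Hypothetical redesign: 5.0% vs 5.3% conversion, 10k visitors each. */
    double z = two_prop_z(500, 10000, 530, 10000);
    printf("z = %.2f -> %s at the 95%% level\n",
           z, fabs(z) < 1.96 ? "inconclusive" : "significant");
    return 0;
}
```

With these numbers z comes out around 0.96, well below the 1.96 cutoff: the redesign may still help, but this experiment can't tell, and a long string of such results is itself information worth acting on.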

I'm not saying you can do everything based on tests, nor that you should - there are bets on the future, hypothesis-making about new scenarios, and things that are just too costly, or ethically or physically impossible, to test. But consistently testing and analysing test results could save a lot of work and money.


Excellent response. Thank you!


If the law has specific provisions on this that the contract contradicts, those conditions are not worth the paper they are written on.

At least in Brazil, you can't enforce something in a contract that the law doesn't allow; that clause would be considered void without nullifying the contract. And labour law in Brazil leans (or used to lean) more in favor of the employee, so yes, the law would win. Another aspect is that unions are more common there than in the US, and they will help in such cases.


>If the law has specific provisions on this that the contract contradicts, those conditions are not worth the paper they are written on

Unless the law also imposes severe penalties for including such terms, of course they are. The terms don't need to dissuade 100% of people from breaking 100% of them to be of use to the company.


I am responding to this from my living room in Berlin, sitting on a sofa that belonged to my father, after having dined on a table he inherited from my grandfather. Both were brought with us when we moved from Brazil.

So yes, people do want to inherit the old stuff. I have some IKEA stuff too (the inherited beds were just too big, and mattress sizes here are different), but it just can't compare.


Actually, the split between Arthur Andersen and Andersen Consulting (which later became Accenture) happened years before the Enron scandal.

