You misunderstood. All geohot is saying is the same thing Scott Galloway constantly says - your job is to create surplus value. Provide more value than you take over your lifetime, not over any one specific period.
The argument is that if you do that, returns will naturally come your way.
The issue is that many people never provide surplus value at all; some can't, and that is obviously completely acceptable (people who are disabled, have medical conditions, or who for some other reason cannot). But those who are able and choose not to provide surplus value are who he's talking about.
You may not agree, and that's okay, but that's the argument.
I wish that argument were true. Yet we see plenty of disadvantaged people working the really tough jobs, helping the elderly or sick, and getting precious little in return.
And on a smaller scale, I have been doing nothing but providing value. All my projects are free/libre, yet returns have not come my way at all. In fact, people who could make returns come my way, for example by offering me a job I am clearly well suited for, refuse to even look at these projects.
Perhaps the argument is also about non-financial returns, and things like friendships, but I don't feel especially well connected either, even though I try to help anyone I can help in the areas I am active in.
I don't think the argument matches reality, unfortunately.
The "real tough jobs" pay little because the marginal job of that kind does not really create that much value. That in turn happens because the most disadvantaged tend to crowd into these jobs, to the neglect of other, more value-creating activities - yet another issue that might be handily addressed by UBI.
Yet these were the "essential workers" during the pandemic. Not the VCs, not the hedge fund managers, not the industrialists or bankers or rich housewives.
And all they got for their efforts was applause.
Reality is that without their work all our societies would have failed and fallen.
Almost all ordinary folks agree that, for example, nurses aren't paid enough.
The real issue is that our "valuation" scheme is controlled by the wealthy, not by the people, and the only metric is what makes the rich richer.
Phew, I am having a really hard time agreeing with you there. Just imagine what would happen if those social and tough jobs were not performed by people dedicated specifically to doing them. Then we would all have to take care of our own family's elderly, and that can easily turn into a full-time job itself. Let just one relative have Alzheimer's, or lose the ability to move, or even a less drastic condition that still requires you to watch over them, and you will have your hands full taking care of them. This is the reason why, in many societies, we decided to outsource this to people whose sole job it is to take care of other people.
Or take nurses, for example. You really think they provide low value? Tell me that again once you've seen a hospital from the inside. Yet they are not paid much.
That's why I stated that the marginal job is what sets the reward. We actually have a lot more people wanting to do these jobs than we reasonably have a use for. Your mention of hospital nursing is actually a case in point: actual Registered Nurses are quite scarce, often do highly valuable, specialized work, and get paid a lot.
What on earth are you talking about? In the US (which seems to be the context in question), Actual Registered Nurses™ are not by any means "scarce" and in fact make up the clear majority of all nurses. Nor do they get "paid a lot" compared to the demands of their jobs, especially considering this is a country that throws the same salaries at people for the mighty skill of writing JavaScript for a SaaS.
That's not what he's saying. At a company level, he's saying that if they extract more in profit than the value they add, they have an indefensible business model and will eventually lose to bigger players.
At a personal level you can live your life similarly, add value where you can. You can do that by joining an organization that adds value as well.
I'll note that the trading model will filter out bear downtrends, which is very, very helpful, but it doesn't trade short. I'll ask the coding agent to find several academic research papers about trading once intraday during a downtrend -- a single scalp. It will return with ~10 references. It will recreate the model, do statistical analysis, and create a search-grid backtest. This immediately shows whether there is any alpha. If there is, it will iterate, integrating the concept into the existing trading model.
It has enough information that it will continue to iterate for the next several hours.
It's all happening in a black box. I have no idea. My concern isn't trading but rather to get it to continuously improve unsupervised without lying or hallucinating.
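The "search-grid backtest" step above can be sketched in a few lines of Python. Everything here is invented for illustration -- the synthetic price series, the dip-buying rule, and the parameter grid are stand-ins, not the commenter's actual model:

```python
# Hypothetical sketch of a search-grid backtest: sweep strategy parameters
# over a price series and see whether any cell shows positive PnL ("alpha").
import itertools
import random

random.seed(0)
# Synthetic, slightly down-trending price series (stand-in for real data).
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(-0.0002, 0.01)))

def backtest(lookback, entry_drop):
    """Buy a dip of `entry_drop` below the `lookback`-bar mean, exit next
    bar (a single scalp); return total PnL in price units."""
    pnl = 0.0
    for i in range(lookback, len(prices) - 1):
        mean = sum(prices[i - lookback:i]) / lookback
        if prices[i] < mean * (1 - entry_drop):  # oversold vs. recent mean
            pnl += prices[i + 1] - prices[i]     # exit on the next bar
    return pnl

# The grid: every (lookback, entry_drop) combination gets one backtest.
grid = list(itertools.product([10, 20, 50], [0.005, 0.01, 0.02]))
results = {params: backtest(*params) for params in grid}
best = max(results, key=results.get)
print(best, round(results[best], 2))
```

A real pipeline would replace the synthetic series with actual intraday data and the crude PnL sum with proper statistics (out-of-sample splits, significance tests), but the loop structure is the same.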
I’ve never used them first hand, but crackpots sure do love claiming to solve Riemann hypothesis, P vs NP, Collatz conjecture etc and then peddle out some huge slop. My experience has solely been curiously following what the LLM’s have been generating.
You have to be very, VERY careful. With how predisposed they are to helping, they’ll turn to “dishonesty” rather than just shut down and refuse. What I tend to see is they get backed into a corner, and they’ll do something like prove something different under the guise of another:
They’ll create long pattern matching chains as to create labyrinths of state machines.
They’ll keep naming functions, values and comments to seem plausible, but you have to follow these to make sure they are what they say. A sneaky little trick is to drop important parameters in functions, they appear in the call but not in the actual body.
They’ll do something like taking a Complex value, but only working with the real projection, rounding a number, creatively making negatives not appear by abs etc etc
So even when it compiles, you’ve got the burden of verifying everything is above board which is a pretty huge task.
And when it doesn’t work, introducing an error or two in formal proof systems often means you’re getting exponentially further away from solving your problem.
I’ve not seen a convincing use that the tactics or goals in the proof assistant don’t already provide.
>So even when it compiles, you’ve got the burden of verifying everything is above board which is a pretty huge task.
Is this true?
e.g. the Riemann hypothesis is in mathlib:
    def RiemannHypothesis : Prop :=
      ∀ (s : ℂ) (_ : riemannZeta s = 0) (_ : ¬∃ n : ℕ, s = -2 * (n + 1)) (_ : s ≠ 1), s.re = 1 / 2
If I construct a term of this type without going via one of the (fairly obvious) soundness holes or a compiler bug, it's very likely proved, no? No matter how inscrutable the proof is from a mathematical perspective. (Translating it into something mathematicians understand is a separate question, but that's not really what I'm asking.)
I'm writing C for microcontrollers, and ChatGPT is very good at it. I don't let it write any code (because that's the fun part, why would I), but I discuss with it a lot, asking questions and asking it to review my code, and it does a good job. I also love using it to explain assembly.
It's also the best way to use llms in my opinion, for idea generation and snippets, and then do the thing "manually". Much better mastery of the code, no endless loop of "this creates that bug, fix it", and it comes up with plenty of feedback and gotchas when used this way.
This is a funny one, because on the one hand the answer is obviously no: it's very fiddly stuff that requires a lot of umming and ahhing. But then, weirdly, they can be absurdly good in these kinds of highly technical domains, precisely because such problems are often simple enough to pose to the LLM that any help it gives is immediately applicable, whereas in a comparatively boring/trivial enterprise application there is a vast amount of external context to grapple with.
From my experience, it's just good enough to give you an overview of a codebase you don't know and enough implementation suggestions to work from there.
It's been nine years since the chain split, which happened within the first year. No irregular changes have been made since then. Two major hacks caused over a hundred million dollars in losses to Parity, a company founded by one of the core devs. That dev lobbied heavily for a rescue, and the community refused.
Bitcoin also made an irregular change, a year and a half into its history.
Listen, this is all code running on computers. At the end of the day everyone could choose to shut it down or replace it entirely, and the criticism would still be: "See, not immutable!" Eventually entropy makes everything mutable.
>> More hands is usually better than simpler systems for reasons that have nothing to do with technical proficiency.
If you are working on open-source databases or something close to the metal, I agree with antirez; if you are working at some established tech business (e.g., a very old e-commerce site), I agree with you.
To be clear, I'm not disagreeing with antirez at all. I feel his argument in my bones. I am a smart programmer. I want simple, powerful systems that leave the kid gloves in the drawer.
The unfortunate reality is that a large cadre of people cannot handle such tools, and those people still have extremely valuable contributions to make.
I say this as a full-time research engineer at a top-10 university. We are not short on talent, new problems, or funding. There is ample opportunity to make our systems as simple/"pure" as possible, and I make that case vigorously. The fact remains that intentionally limiting scope for the sake of the many is often better than cultivating an elite few.
I'm really curious about the shape of your problem. I was an early hire at a tech unicorn and helped build many high scale systems and definitely found that our later stage hires really had a harder time dealing with the complexity tradeoff than earlier hires (though our earlier hires were limited by a demand to execute fast or lose business which added other, bad constraints.) I'm curious what your iteration of the problem looks like. We managed it by only trusting the bedrocks of our systems to engineers who demonstrated enough restraint to architect those.
DOM navigation for fetching some data is for tryhards. Using a regex to grab the correct paragraph or div or whatever is fine, and it's more robust against things moving around on the page.
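A minimal sketch of the regex-over-DOM approach, with an invented HTML snippet and a made-up `price` id:

```python
# Grab the contents of a known div by id with a regular expression
# instead of walking the DOM with a full HTML parser.
import re

html = """
<html><body>
  <div id="nav">Home | About</div>
  <div id="price">$19.99</div>
</body></html>
"""

# Non-greedy match for just the div we care about; it survives markup
# moving around elsewhere, though it breaks if this tag gains attributes.
match = re.search(r'<div id="price">(.*?)</div>', html)
price = match.group(1) if match else None
print(price)  # $19.99
```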