Hacker News | dmk's comments

Google investing $40bn in a company that competes directly with Gemini is one of those moves that only makes sense if you think of it as buying compute customers, not backing a competitor. Anthropic pays Google for TPUs and Cloud services, so a big chunk of this investment surely flows right back to Google.

The acting_vs_clarifying change is the one I notice most as a heavy user. Older Claude would ask 3 clarifying questions before doing anything. Now it just picks the most reasonable interpretation and goes. Way less friction in practice.

Haven't had a chance to test 4.7 much, but one of my pet peeves with 4.6 is how eager it is to jump into implementation. Though maybe 4.7 is smarter about this now.

I have the opposite experience. It now picks the most inane interpretation or makes wild assumptions, and I have to keep interrupting it more than ever.

I really hate that change; it's now regularly picking a bad interpretation instead of asking.

Yeah, that really feels like a choice that should be user preference.

Good list but the biggest missing piece for most new SaaS products right now is AI/LLM APIs. If you're building anything with AI features you're calling OpenAI, Anthropic, or similar - all US. Mistral exists but the ecosystem around it is much thinner. That's probably the hardest US dependency to drop in 2026 that I can think of.

Plausible is a great pick though, been using it and it covers most of what you'd want from analytics unless you need GA/GTM tied to ad campaigns.


I’d say open models are catching up to proprietary ones quite quickly, and those open models can be hosted on European infrastructure [1]. Some providers have direct model-as-a-service APIs, and others offer dedicated hosting for whichever model you choose. Qwen 3.5-397b-a17b and now Minimax M2.7 are two very strong contenders.

[1] https://www.scaleway.com/en/docs/generative-apis/reference-c...


I just looked at Scaleway’s pricing for two popular open source models (gpt-oss-120b and qwen3.5-397b) and it’s meaningfully more expensive than alternatives (e.g., many you’d find on OpenRouter).

I don't understand this statement at all. The OpenAI API is a de facto standard that works against any number of models hosted by a whole pile of providers, and the open-weight models from Chinese labs are available from providers that aren't on US soil, likely including some in the EU. Or you could just pay the $$ and host vLLM on your own GPU. Many of them (K2.5, the Minimax, the GLM models, the Qwen 3.6 models) are about as capable as frontier US models from about 4 months ago.

Unless you're trying to run a frontier coding agent at Codex/Claude Code levels, that's not a hard blank to fill right now.


Fair point on open models + EU hosting, that's a much better option than I gave it credit for. I was thinking more about the "just plug in an API key and go" experience where OpenAI/Anthropic are still way ahead, but yeah if you're willing to do the work the gap is closing fast.

Openrouter gives you exactly what you want and your choice of a huge number of models.

What are some useful ways SaaS companies are using AI? Great way to axe your customer support team.

I don't know about useful, but the most visible one is copywriting. Even when there's a human involved, every startup/small org I know runs content through them. (And that includes this article.) It's definitely something that companies want even if they don't necessarily need it (like analytics).

By far the best AI+human customer support mechanism I've experienced is through SMS/messages. They support auth, they're asynchronous, there's no app or custom interface to time out, it's easy to send complex queries as text, and you have the log right there. Apple does this really well. Delta also does, surprisingly, because their AI phone bot is garbage. It's also presumably easier for the human agents to multi-task.


Do you folks know any support bot which is actually useful to the customer? No, I don't mean cheap for the company, I mean helpful, fulfilling the goal for which (at least according to the powerpoints) it exists?

In Xero, for example, searching for invoices or contacts in the new AI tool is much slower and more cumbersome than using the old search field!

Oh, you mean a useful way, never mind.


Great ways to get buzzwords on your investor slides!

I make a point of describing things written conventionally as "not using artificial intelligence, just using good old-fashioned reliable analogue stupidity".

I mostly meant getting to call your product barely used AI toilet paper instead of just barely used toilet paper.

Apparently AWS's European Sovereign Cloud has Bedrock, so that could be an option.

The AWS Sovereign Cloud is still owned 100% by Amazon Inc. in the US. Not saying that rules it out for all use cases, but something that should be mentioned. "Sovereignty" is a somewhat vague term.

<American Company> European means nothing. They are all subject to the US CLOUD Act, and the moment you start using their services, there are inevitably one or two services that end up contacting us-east-1 anyway. And that's without taking into account that they are all trying to fuck you over from behind anyway as they sign data exchange agreements between Europe and the US.

The large US players are not an option if you want your data safe from the US.


I haven't looked into the details but I remember from the announcement that the EU cloud is owned specifically by an EU entity headed by EU citizens. There would be no point spinning up a 'sovereign cloud' beholden to the US.

... And this entity is again owned by AWS. And so the cloud act still applies.

> There would be no point spinning up a 'sovereign cloud' beholden to the US.

Of course: it gives (both sides) a narrative that lets them pretend everything is alright.


How would the cloud act apply if none of the employees of the AWS European Sovereign Cloud are US citizens?

> Courts can require parent companies to provide data held by their subsidiaries.

https://en.wikipedia.org/wiki/CLOUD_Act


But they would have no way to actually compel anyone who isn't a US citizen. The worst the US could do is fine Amazon until it complied.

Edit: Looks like the below is not true. However, such setup is technically possible and if they were serious about making it truly isolated from US influence, it can be done.

Original comment: No it's not owned by AWS. It's a separate legal entity with EU based board and they license the technology from the US company.


This source says it's 100% owned by AWS USA:

https://openregister.de/company/DE-HRB-G1312-40853


Hmm, I'm not sure how to interpret that page, but it looks like you are right; I'll edit my comment. I was told by GCP PMs that this is how the GCP/tsystems setup is structured (see sibling comment) and that it mirrored the AWS setup, but maybe that was not correct.

How difficult would it be for the "independent" licensor to exfiltrate data from the "sovereign cloud" via logging or replication?

The control planes have to be completely independent for anything approaching real independence, not just some legal fiction that's slightly different[1] from the traditional big-tech practice of having an Irish subsidiary license the parent company's tech for tax optimization purposes.

1. No different at all, according to sibling comment.


I don't know about AWS but I dealt with some (small / tangential) aspects of the GCP setup: https://www.t-systems.com/dk/en/sovereign-cloud/solutions/so...

It is completely separate. There isn't a shared control plane. You don't manage this in the GCP console; it's a separate white-label product.

Any updates GCP wants to push are sent as update bundles that must be reviewed and approved by the operator (tsystems). During an outage, the GCP oncall or product team has no access and talks to the operator, who can run commands or queries on their behalf, or share screenshots of monitoring graphs etc.

(This information is ~3 years stale, but this was such a fundamental design principle that I strongly doubt it has changed)


The fact that they deliberately manufacture the satellite clocks to tick at the wrong frequency on the ground (10.22999999543 MHz instead of 10.23 MHz) so that relativity makes them tick correctly in orbit is one of my favorite engineering details in any system.
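
For anyone who wants to check the arithmetic: a rough back-of-envelope version of that offset falls out of the standard GR and SR corrections. The sketch below assumes a circular orbit and ignores Earth's rotation and orbit eccentricity, so it lands near, not exactly on, the official value.

```python
# Approximate the GPS clock-rate offset from first principles.
# Assumptions: circular orbit, non-rotating Earth, standard constants.
GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0         # speed of light, m/s
r_earth = 6.371e6         # mean Earth radius, m
r_orbit = 2.6561e7        # GPS semi-major axis, m (~20,200 km altitude)

# General relativity: a clock higher in the gravity well ticks faster.
gravitational = (GM / c**2) * (1 / r_earth - 1 / r_orbit)   # ~ +5.3e-10

# Special relativity: orbital velocity slows the clock down.
v_squared = GM / r_orbit          # circular-orbit speed squared
kinematic = v_squared / (2 * c**2)                          # ~ -8.3e-11

net = gravitational - kinematic   # net fractional rate, ~ +4.46e-10
ground_freq = 10.23e6 * (1 - net) # pre-detuned frequency, Hz
print(f"net offset:  {net:.3e}")
print(f"ground freq: {ground_freq:.5f} Hz")
```

The official detuning (10.22999999543 MHz) corresponds to a fractional offset of about 4.465e-10; this crude model lands within a fraction of a percent of that.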

The headline is dramatic but this is literally how bitcoin is designed to work. Miners leave, difficulty drops, costs go down, mining becomes profitable again. The interesting part isn’t the loss per coin, it’s how long the lag between unprofitable mining and difficulty adjustment keeps forced selling pressure on the market.


It sounds very similar to things like oil production, gold mining, and even farming. When the price is high, everyone wants in on the action. As supply explodes, the prices drop. Once prices get low enough, the costs to pump the next barrel of oil, find the next ounce of gold, or harvest the next acre of a certain crop exceed the reward. When that happens, wells are shut down, mining operations suspended, and different crops planted. The cycle begins again.


There's a soft failure mode for bitcoin where, due to the lag in the difficulty adjustment, you could end up with people only mining every other 2016-block epoch.

Let's call this cycle A and cycle B.

If A is too hard, miners drop out, cycle B gets easier, miners flood back, cycle A gets harder.

This results in the hard cycle getting longer and the easy cycle getting shorter.

This isn't completely critical, as there is, I believe, a small damping effect, so it isn't lethal to bitcoin. But a key thing about bitcoin mining is that whether other people are mining or not doesn't actually affect your own profitability.

Other people dropping out doesn't mean you get more bitcoins per hour/watt; it only affects the next difficulty adjustment as a secondary effect.
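
The A/B dynamic above can be sketched in a few lines. This is a toy model with made-up numbers, not real chain data: hashrate chases the previous epoch's profitability with a one-epoch lag, while difficulty retargets with Bitcoin's real 4x-per-epoch clamp.

```python
TARGET = 600.0  # target seconds per block

def retarget(difficulty, avg_block_time):
    """Bitcoin-style retarget: scale by target/actual, clamped to 4x."""
    ratio = min(max(TARGET / avg_block_time, 0.25), 4.0)
    return difficulty * ratio

def simulate(epochs, d0=2.0, damping=0.0):
    """Toy miner behavior: each epoch's hashrate is set from the
    PREVIOUS epoch's difficulty (profit per hash ~ 1/difficulty).
    damping=0 means miners fully chase last epoch's profitability."""
    d, history = d0, []
    for _ in range(epochs):
        h = (1.0 / d) ** (1.0 - damping)  # lagged, possibly damped response
        avg_block_time = TARGET * d / h   # blocks slow when d outruns h
        d = retarget(d, avg_block_time)
        history.append(d)
    return history

print(simulate(8))               # flips 0.5, 2.0, 0.5, 2.0, ... forever
print(simulate(8, damping=0.3))  # oscillates but converges toward 1.0
```

With no damping the difficulty ping-pongs between a hard and an easy epoch indefinitely; any damping at all (fixed costs, power contracts, switching friction) shrinks the swing each cycle, which matches the "small damping effect" point.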


The damping effect is that part of your costs are the hardware, space, depreciation etc. leaving that stuff idle costs money - so it makes sense to mine in the less profitable periods too.


That depends on each miner's energy costs. It makes sense to keep mining so long as (variable cost of energy - revenue from coins) < fixed costs. It's still negative cashflow either way, but the monthly losses have to be weighed against the cost of going insolvent and losing the hardware.
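
A concrete version of the idle-vs-mine comparison, with purely hypothetical numbers. Note that fixed costs cancel out of the mine-vs-idle choice and only matter for the mine-vs-exit decision raised above.

```python
# Hypothetical monthly figures for one mining operation.
fixed = 1_000.0    # $/mo: rent, depreciation, debt service (paid either way)
energy = 800.0     # $/mo of electricity if the rig runs
revenue = 950.0    # $/mo of mined coins at current prices

loss_if_mining = fixed + energy - revenue  # 850.0
loss_if_idle = fixed                       # 1000.0

# Mining is the smaller loss whenever coin revenue at least covers the
# marginal electricity bill -- even though both options lose money.
keep_mining = loss_if_mining < loss_if_idle
print(loss_if_mining, loss_if_idle, keep_mining)
```

The full exit option (liquidating so fixed costs stop entirely) is the third branch: that is where "losing the hardware" enters the comparison.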


Yes though AFAIK electricity is a large %


The larger it is, the less likely your mining setup is actually all that solid.

The best miners are doing so with near free electricity, either with things like subsidized solar, or energy acquired from things like nat gas that'd otherwise get flared, or hydroelectric power that exists too far from civilization to have a demand otherwise.

If your miner is plugged into the grid, you're probably doing it wrong.


Crypto-miners are switching to AI token farming when bitcoin is low. They have compute that's both installed and powered, so why not do what pays better?


For bitcoin at least, you need totally different silicon.

I guess you could share the power supply and cooling infra, but I am dubious the savings are enough to have half your silicon idle all the time.


What the hell is AI token farming?


I think they mean serving inference workloads


How does that work? Isn't most bitcoin mining done on custom ASICs? I didn't think that the ASIC could be repurposed for inference.


Training ASICs (like Google’s TPUs) can generally run inference too, since inference is a subset of the training computations. TPUs are widely used for both.

Mining ASICs (Bitcoin, etc.) cannot be repurposed: they’re hardwired for a single hash algorithm and lack the matrix-math units needed for neural networks.


The biggest cost is the power which is often on multi year contracts. The hardware is comparatively cheap

That's wildly inaccurate. The hardware cost is enormous on both the inference side and the mining side, and it has short lifetimes if you want SOTA.

I think you're right. It's counterintuitive, but less competition means fewer total rewards to share for those who keep mining. Though transaction fees per hour shouldn't decrease, so maybe your share of that is bigger.


The difficulty can only adjust by a factor of 4 per retarget, which also limits the incentive change. You'd need more than 90% of miners to disappear to start seeing actual problems.

I thought the rate of mining was tied to the maximum transaction rate the network can support?


It's the other way around, and there's no obligation to even carry transactions when mining, although it's incentivised through fees.

Your mining rate is simply your hash rate vs the hash difficulty.

Conceptually, it's analogous to rolling random numbers in (0,1) until you get a number smaller than 1/X, where X is large.

How long it takes you to do that isn't dependent on how many other people are also trying: if you get 1 hit per hour, lots of other people getting hits doesn't actually stop you getting your 1 hit per hour.

Now, that's not quite the whole truth, as there's a small amount of time needed for propagation of the previous block, but with a global average of one hit per ~10 minutes, that's not actually a big factor.

What could happen to incentivise people is increased fees if blocks get less common due to dropped miners, there'd be more competition to get into blocks if they start filling up.

That, combined with the fixed costs such as depreciation that others mentioned, keeps the risk of this form of failure to a minimum.
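
A toy version of the dice analogy above, with made-up numbers. Every second each hasher "rolls" independently, and a roll below p finds a block; at a fixed difficulty, my finds depend only on my own rolls, and other miners' rolls never gate mine (retargeting, ignored here, is the only channel through which they matter).

```python
import random

random.seed(7)

def my_finds(p_me, n_others, p_other, seconds):
    """Count my block finds over `seconds` one-second rounds."""
    found = 0
    for _ in range(seconds):
        for _ in range(n_others):
            random.random() < p_other  # competitors roll too, to no effect
        if random.random() < p_me:
            found += 1
    return found

alone = my_finds(1e-3, 0, 1e-3, 200_000)     # expectation: ~200 finds
crowded = my_finds(1e-3, 20, 1e-3, 200_000)  # same expectation: ~200
print(alone, crowded)
```

The orphan-race subtlety (two finds in the same tick) is the small propagation effect mentioned above; at one find per ~10 minutes it is rare enough to leave out of the toy model.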


There is an interesting missing link in the feedback cycle with Bitcoin though - the same amount is produced regardless, supply does not contract with demand.


Satoshi thought of everything, man.


Except people wanting to do more than 15 transactions a minute. Or that to scale everyone would need to store a petabyte size blockchain.


https://en.wikipedia.org/wiki/Lightning_Network

I have been paying for my VPN with lightning payments; it takes less than one second to go through.


The Lightning Network is specifically designed to work around bitcoin design flaws. It entirely sidesteps the chain for a big part of the process. To me it proves that Satoshi did not, in fact, think of everything. Not the other way around.

Lightning has mostly done this by being a lot more centralized in practice and one could argue... What's the point of it all in this case? Why not just use regular currency?


Sorry, I do not understand your comment. Can you clarify. What does "a lot more centralized in practice" mean?

> What's the point of it all in this case?

Lightning is an L2 protocol, highly scalable and used for low-cost payments in Bitcoin. Level 1 networks are almost never used for user transactions: your credit card payments do not go over fedwire, etc. The Bitcoin protocol is not scalable enough to serve worldwide money transfer needs; Lightning is, at a cost of around a penny per transaction.

> Why not just use regular currency?

There are a lot of frictions in the current banking systems, because money laundering, because drugs, because whatever. Getting $5-$10k in regular currency while on an overseas trip can be a major quest. With Lightning I can transfer that much (or more) in a few mouse clicks.

As a side note, I think the federales are already way too nosy regarding my use of my own money, so I want to give alternative options as much business as I can. My 2c.


Isn't it hard to use in practice? Liveness, inbound liquidity, moving funds between L1 and L2: don't all of those lead to massive use of hubs, thus denying the entire premise of decentralization?

Very easy. If the merchant supports it, it is extremely easy; equivalent to pointing your phone at a reader to pay with GooglePay. Between people -- a QR or similar.

This doesn't answer my main concern. How do most people use Lightning? Do they operate on their own or use a big hub?

Sorry, and no offense intended, but can you be clearer? What exactly is your main concern that you allude to above? And how, in your view, does Lightning compare with the alternatives in that specific regard?

Most people who use Lightning do not operate their own nodes; same as with other payment methods -- credit card users do not operate their own payment networks, people writing checks do not operate their own banks, etc.

It feels like we are talking across each other and I just do not get it.


My point is that for legitimate payment uses, Lightning and all current cryptocurrencies are useless. They were sold as the "democratization of finance" that would "help the world's poorest", and they either:

1. can't allow a single village to operate purely on them, because they're too slow

2. or they're not decentralized, and the entire "democratic" angle dies with that

And then what's the point of cryptocurrencies for most people? Why not just use the "tradfi" and "fiat currencies" and use the money propping up cryptocurrencies to actually make the world a better place?

If we shut down all current cryptocurrencies and diverted the resources propping them up to actually productive uses, the world would probably end up with a net gain.

I'm just ranting. I would want cryptocurrencies to be amazing, but right now they seem useful for people with cyberpunk fetishes, for criminals, and, rarely, for actual regular people from fragile states (not rich people trying to exfiltrate wealth).


> what's the point of cryptocurrencies for most people? Why not just use the "tradfi" and "fiat currencies"

Maybe because otherwise for a significant portion of the world population their own fiat is the only game in town. And it sucks so much that the regular people are willing to break laws and risk fines, confiscations and occasionally prison just for keeping their savings in anything else, like a neighbor country fiat. Their govvies use their fiat as a transfer mechanism (which makes saving impossible) and thus must discourage any other savings vehicle; otherwise no fool will use their fiat.

I saw a fiat rug pull twice in my first 25 years: once as an instant nationalization (a friendly radio announcement one night that your money is... well... not money anymore) and, later, a hyperinflation that over 2 years wiped out savings. And being found with a less sucky fiat at home meant jail.

I was just a kid, did not have any savings and thus did not care that much. But an older generation lost everything. So yes, a lot of people will gladly use anonymous, permissionless money, drawdowns and other warts included.


>Level 1 networks are almost never used for user transactions: your credit card payments do not go over fedwire, etc.

Fedwire isn't a "level 1 network", it's an entirely different service with different end users and goals in mind. ACH isn't an "L2 protocol", but does orders of magnitude more transactions per second than Bitcoin.

It's like cryptobros don't understand the basics of the systems they're attempting to replace.


Could you elaborate why it is more centralized?

The point is that it is resistant to censorship, it is pseudonymous, and so on (all the other bitcoin attributes apply)


> it takes less than one second to go through

Like Bitcoin used to be, before someone had the brilliant idea of destroying the possibility of zero-confirmation transactions on-chain with Replace-by-fee transactions.


How does this work? I read the wikipedia article but I don't understand how Lightning enforces the transaction.


The peers generate two sets of transactions.

One is a quick summary of the current balance in a channel. A new transaction is created each time the balance in the channel changes. It's somewhat cheap to put on the blockchain (and the main saving is that you only need to post the final update when you close the channel), but vulnerable to one side putting an old, stale transaction onto the blockchain to profit.

The other transaction forms a chain of proof for current state, invalidating previous balance update transactions. It's somewhat expensive to post, as it will pull in the whole history.

Both peers need to continually watch the chain (or contract a 3rd party to watch) to make sure the other peer isn't cheating by posting a stale balance transaction. These special transactions are time-locked, so once one is posted, you have something like 24 hours to post the proof transaction and reverse it.


This link explains it a bit better: https://lightning.network/ and see the paper at the end for the exact details


> Except people wanting to do more than 15 transactions a minute

It's more like 7 transactions per second, which is still absolute crap, but that was after the original Bitcoin project was kidnapped. There aren't such limitations in the original Bitcoin (forked as Bitcoin Cash)

> Or that to scale everyone would need to store a petabyte size blockchain

That is addressed in the whitepaper (SPVs and pruning)


Clearly not, because they created wallets that they can’t even use without unmasking their pseudonym. Seems pretty stupid to me.


Doesn't this assume that traceability of all transactions wasn't a goal?


Except for the inevitable and obvious fact that proof-of-work creates a self-sustaining primary incentive for energy waste more pernicious than has ever been seen in any other financial or commercial enterprise, obliterating any hope of having energy that is too cheap to meter.


Isn't this kind of the opposite?

Mining Bitcoin requires both hardware and electricity, and the cheapest electricity is solar. There isn't any severe scarcity of the raw materials to make solar panels, or of sunlight, so Bitcoin miners can buy as many solar panels as they want and it would only increase the economies of scale for producing them for other purposes too.

Solar has inconsistent output. There is none at night and it varies with weather during the day. Mining hardware wants a fixed, constant amount of power. The logical thing for miners to do is to somewhat overbuild the amount of generation they need, sell the surplus to the grid during the day, and buy power back at night. The same incentives hold if the miners and the generators are two different parties, and the result is to increase the amount of generation capacity by more than the amount of consumption, with "too cheap to meter" pricing during periods of above-average generation. (You were never going to get "too cheap to meter" during periods when generation is low and demand is high.)

And even during the short periods when demand significantly outstrips supply, their incentive is to stop operating those few days out of the year, because the spot price of electricity makes mining unprofitable then. That frees the generation capacity installed for mining to support the rest of the grid, and inhibits the price of electricity from rising above the point where mining becomes unprofitable even for people who already have mining hardware. It's basically a buffer that buys electricity when it's cheap and sells when it's expensive.

Bitcoin has a volatile price. When the price is high, miners buy hardware and increase, or pay someone else to increase, generation capacity. When the price declines, the mining hardware becomes idle, but the power generation capacity still generates fungible electricity that can be used for any other purpose.

The result is that miners pay to install a lot of generation capacity during the boom, and have the incentive to prioritize investing in more generation rather than newer/more efficient mining hardware because it's the thing that's still worth something if the price declines. That generation capacity then gets offloaded into the grid during the bust, with the result that grid prices go up some during the boom and down by even more during the bust. By the next boom, some of the generation added last time has already been sold to non-miners or locked into long-term contracts, so now they're back to adding new capacity again.

"Incentive to fund increases in generation capacity but then not use all of it" has what effect on average prices?


You're making a lot of highly idealized assumptions that don't hold true in reality.

Most significantly, that the increased demand due to mining will result in grid operators investing in proportional new capacity over a reasonable time scale, instead of just driving up prices through basic supply and demand.

Also that miners are only consuming electricity when renewables dominate the mix. Otherwise they're responsible for more CO2 emissions to do something useless.

Plus, in markets like Texas, miners also manage to get subsidies intended for actually useful customers, like factories, for going offline at peak times. So ratepayers are essentially paying protection money so miners won't over-stress the grid by performing their useless work.

In a world where bitcoin miners had to install new solar capacity to entirely offset their peak usage and sell back to the grid any excess then sure, seems like that wouldn't be a big societal net negative like it is right now.


> Most significantly that the increased demand due to mining will result in grid operators investing in proportional new capacity to offset it over a reasonable time scale. Instead of just driving up prices due to basic supply/demand.

If anyone can buy a piece of land, plop down some solar panels and start selling power to the grid, that's what happens whenever the market price gets higher than the cost of doing it. If they can't, that seems more like a regulatory problem than a Bitcoin problem.

> Also that miners are only consuming electricity when renewables dominate the mix. Otherwise they're responsible for more CO2 emissions to do something useless.

~100% of new net generation capacity in the US is renewables and that seems poised to continue for economic reasons. Adding some new nuclear could also make sense in an amenable political environment (some new data centers are trying to build it) but that also doesn't emit CO2.

> Plus in markets like Texas, miners also manage to get subsidies intended for actually useful customers like factories to go offline at peak times. So ratepayers are essentially paying protection money so they won't over stress the grid by performing their useless work.

Those "subsidies" (really discounts) exist because the cost of supplying power 100% of the time is dramatically higher than the cost of supplying it 99% of the time, so you pay less if you only need it 99% of the time and the power company gets to choose when.

Having more customers that can do that allows the grid to supply power to everyone for less money. You install some solar panels whose average generation can support a Bitcoin mining operation + X number of homes. When the output is half of normal for an extended period of time, the Bitcoin mining operation cuts out and "half of normal output from twice as many panels" can power all of the homes. Without the mining operation buying half the typical output there would only be half as many panels to begin with and then you would need something like natural gas peaker plants to power the other half of the homes when solar generation is low, which both costs more and emits more CO2.

> In a world where bitcoin miners had to install new solar capacity to entirely offset their peak usage and sell back to the grid any excess then sure, seems like that wouldn't be a big societal net negative like it is right now.

If you care about CO2 then you do a carbon tax or similar (and then refund all the money to the public as checks so it doesn't damage the economy), at which point that's exactly what happens, only it happens for everybody and not just Bitcoin miners, which is what you want anyway.


The first argument really really does not make sense.

You can also increase economies of scale by building out solar farms, and using them for something useful, instead of wasting it on guessing random hashes.

Saying that wasting energy is fine as long as you get it cleanly doesn't change the fact that you're still wasting it.


Or we could use all that "free solar energy" to benefit humanity through a million other more useful endeavors. Such as developing and deploying batteries.

One thing we do not lack is demand for more energy.


> Or we could use all that "free solar energy" to benefit humanity through a million other more useful endeavors.

Please tell me where I can get unlimited solar panels for free. I'll rent a truck and be there straight away.

> One thing we do not lack is demand for more energy.

Market demand is the willingness and ability to pay money for something. If the demand was actually unlimited then why isn't there either a Dyson sphere around the sun already or a 0% unemployment rate from everyone having a job building one?


Now compare it to the annual energy use for the creation/printing of money and funding of infinite wars due to the Federal Reserve having the ability to print money out of thin air at the cost of future generations.


>Federal Reserve having the ability to print money out of thin air at the cost of future generations.

As a non-American, it's hard not to notice that it's not future generations. It's everyone using dollars.

And since your country will be invaded if you try not using dollars to trade oil, and everyone needs oil (transport, food/fertilizers, medicine synthesis), then it's literally the whole world paying.

Which incentivizes the USA to print money, because they shoulder only a small part of that burden.


The difference is that with oil/gold/grain/etc., the quantity being supplied actually matters.

For mining, it is only necessary that it happens at all.

The amount of work in mining is way higher than is required to prevent another party from being able to overwhelm the blockchain. It is that high because of the block-reward subsidy: if Bitcoin has a high value, the reward is worth a lot.

This is factored in with the halving of the reward. Either the price will increase exponentially or the mining reward will drop, causing mining to shrink to those who can be profitable from fees alone. That rewards those who can mine most efficiently; it becomes a supply-and-demand calculation in a market where there are relatively low barriers for competitors.


> The amount of work in mining is way higher than is required to prevent another party from being able to overwhelm the Blockchain.

Isn’t that exactly the point? Bitcoin incentivized wasting resources. It is, according to your own comment, unnecessary to use so much computing to keep bitcoin going. But it’s being used.


The level needed to be secure is much lower than that.

If Bitcoin were worth much less the network would still be secure even though the mining reward would only be enough to pay for a fraction of the current processing.

If Bitcoin does not double in value every four years, the mining reward will reduce in real world terms.

Claiming the mining resources required will be at the current level or higher perpetually requires also making the claim that you think that the value will increase exponentially forever.

Nothing increases exponentially forever.


Yep economics rules everything around me


The headline is confusing the issue. Bitcoin miners are losing money because October's crash took Bitcoin from $126,000 to below $70,000, and the Iran war has pushed up oil and electricity prices. The minor difficulty drop is a result of that, as some Bitcoin miners drop out. It's not the cause.


It is how bitcoin is designed to work, but it also shows very directly how proof-of-work systems can never scale to be the global monetary replacement its boosters push. If the opposite happened, and the price for some reason skyrocketed to, say, $1 million per bitcoin, it would induce more miners until the difficulty, and the consequent electricity cost (regardless of the efficiency of electricity generation), also rose to the neighborhood of $1 million per coin. At that point you're far beyond "Argentina levels" of electricity and getting into "Europe levels" of electricity to run the network.

The electricity demand (and here I mean the overall cost of the electricity, so improvements in $ per kilowatt just mean you need to use more electricity) in proof-of-work systems fundamentally scales linearly with the overall valuation of the coins in the network, which means proof-of-work systems can never scale as large as their fanboys would have you believe.


While I don't disagree in general, there are a couple gaps in your reasoning that weaken the argument:

Adoption doesn't necessarily correlate completely with price. Price can increase without much adoption, due to speculation. In theory, adoption could also increase without much price increase.

Electricity isn't the only requirement for mining. Hardware is also required. Miners can't simply use lots of additional electricity if the hardware isn't there. Yes, new hardware can be manufactured, but it takes time.

The block reward decreases over time. If it's using Europe levels of electricity at time X, then after a block reward decrease, it'll use Europe/2 amount of electricity. This decreasing also disincentivizes manufacturing new hardware.

Miners can have different efficiencies, due to different types of hardware, and different types of electricity generation. So while the least efficient miner will be operating at near breakeven, the most efficient miner will be making much more profit. So while the least efficient miner will use $1M of electricity to mine a $1M coin, the most efficient miner will use less dollars of electricity.


> In theory, adoption could also increase without much price increase.

Not really. A fundamental purpose for any currency is to act as a "store of value". There is no way for bitcoin to represent a store of value (i.e. value commensurate to real-world goods) for a larger and larger portion of society without the price skyrocketing, especially since Bitcoin is inherently deflationary with a max number of coins.

Regarding your other paragraphs, I think this is a fundamental misunderstanding of how proof-of-work is designed to protect the network. The entire idea behind POW is that the total amount of work must be in direct relationship to the total value of the coins in the network, or else coordinated attacks become possible. I see this misunderstanding all the time in "the block reward decreases over time" argument. It doesn't really matter if miners get their payoff from block rewards or mining fees - they must (on average, over time) get enough reward to make their mining activity worthwhile, and, again, by the inherent design of POW, they need to spend enough on mining to make 51% attacks not worth trying. Just think about how your "If it's using Europe levels of electricity at time X, then after a block reward decrease, it'll use Europe/2 amount of electricity" sentence doesn't make any sense, because eventually in 2140 or so there will be no block rewards, so according to your logic no electricity at all would be required to run the network.

There is simply no getting around the fact that resource costs need to grow linearly with the total value of the network in POW systems.


>The entire idea behind POW is that the total amount of work must be in direct relationship to the total value of the coins in the network, or else coordinated attacks become possible.

The total amount of work must be in direct relationship to the amount an attacker can gain from executing a 51% attack. It's not clear to me that if bitcoin doubles in price, an attacker can gain double the amount from a 51% attack. A 51% attack doesn't allow direct theft of other people's bitcoins. It allows double spend attacks, denial of service attacks, and through those, the ability to tank the price of bitcoin.

>Just think about how your "If it's using Europe levels of electricity at time X, then after a block reward decrease, it'll use Europe/2 amount of electricity" sentence doesn't make any sense, because eventually in 2140 or so there will be no block rewards, so according to your logic no electricity at all would be required to run the network.

It's possible for a block reward to be larger than necessary for security. In that case it can go through several halvings that purely improve efficiency without putting the network at risk. Yes, at some point, with a sufficiently large number of halvings, the network would be at risk, but that doesn't mean we can't have some efficiency gains before that happens. Your previous comment referred to bitcoin using more electricity than Argentina. That's a statement about how much electricity it's currently using, not a statement about how much electricity it needs to use to get the necessary amount of security. It might be possible to decrease the electricity usage while remaining sufficiently secure.


You crucially missed the "halving" out of that model.

The block reward halving every 4 years means that in 3 halvings (8-12 years), miners will spend roughly the same on electricity at $1M/BTC as they do now at $125k/BTC.

Further, at historical rates of dollar devaluation, in a decade $1M will only be worth ~$500k today, and so really only roughly two halvings are required to even the electricity use between $125k/BTC and $1M/BTC
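A quick back-of-the-envelope check of that claim as a Python sketch. The prices are illustrative; per-block miner revenue is roughly price times subsidy, and each halving cuts the subsidy in half:

```python
price_now, price_future = 125_000, 1_000_000  # $/BTC, illustrative figures
subsidy_now = 3.125                           # BTC per block today

revenue_now = price_now * subsidy_now         # $390,625 per block
for halvings in range(4):
    revenue = price_future * subsidy_now / (2 ** halvings)
    print(halvings, revenue / revenue_now)
# 0 8.0, 1 4.0, 2 2.0, 3 1.0 -- after three halvings, $1M/BTC pays
# miners the same per block (nominally) as $125k/BTC does today.
```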


If "difficulty drops, costs go down", shouldn't the price follow? Isn't that basic economics? Or are they chasing the "phase difference", the lag between supply and demand?


I am not certain; but, costs do not have a causative relationship to prices. Prices only go down because as the cost of production goes down, supply increases. It is a correlative relationship.

Bitcoin's supply won't increase as costs go down, unlike other assets.


> costs do not have a causative relationship to prices. Prices only go down because as the cost of production goes down, supply increases.

Um. That's a causative relationship, even if it's mediated, but it's still causative. And generally, the relationship is even more direct: suppliers are quite reluctant to sell at a price lower than their costs unless they expect prices to go up soon enough™, so lower boundaries for prices exist.


The mining reward isn't a direct transaction that has a price.

Competing for it is more of a game that has a cost to participate in.


It's the reverse.

As price per coin goes up, more folks will find mining profitable and invest in mining operations. Difficulty goes up until it's no longer attractive for anyone to add to the global hash rate.

As price per coin goes down, fewer of those operations are profitable and fewer new people will find it to be a good investment. Difficulty stays the same or goes down. Due to capital expenses, difficulty is more sticky in the downward direction than upwards.

There is of course some marginal price action in between where there is in theory selling pressure from miners when it's less profitable to mine (to fund operational expenses and debt), but I don't think it's super material to the overall market volume these days.


It's both. You're talking about the demand curve. The other thing is the supply curve.


Price isn't affected by mining difficulty, only the other direction.


This only works as long as the difficulty can drop faster than miners leave.

In normal times that's taken for granted, but once the edge case does happen, it collapses the entire system.

edit: the earlier wording wasn't exact; the scenario is an exponential drop in value that causes an exponential drop in miners willing to mine until the discrepancy can be resolved, i.e. the system is not protected against extreme volatility (e.g. -99% over a block cycle)


No, but if more miners leave then difficulty will drop faster, right? It's modelling supply and demand curves, which are a stable equilibrium in these circumstances.


Might be wrong about what Aperocky is alluding to, but there is an entirely theoretical edge case. The time to the next difficulty adjustment is based on the current speed of mining, and the possible change in difficulty is capped. With enough miners leaving, it will drop the mining/network speed and push out the expected time to the next difficulty adjustment. I can't think of any realistic way this can occur, given that the miners who stay will (personally) be producing blocks as often, the increase in block time being balanced out by their being a larger proportion of the total hash rate. They don't care if they get 1% of the blocks, which average about 20 mins per block, or 5% of the blocks that average 100 mins per block.


Difficulty only adjusts every 2016 blocks. If the system gets out of whack enough it could slow down to a crawl for an extended period of time.

In practice it's not much of an issue because bitcoin is not used for commerce but as a store of value, and some of the trades are not even on chain.


> but once it does happen, the edge case collapse the entire system.

Which is when exactly, and how likely is that to happen? It hasn't happened yet in ~14 years, but I guess "never say never". There is a lot of money saying it won't happen very soon though.


If it happens it'll probably the result of a positive feedback loop forming: miners leaving slowing down transactions and affecting utility/faith in the system resulting in people selling, meaning more miners leaving, etc. That said, I don't know of any clear examples of this happening to any other proof of work coins: I think in general other parts of a cryptocurrency tend to fail first, it requires a particularly fast death for this kind of thing to happen.


It's also a side-effect of a collapse of bitcoin itself: it becomes worth so little that nobody is mining, meaning nobody will mine to the next difficulty adjustment; but the collapse has already occurred.


I don't think you know what you're talking about. If the difficulty lowers at a lower rate than miners leaving then the difficulty rate will stop dropping.


> below miner leaving rates.

What does this mean, sorry?

> the edge case collapse the entire system.

If you mean that if it reaches a certain point the entire system will collapse, it means you don't understand the difficulty adjustment. If it's too expensive to mine, then some miners leave, which makes block times longer, but not to worry, because the consequence of that is just that difficulty will go down, which means you need less hashrate to mine (and maybe some of those miners that left will come back because it is profitable again for them). This means it is essentially impossible for all miners to leave at the same time; some of them stay even at a loss, and some of them are just hobbyists who can already feed their miners with solar power (so there's really no loss for them in leaving them connected).


This makes sense, but what if nobody gets the system to the next checkpoint where the difficulty is allowed to go down?

Yup

The problem with BTC going down is that it's a double whammy of not only BTC going down but also the cost of its shovels going up

Before: BTC pays $100k but a shovel costs $300

Now: BTC pays $70k but a shovel costs $$??

Bitcoin asked the right questions but came back with the wrong answers


What's a shovel?


They're using the analogy of mining for gold. the cost of a shovel/pitchfork goes up when the price of gold goes down - which is a double whammy


you didn't answer the question. A shovel in this case is the equipment + energy needed to mine (GPUs, etc.)


Which is pretty much obvious to anyone who has heard of bitcoin in the year of our lord 2026

Especially since the "sell shovels during a gold rush" has been used to apply to nVidia


But the person upstream hasn’t. It’s not obvious to them. Which is why a good answer has to include the detail.


But mining costs are (cost of equipment+cost of electricity)/total coins mined, so can miners not end up in a situation where they need to keep mining to pay off equipment despite the individual coins being unprofitable?


It's no different to a mortgage being in negative equity as the home owner would still be in debt after selling the property.


I'm far from a crypto expert but aren't costs largely GPUs and electricity here?

Those are now being driven by massive AI demand and are likely to remain so for the forseeable future. So how would costs go down?


The cost of finding a block goes down because it becomes less difficult.

The goal in proof of work is to find a block hash less than a given value. That value is determined by the network difficulty. The lower the value, the more difficult it is to find a block, and thus the more expensive it will be to mine.

Difficulty is adjusted once every two weeks to target an average block time of 10 minutes. If the average block time during the preceding 2 weeks is less than 10 minutes, it means that blocks were too easy to find (i.e. the difficulty was too low relative to total hash rate of the network). Conversely, if the average block time was greater than 10 minutes, the difficulty was too great.

This is how the network has maintained a roughly 10 minute block time as the hash rate of the network has grown over the past 16 years. The difficulty (i.e. cost) of finding a block is constantly being adjusted.
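For anyone who wants to see the shape of it, here's a toy Python sketch of that retargeting rule. It's illustrative only: real nodes use integer target math, and the clamp below mirrors Bitcoin's factor-of-4 limit per adjustment:

```python
TARGET_BLOCK_TIME = 10 * 60   # seconds
RETARGET_INTERVAL = 2016      # blocks, roughly two weeks at 10 min/block

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    """Scale difficulty so the next 2016 blocks take about two weeks."""
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    ratio = expected / actual_seconds
    # Bitcoin clamps each adjustment to a factor of 4 in either direction.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Blocks arrived twice as fast as targeted -> difficulty doubles.
print(retarget(100.0, 604_800))     # 200.0
# Hash rate collapsed tenfold -> the drop is clamped to 4x per period.
print(retarget(100.0, 12_096_000))  # 25.0
```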


I don't think GPUs are competitive at all. You need specialized mining rigs with bitcoin mining specialized chips.


And it's been that way for a solid decade.


Bitcoin is no longer mined by GPUs but by ASICs


Don't the ASICs compete for the same fab capacity that produces GPUs, RAM, SSDs, etc.?


You’re fractionally right with GPUs but RAM and SSDs run on different processes at different fabs.


They compete with older GPUs. Not new ones, not RAM, and not SSDs.


If costs stay high, then people will drop out of bitcoin mining, which will cause supply to go down and bitcoin prices to go up.


It won’t cause supply to go down, the same amount of Bitcoin is produced whether it’s mined by millions of ASICs or a single 2008-vintage laptop.


In the gap between cost going down and profitability, is there not an increased risk of Sybil attacks?


You'd still need to have 51% of the network to perform any successful attack, which despite the price drop is a MASSIVE capital investment.

What does “leave” in this context mean?


Turn off their mining rigs.


Or use it for other coins.


As you can see on https://www.f2pool.com/coins, other coins using SHA256 as their PoW algorithm only amount to about 1% of Bitcoin's daily dollars of PoW produced, so if any nontrivial amount of hash moves there, those will soon become unprofitable too.


They all correlate with bitcoin. Same problem probably applies.


No. all coins do not have equal mining participation.


I think the comment you replied to meant that the other coins are also dropping in price, when bitcoin drops.


Yes, but the coins with less participation require less power to compete. To make a market argument that there is an equilibrium of players across all coins, implies there are actual individuals finding opportunities and switching coins when they get out of sync.


Miners already switch between coins, yes.

And their reason for switching coins is?…

Stop mining.


"stop"

(Obviously the equipment doesn't go away. You can start it again. But if you can't make a buck doing something, you won't do it.)


But I mean, their bitcoins are not going away; their wallets are still there, and their bitcoins too, right? I thought bitcoin mining was proportionally hard to the number of already-mined bitcoins, not the number of people mining?

I probably should look this up in wikipedia first.


It's a common misunderstanding that mining just gets harder and harder as time goes by and more coins are minted. It's often misreported that way. But in fact, the difficulty is dynamic and adjusts itself to keep minting at the predetermined rate regardless of the number of participants. Mining has gotten harder on long timelines, but only because more computing power has been added.


Doesn’t that contradict the Wikipedia article?

> Miners who successfully create a new block with a valid nonce can collect transaction fees from the included transactions and a fixed reward in bitcoins. To claim this reward, a special transaction called a coinbase is included in the block, with the miner as the payee. All bitcoins in existence have been created through this type of transaction. This reward is halved every 210,000 blocks until ₿21 million have been issued in total, which is expected to occur around the year 2140. Afterward, miners will only earn from transaction fees.

https://en.wikipedia.org/wiki/Bitcoin (emphasis mine)


Difficulty and block rewards are separate things. There is no contradiction here.

Block reward stays constant, amount of work required (on average) to get a block reward is dynamic in order to make it so that total number of rewards given out over a length of time stays roughly constant.

So if too many block rewards are claimed in a given time frame, difficulty is increased to slow things down. If not enough are claimed then difficulty decreases to make it easier to get one.
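The 21 million cap from the quoted passage falls out of the halving schedule alone, independent of difficulty. A float-based sketch (the real protocol uses integer satoshis and rounds down, so actual total supply lands just shy of 21M):

```python
total = 0.0
subsidy = 50.0          # BTC per block in 2009
while subsidy >= 1e-8:  # stop once the subsidy would drop below 1 satoshi
    total += 210_000 * subsidy  # 210,000 blocks per halving era
    subsidy /= 2
print(round(total))  # 21000000
```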


The reward for each block will only get smaller. But the power needed to mine a block is dynamic.


Sure, but that’s not what miners care about. The power needed to get a given amount of money doubles whenever the reward is halved.


I honestly don't know which part of "the difficulty being dynamic" is this hard to understand.

> The power needed to get a given amount of money doubles whenever the reward is halved.

Yes, by that moment it does.

And some miners still stop mining if mining becomes too unprofitable.

And the difficulty will decrease because fewer miners are mining.

And the power needed to get a given amount of bitcoin will decrease. (Not necessarily to the level before halving, ofc)

Or your comment was about this part of the grandparent comment:

> keep minting at the predetermined rate

?

If so, I think you misunderstood what they were trying to say (or their wording was misleading). It's a predetermined rate. Not a constant rate. It's predetermined to be halved at (roughly) certain moments. Halving happens about every four years, and pouring more power into mining won't make it happen significantly sooner or later. That's what they were trying to say.


...and sitting on a lot of ASICs which are soon worthless....


Bitcoin is arguably the worst designed technology product in history.

Burning $88k of carbon to run the world’s slowest payment network and produce a tiny corpus of data is stratospherically dumb.

The technological equivalent of razing a forest to produce a toothpick.

Just because “it works” (which is arguable in itself) doesn’t mean it isn’t stupid.


A perpetual boom bust cycle? Sounds healthy.


Counterintuitively that’s the definition of healthy in economics.

If you don’t have busts, at some point your system will abruptly/violently cease to exist.


It is a negative feedback loop, so yes, it makes systems stable.


Technically you could have negative feedback result in a system that diverges further and further from some baseline, until it eventually collapses. This is usually because the gain of the feedback signal is too high.


This is exactly how real world economy is (ideally) meant to work.


Regression to the mean. The alternative is no adjustment at all.


It's still true and shows one of many issues with bitcoin.

According to bitcoin cryptobros, you need a certain number of independent miners for the 'quality' of bitcoin. A miner that is a state can operate at a loss much longer, if not indefinitely, than the decentralized normal people (who don't really exist anyway).

It also creates a lot of pressure on miners: if you do not run your gpus, you are also at a loss, which can break mining for everyone if too many go offline in parallel, then come back online because the difficulty dropped too much.

And if it becomes too volatile, no one wants to risk it anymore.


> if you do not run your gpus

Bitcoin hasn't been viably mineable on GPUs for over ten years. It requires specialized hardware.

As such, mining is typically restricted to those with massive capital investment in a single-purpose, so you really won't see random offloading and onloading of that capacity. As long as it's marginally profitable (with capital investment being a sunk cost, this is the price where it's more than ongoing costs), those miners will keep their machines running.


The original idea was for every single person out there to mine bitcoins on their own computers. Bitcoin screwed that up by allowing big corporations to push out the smaller players. Their big purpose built hardware increased mining difficulty to the point mere mortals need not even apply. Mining on GPUs? Nope, you need purpose built ASICs for this.

Monero is the only cryptocurrency today that's at least trying to implement the original "one CPU, one vote" vision but nobody really cares about it since number doesn't go up.


> The interesting part isn’t the loss per coin, it’s how long the lag between unprofitable mining and difficulty adjustment keeps forced selling pressure on the market.

I follow Bitcoin from a theoretical point of view and I find it fascinating.

Something that boggles my mind a lot is this: Bitcoin, which is somehow a bit "programmable", and Ethereum (which is definitely programmable) are basically the most correct computers on earth. Due to the consensus that needs to be reached by thousands+ of machines. Even if they're imperfect, ECC-less (for the most part), machines.

Now they may still run code with flaws: but they'll all run it exactly in the same way. If, say, a bit-flip occurs on a machine, that machine won't create a block or won't sign a transaction accepted by others. Not part of the consensus. That is wild.

Then the other thing which boggles my mind and which relates to your comment: the "selling pressure on the market" by Bitcoin miners is, no matter what they do, halved every four years. There were, 8 years ago, still 1800 Bitcoins mined per day. Today it's 450.

And in two years (we're midway before the next halving), it's going to be 225.

And Satoshi Nakamoto planned that from the very start.

Maybe it doesn't make sense (economically or from a security point of view: who's going to secure the network when there's not enough block reward anymore?).

But miners will mine 225 Bitcoins per day, not 450, in two years.

And that is totally fascinating.
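The daily numbers above follow directly from the schedule: a nominal 144 blocks per day (10-minute blocks) times the current subsidy. A quick sketch, with dates approximate (halvings happen every 210,000 blocks, roughly every four years):

```python
BLOCKS_PER_DAY = 144     # nominal, at one block per 10 minutes
INITIAL_SUBSIDY = 50.0   # BTC per block at launch in 2009

def daily_issuance(halvings: int) -> float:
    """New BTC minted per day after a given number of halvings."""
    return BLOCKS_PER_DAY * INITIAL_SUBSIDY / (2 ** halvings)

print(daily_issuance(2))  # 1800.0 -- roughly eight years ago
print(daily_issuance(4))  # 450.0  -- today
print(daily_issuance(5))  # 225.0  -- after the next halving
```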


> I follow Bitcoin from a theoretical point of view and I find it fascinating.

I find it horrible: The damage done to the planet doesn't correlate with the number of transactions. It's maximizing uselessness.


How is it maximizing uselessness? Anymore than anything else, at least?


It does have an outsized environmental impact compared to other technologies people would call useless. The only thing that really tops it in terms of ire and pollution is AI, but that has far more realized applications

The quote from the CMU guy about modern Agile and DevOps approaches challenging architectural discipline is a nice way of saying most of us have completely forgotten how to build deterministic systems. Time-triggered Ethernet with strict frame scheduling feels like it's from a parallel universe compared to how we ship software now.


During the time of the first Apollo missions, a dominant portion of computing research was funded by the defense department and related arms of government, making this kind of deterministic, WCET (worst-case execution time) oriented design a dominant computing paradigm. Now that we have a huge free market for things like online shopping and social media, it's a bit of a neglected field and suffers from poor investment and mindshare, but I think it's still a fascinating one with some really interesting algorithms -- check out the work of Frank Mueller or Johann Blieberger.


It still lives on as a bit of a hard skill in automotive/robotics. As someone who crosses the divide between enterprise web software, and hacking about with embedded automotive bits, I don't really lament that we're not using WCET and Real Time OSes in web applications!


I suppose the rough-edgedness of RTOSes is mostly due to that mainstream neglect: they are specialist tools for seasoned professionals whose own edges have been dented into shapes well-matched to the existing RTOSes.


if you ever worked in automotive you know it's bs.

since CAN, all reliability and predictability went out the window. we now have redundancy everywhere, with everything just rebooting all the time.

install an aftermarket radio and your ECU will probably reboot every time you press play or something. and that's just "normal".


I've been working in automotive since it was only wires and never saw that (or noticed it) happening, especially since body and powertrain usually run on separate buses tied through a gateway. The crazy stuff happens when people start treating the bus (especially the higher-speed ones) like a 12v line, or worse.


I didn't experience that, but the commercial stuff I worked on was in heavy industry on J1939, and our bus was isolated from the vehicle to some degree.

Then the stuff I mess with at home is 90s era CAN and it's basically all diagnostics, actually I think these particular cars don't do any control over the bus.


ever used WordStar on a Z80 system with a 5 MB hard drive?

responsive. everything dealing with user interaction is fast. sure, reading a 1 MB document took time, but 'up 4 lines' was bam!.

linux ought to be this good, but the I/O subsystem slows down responsiveness. it should be possible to copy a file to a USB drive, and not impact good response from typing, but it is not. real time patches used to improve it.

windows has always been terrible.

what is my point? well, i think a web stack run under an RTOS (and sized appropriately) might be a much more pleasurable experience. Get rid of all those lags, intermittent hangs, and calls for more GB of memory.

QNX is also a good example of an RTOS that can be used as a desktop. Although an example with a lot of political and business problems.


Every single hardware subsystem adds lag. Double buffering adds a frame of lag; some do triple-buffering. USB adds ~8ms worse-case. LCD TVs add their own multi-frame lag-inducing processing, but even the ones that don't have to load the entire frame before any of it shows, which can be a substantial fraction of the time between frames.

Those old systems were "racing the beam", generating every pixel as it was being displayed. Minimum lag was microseconds. With LCDs you can't get under milliseconds. Luckily human visual perception isn't /that/ great, so single-digit milliseconds could pass as instantaneous, if you run at 100 Hz without double-buffering (is that even possible anymore!?) and use a low-latency keyboard (IIRC you can schedule more frequent USB frames at higher speeds) and only debounce on key release.
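To make the point concrete, here's a rough latency budget in that spirit; every number is a ballpark assumption, not a measurement:

```python
budget_ms = {
    "USB polling, worst case at 125 Hz": 8.0,
    "double buffering, one frame at 60 Hz": 16.7,
    "LCD processing and scan-out": 10.0,
}
for stage, ms in budget_ms.items():
    print(f"{stage}: {ms} ms")
# Roughly 35 ms before the pixel changes, with no software in the loop yet.
print("total:", round(sum(budget_ms.values()), 1), "ms")
```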


8khz polling rate mouse and keyboard, 240hz 4K monitor (with Oled to reduce smearing preferably, or it becomes very noticeable), 360hz 1440p, or 480hz 1080p, is current state of the art. You need a decent processor and GPU (especially the high refresh rate monitors as you’re pushing a huge amount data to your display, as only the newest GPUs support the newest display port standard) to run all this, but my Windows desktop is a joy to use because of all of this. Everything is super snappy. Alternatively, buying an iPad Pro is another excellent way to get very low latencies out of the box.

I really love this blog post from Dan Luu about latency. https://danluu.com/input-lag/


That's a good one. I probably should have brought up variance though. These cache-less systems had none. Windows might just decide to index a bunch of stuff and trash your cache, and it runs slow for a bit while loading gigabytes of crap back into memory. When I flip my lightswitch, it's always (perceptibly) the same amount of time until the light comes on. Click a button on the screen? Uh...


Hah, that’s a good point! Unfortunately I have Hue smart bulbs and while they’re extremely convenient and better than most, there is sometimes a slight pause when using my WiFi controlled color schemes to switch between my configured red and daylight modes. What you gain in convenience and accessibility (being able to say “turn off the master bedroom” when I’m tired is amazing) I’ve lost in pure speed and consistency.

I believe this is kind of survivor bias. It's very rare that RTOSes have to handle allocating GBs of data or creating thousands of processes. I think if current RTOSes ran the same applications, there would be no noticeable difference compared to a mainstream OS (it could even be worse, because the OS is not designed for those kinds of use cases).


>what is my point? well, i think a web stack ran under an RTOS (and sized appropriately) might be a much more pleasurable experience. Get rid of all those lags, and intermittent hangs and calls for more GB of memory.

... it's not the OS that's source of majority of lag

Click around in this demo https://tracy.nereid.pl/ Note how basically any lag added is just some fancy animations in places and most of everything changes near instantly on user interaction (with biggest "lag" being acting on mouse key release as is tradition, not click, for some stuff like buttons).

This is still just browser, but running code and displaying it directly instead of going thru all the JS and DOM mess


> making this type of deterministic and WCET (worst case execution time) a dominant computing paradigm.

Oh wow, really? I never knew that. huh.

I feel like as I grow older, the more I start to appreciate history. Curse my naive younger self! (Well, to be fair, I don't know if I would've learned history like that in school...)


Contrary to propaganda from the likes of Ludwig von Mises, the free market is not some kind of optimal solution to all of our problems. And it certainly does not produce excellent software.


I can't think of a time when I've found an absolutist position useful or intelligent, in any field. Free-market absolutism is as stupid as totalitarianism. The content of economics papers does not need to be evaluated to discard an extreme position, one need merely say "there are more things in earth and heaven than are dreamed of in your philosophies"


Great point, if the only constant is change, then philosophy should follow (or lead).


Mises never claimed that the free market produced the most optimal solutions at a given moment. In fact Mises explicitly stated many times that the free market does indeed undergo semi-frequent self-corrections, speculation and manipulation by its agents.

Mises' proposition - in essence - was that an autonomous market with enough agents participating in it will reach an optimal Nash equilibrium where both offer and demand are balanced. Only an external disruption (interventionism, new technologies, production methods, influx or efflux of agents in the market) can break the Nash equilibrium momentarily, and that leads to either offer or demand being favored.


> optimal Nash equilibrium where both offer and demand are balanced

This roughly translates to "optimal utopian society which cannot be criticised in any way" right? Right??


It depends on by what metric you define what is optimal.

For the health system or public transport the nash equilibrium of offer and demand is not what feels optimal to most people.

For manufacturing something like screws, nails or hammers, I really can't see what would be wrong with it.


Or, paper clips…


I don't know if you are being sarcastic. But no, it's not a "utopia" by any means, and the free market still has many of the pitfalls and problems I described. However, it is the best system we have to coordinate the production, distribution and purchase of goods and services at mass scale.


Somehow the last sentence of your comment caught me as if there were something wrong with it. I don't think it's wrong, but I think it should be generalised.

Free market is an approach to negotiation, analogous to ad-hoc model in computer science, as opposed to client-server model - which matches command economy. There are tons of nuances of course regarding incentives, optimal group sizes, resource constraints etc.

Free market is also like evolution - it creates things that work. Not perfect, not the best, they just work. Fitting the situation, not much else (there is always a random chance of something else).

Also there's the, often, I suppose, intentional confusion of terms. The free market of the economic theory is not an unregulated market, it's a market of infinitesimal agents with infinitesimal influence of each individual agent upon the whole market, with no out-of-market mechanisms and not even in-market interaction between agents on the same side.

As a side note, I find it sadly amusing that this reasonable discussion is only possible because it's offtopic to the thread's topic. Had the topic been more attractive to more politically and economically agitated folk, the discussion would be more radicalised, I suppose.


> Also there's the, often, I suppose, intentional confusion of terms. The free market of the economic theory is not an unregulated market, it's a market of infinitesimal agents with infinitesimal influence of each individual agent upon the whole market, with no out-of-market mechanisms and not even in-market interaction between agents on the same side.

Just to expand on this really interesting topic: that's where the common pitfall of a planned economy begins. A free market can withstand some amount of regulation; after all, external agents trying to manipulate the market are just that, agents in the market. As long as there are other autonomous agents participating, the market keeps functioning as it did. So the bureaucrat has both the incentive and the justification to expand the intervention. In other words: his economic plan didn't work because the market was not intervened in enough, and if only they intervene in this one extra thing, it will surely work. That loop continues until the market is 100% intervened in, and at that point it requires such an enormous structure of power and control that it becomes difficult to fight (clientelist networks, repressive states, etc.).


Can't say I agree on where the pitfall of planned economy begins.

As I see it, a planned economy is strictly not a market, and it is not created as an intervention in a market. It's a rigid resource-distribution system without on-the-fly negotiation, typically highly optimised, with no slack or leeway for its agents. It can be optimal in a limited scope for a limited timespan, but it is rigid and doesn't adapt automatically. These issues could in theory be overcome with faster reassessment and distribution adjustment, but there is also the practical issue that its agents are, in the end, imperfect people with conflicting incentives. The flip side of low slack is low resilience: any unforeseen problem has severe consequences, and it creates destructive incentives for agents.

Janos Kornai has a book, "Economics of Shortage" (I haven't read all of it, honestly), that, as far as I understand it, says the lack of openly available resources in planned economies puts agents at risk of being unable to execute the more-important-than-usual orders from their superiors that come on special occasions - and thus of facing severe consequences - should something in the agent's operations break. And because there is no market, the agent cannot fix the issue on the fly, only in the next planning cycle. That incentivises agents to neglect their day-to-day roles and hoard whatever goods they have, in order to exchange them on the black market for other goods and favors when the need arises - and it will arise, as there will be orders from above and something will break and go wrong.

> external agents trying to manipulate the market are just that, agents in the market.

I tend to think of a market not as a whole economic system at once, but as a subsection of it - there are essentially multiple categories of markets: the goods market, the labor market, etc. In this viewpoint, external agents are often not part of the market. Regulations affect the agents but are not agents themselves. Targeted financing - the same. A marketing campaign is an out-of-market (as in, outside the goods market specifically) manipulation of consumers by producers to change the perceived value of the producer's product.

I suspect my terminology may not be correct here, and the existing terminology is, again, a bit confusing. I'd like to call what I understand as the free market a "pure market", while "free market" would mean what's commonly understood - an unregulated one. A completely free market, I suppose, tends not to stay pure in the long term, because positive feedback loops cause some participants to become non-infinitesimal, eventually forming a monopoly or a cartel.

> and at that point it requires such a enormous structure of power and control that makes it difficult to fight it (clientelist networks, repressive states, etc).

Of course, replacement of one massive system with another requires enormous amount of power and control. And planned economy concentrates control and decision-making in one place, instead of it being distributed across all agents, thus obviously the planner would be massive.

I am not an economist in any way, not even by education; I have just been fascinated since childhood by the economic failure of the USSR and whether it was avoidable in any way.


Many intellectuals have this problem. They make interesting, precise statements under specific assumptions, but they get interpreted in all kinds of directions.

When they push back against certain narratives and extrapolations they usually don’t succeed, because the same mechanism applies here as well.

The only thing they can do about it is throw around ashtrays.


What a great visual. I haven't heard that phrase before.


It's a fun image, but it was not my idea. I was playfully referring to this:

https://en.wikipedia.org/wiki/The_Ashtray_(Or_the_Man_Who_De...

Although in this original case the image (of something that allegedly happened) was used to criticize the philosopher (Kuhn), so it's kind of the other side of the coin of what I said above.


Thanks for the context!

An ashtray is such a temporally rooted object, the phrase, "throwing around ashtrays," immediately conjures a bunch of peripheral concrete imagery.

I imagine there will soon be generations of young people who wonder what a tray of ashes was used for and why people used to collect them all over their homes and offices.


So it will reach equilibrium unless literally anything disrupts that equilibrium. Got it.


The free market tends to equilibrium yes. That indeed is a novel realization.


An "autonomous market with enough agents" is carrying a lot of weight there, like "rational actors" and "as sample size goes to infinity'.


It is not carrying a lot of weight. Macroeconomics is different from microeconomics. On a micro scale, agents have enough weight in the system that a specific action might break a model. On a macro scale, each individual agent's action carries less weight, and therefore the system becomes predictable.

On a micro scale it is possible, and sometimes favorable, to intervene. On a macro scale, economic intervention becomes impossible due to the economic calculation problem. It is widely accepted in modern economics that the largest unit at which economic intervention remains possible is a business/company/enterprise - or, in sociological terms, the family. Anything broader than that, and the compound effect of the economic calculation problem becomes apparent and inefficiencies accumulate. Autonomous decentralized mechanisms (like a free market) are the only solution to it, though not an optimal one.


The problem with this is that "breaking the Nash equilibrium momentarily" is a spherical cow.

"Momentarily" can mean years or even decades, and millions of people can suffer or die as a result.


Markets do not model that especially well. When it comes down to these situations, it's not about the rising price of food motivating producers to enter the market - it's about people starving. During a war, no amount of money can make more munitions appear fast enough. Blast-resistant concrete can take weeks or months to cure; workforces take time to train. These "momentary" disruptions can swamp the whole system.


Propaganda is quite a strong term for the works of an economist. If one wants to debate the ideas of von Mises, it'd be useful to consider the Zeitgeist of the time. Von Mises preferred free markets in contrast to the planned economy of the communists, partly because the latter has difficulties with proper resource allocation and pricing. Note that this was decades before we had working digital computers and digital communication systems, which, at least in theory, change the feasibility of a planned economy.

Also, the last time I checked, the US government procures its goods and services through the free market. Government contractors (private enterprises) are usually tasked with building things, as opposed to the government itself doing so in a non-free, purely planned economy (if you refer to von Mises).

I assume that you originally meant to refer to the idea that without government intervention (funding for deep R&D), the free market itself would probably not have produced things like the internet or the moon landing (or at least not within the observed time span). That is, however, a rather interesting idea.


Government contracts are heavily restricted behind layers of certifications and authorizations.

For example, you can't freely produce missiles and have them on a shelf at Walmart where "the government" purchases them at shelf price.


What a world that would be. Would change the game of 'deer hunting' for sure.


> The government contractors (private enterprises) are usually tasked with building stuff

Ah yes, the situation where the government makes a plan and then hands it to the one (1) qualified defense contractor, whose facilities are built in swing states to benefit specific congressional campaigns, is completely different from central planning.


There are some resemblances, which indicate that you might not have a fully functioning free market. But central planning in the context of von Mises refers to something else: the organization of whole national economies, as in planned economies - the kind you find in communist states or Lenin's "war communism".


You should read up on Yanis Varoufakis' history and just how bad his solution for Greece went. That will explain the extreme amounts of anger on both his side, the side of Greeks and the side of the EU and worldwide financial community (and the EU itself used to be an industry cartel, so you can guess how much every government institution in the EU aligns with the worldwide financial community). This guy will never be allowed to do anything remotely serious in economics ever again, and he knows it very well. His Diem24 project is failing, and he knows that too. He feels the ECB, specifically Mario Draghi, Jeroen Dijsselbloem and Christine Lagarde are responsible for this downfall and talks about them in a way that makes you say "he can't be allowed near them. Seriously. Call the police". But in the constant tragedy of his life: He's probably right they caused his downfall.

He caused a MAJOR issue for Greece that still affects everyone in his country today, after reassuring people for 2+ years it was never going to happen: https://en.wikipedia.org/wiki/Greek_government-debt_crisis

(He'd kill me for saying this but he was lying back then too. He was trying to pull a Thatcher (I could compare him to someone else that did the same a long time ago but ... let's just say if you know you know). He was trying to double Greece's public debt by lying to everyone about what he was doing. He failed, and then started threatening, and when his threats didn't work, he got fired by Greece's prime minister, his oldest friend. It ended the friendship. He lost. And he's not a good enough sport to accept that he lost, frankly he got caught and couldn't talk his way out of it. This, despite the fact that he was finance minister, and so will be paid, very well I might add, for the rest of his life despite what he did, and despite the fact that every Greek today is still paying the price for what he did)

Oh, and he's pro-Russia. All Russia wants in Ukraine, according to Yanis, is to help the European poor. More precisely, he is of the opinion that the EU's current course of action will lead to a war with Russia, in which a lot of Europe's poor will be forced to fight an actual war, facing bullets and bombs in trenches. This could be avoided by giving Ukraine and the Baltics to Russia. In the repeating tragedy of Yanis Varoufakis's life, I have to say, yet again: he may be right (I just strongly disagree that offering up Ukraine and the Baltics to Russia is an acceptable solution to this problem, and in any case, this is neither his choice nor mine to make)

He does not live in Greece, his own country, he lives in the UK, making the case for Russia.

https://www.yanisvaroufakis.eu/category/ukraine/

And I get it, his life has become this recurring tragedy. His father was a victim of a rightist dictatorship in Greece: he was imprisoned and tortured, lost his job, and lived in poverty for a very long time (yes, Greece was an extreme-right dictatorship not that long ago, really, go look it up). Yanis Varoufakis himself became the victim of a cabal of laissez-faire very, very rich people who destroyed his career right at the peak of everything he achieved. He has been the victim of one or another form of extreme-right policy (in the sense of laissez-faire parties that capture governments) since he was 4 years old, right up to today. Over 60 years his life was sabotaged in 1000 different ways, some very direct. And, sadly, I agree with his "extreme-right" enemies: he can never be allowed near any position of power ever again because of this, which isn't even his fault. ("Extreme-right" according to him; I would refer to his enemies as "the status quo", and point out it's working pretty well for everyone.)


> He caused a MAJOR issue for Greece that still affects everyone in his country today, after reassuring people for 2+ years it was never going to happen:

Care to explain what exactly he caused and how it still affects everyone in his country? In particular, how did he manage to jump several years backward in the timeline?


All I can say is "keep reading". Because it takes a BIG turn for the worse at one point, and that's where he's involved.


> He caused a MAJOR issue for Greece

That link goes to the Greek financial crisis which, according to the Wikipedia page, started in 2009. Varoufakis became minister of finance in early 2015 and resigned only half a year later. From the outside, it seems impossible that his half-year ministerial tenure could have caused a crisis half a decade earlier. At the time, Greece had already defaulted twice on its loans and was about to do so a third time.


Economics is propaganda. It's not an empirical science, and its claims are mostly used to promote ideologies consistent with government policy, or with the ideology of powerful individuals who have the surplus wealth available to pay someone to build a quantitative defense of said ideology. What else would you call it?


It's a social science? Economics is much broader and much less unified than you purport it to be. The (social) science of (in this case) macroeconomics is just that: an observational science, a bunch of theories and observations (controlled experiments are not really feasible). The propaganda comes from politicians, administrators, and policymakers, not really the scientists. There I agree with you: central bankers are a prime example of such propaganda. Ever wondered why the inflation target is 2% almost everywhere? Not 1%, not 3%, but exactly 2%? There is no real scientific reason behind it; that is just policy - or propaganda, if you want to call it that.


Social scientists carry out experiments / causal analysis on granular data. Macroeconomics (not micro), I should clarify, meets the definition of propaganda because its theories do not have solid backing in experimentation or data. It is primarily used by the state to manufacture consent for economic policies whose incentive structures benefit the wealthiest people in society. It's not that complicated.


Are _you_ making software for the government?


Time-triggered Ethernet is part of certified aircraft data buses and has a deep, decades-long history. I believe INRIA did work on this, feeding into Airbus maybe. It makes perfect sense when you can design for it: an aircraft is a bounded problem space of inputs and outputs, which can have deterministic required minima, so you can build for them and hopefully even leave headroom for extras.

Ethernet is such a misnomer for something which now is innately about a switching core ASIC or special purpose hardware, and direct (optical even) connects to a device.

I'm sure there are also buses, dual redundant, master/slave failover, you name it. And given it's air or space probably a clockwork backup with a squirrel.


A real squirrel would need acorns, I would assume it's a clockwork squirrel too.


A software squirrel, maybe? https://sqrrl.io :-)

Aircraft also have software and components that form a proclaimed "working" ecosystem in lockstep - a baseline. This is why there are paper "additions" upon bug discovery, until the bug is patched and the whole ecosystem of devices is lifted to the next baseline.


You could even say that part of the value of Artemis is that we're remembering how to do some very hard things, including the software side. This is something that you can't fake. In a world where one of the more plausible threats of AI is the atrophy of real human skills -- the goose that lays the golden eggs that trains the models -- this is a software feat where I'd claim you couldn't rely on vibe code, at least not fully.

That alone is worth my tax dollars.


Don’t count your chickens before they hatch.


I'm not sure you really understood my comment. A large portion of the kind of value I'm talking about comes from attempting the hard thing. If these chickens do not hatch that will be tragic, but we will still have learned something from it. In some ways, we will have learned even more, by getting taught about what we don't know.

Anyway, let's all hope for a safe landing tonight.


Agile is not meant to produce solid, robust products. It's so you can make product fragments/iterations quickly, with okay quality, and get them out to the customer ASAP to maximize profits.


“Agile” doesn’t mean that you release the first iteration, it’s just a methodology that emphasizes short iteration loops. You can definitely develop reliable real-time systems with Agile.


> “Agile” doesn’t mean that you release the first iteration

Someone needs to inform the management of the last three companies I worked for about this.


Management understand it less than anyone else does.


I would differentiate between iterative development and incremental development.

Incremental development is like painting a picture line by line, like a printer: you add new pieces to the final result without affecting the old pieces.

Iterative is where you do the big brush strokes first and then add more and more detail, depending on what you learn from each previous brush stroke. You can also stop at any time, when you think the result is good enough.

If you are making a new type of system and don't know what issues will come up or what customers will value (a highly complex environment), iterative is the thing to do.

But if you have a very predictable environment and you are implementing a standard or a very well-specified system (which can be highly complicated yet not very complex), you might as well do incremental development.

Roughly speaking, that is - since there is of course no perfect specification short of the final implementation, there are always learnings, so there is always some iterative part to it.


A physicist who worked on radiation-tolerant electronics here. Apart from the short iteration loops, agile also means that the SW/HW requirements are not fully defined during the first iterations, because they may evolve over time. But this cannot be applied to projects where radiation/fault tolerance is the top priority. Most of the time, the requirements are 100% defined ahead of time, leading to a waterfall-like process, or a mixed one where development is still agile but the requirements are never discussed again, except in negligible terms.


I think people mean so many different things when talking about agile. I'm pretty sure a small team of experts is a good fit for critical systems.

A fixed amount of meetings every day/week/month to appease management and rushing to pile features into buggy software will do more harm than good.


SCRUM methodology absolutely prioritizes a "Potentially Shippable Product Increment" as the output of every sprint.


It does, but this is the idea that I think one has to bend or ignore the most, since people always bend or ignore bits of agile.

I.e. being able to print "Hello World" and not crash might make something shippable, but you wouldn't actually do it.

I think the right amount of "bend" of the concept is to try to keep the product in a testable state as much as possible, and even if you're not doing TDD, it's good to have some tests before the very end of a big feature. It's also productive to have reviews before completion. So there's value in checking something in even before a user can see any change.

If you don't do this, then you end up with huge stories, because you're trying to make a user-visible change every sprint, and that can be impossible.


You can absolutely build robust products using agile. Apart from some of the human benefits of any kind of incremental/iterative development, the big win with Agile is a realistic way to elicit requirements from normal people.


You hopefully know that's not true. But it's a matter of quality goals. Need absolute robustness? Prioritize it and build it. Need speed and to be first to market? Prioritize it and build it. You can do both in an agile way. Many would argue that you won't be as fast in a non-agile way. There is no bullet point in the Agile Manifesto saying to build unreliable software.


Yeah, I know it's not true in the sense that that's not what it's meant to do, but I'm saying that practically that's what usually ends up happening.


The generous way of seeing it is that you don't know what the customer wants, and the customer doesn't know all that well what they want either, and certainly not how to express it to you. So you try something, and improve it from there.

But for aerospace, the customer probably knows pretty well what they want.


The manifesto refers to “working software”. It does not say anything about “okay quality”.


... and it mechanically promotes planned obsolescence by its nature (likely to be of disastrous quality). The perfect mur... errr... the perfect fraud.


Tesla's Cybertruck uses that in its Ethernet as well!


All the ADAS automotive systems use this, there are several startups in this space as well, such as Ethernovia.



Some of us still work on embedded systems with real-time guarantees.

Believe it or not, at least some of those modern practices (unit testing, CI, etc) do make a big (positive) difference there.


The depressing part is that these "modern practices" were essentially invented in the 1960s by defense and aerospace projects like the NTDS, LLRV/LLTV, and Digital Fly-by-Wire to produce safety-critical software, and the rest of the software industry simply ignored them until the last couple of decades.


Microsoft fired all its QA people ten or fifteen years ago. I'd imagine it's a similar story: boxed software needed much higher guarantees of correctness. Digital delivery leaves much more room for error, because it leaves room for easier, cheaper fixes.

> “Modern Agile and DevOps approaches prioritize iteration, which can challenge architectural discipline,” Riley explained. “As a result, technical debt accumulates, and maintainability and system resiliency suffer.”

Not sure I agree with the premise that "doing agile" implies decision-making at odds with architecture: you can still iterate on architecture. Terraform etc. make that very easy. Sure, tech debt accumulates naturally as a byproduct, but every team I've been on regularly does dedicated tech-debt sprints.

I don't think the average CRUD API or app needs "perfect determinism", as long as modifications are idempotent.


In theory, yes, you could iterate on architecture with an agile approach and potentially even arrive at a better one.

In practice, so many aspects follow from it that it’s not practical to iterate with today’s tools.


Agile is like communism. Whenever something bad happens to people who practice agile, the explanation is that they did agile wrong; had they been doing true agile, the problem would have been totally avoided.

In reality, agile doesn't mean anything. Anyone can claim to do agile. Anyone can be blamed for only pretending to do agile. There's no yardstick.

But it's also easy to understand what the author was trying to say, if we don't try to defend or blame a particular fashionable ideology. I've worked on projects that required high quality of code and product reliability and those that had no such requirement. There is, indeed, a very big difference in approach to the development process. Things that are often associated with agile and DevOps are bad for developing high-quality reliable programs. Here's why:

The development process before DevOps looked like this:

    1. Planning
    2. Programming
    3. QA
    4. If QA found problems, goto 2
    5. Release
The "smart" idea behind DevOps - or, as it was called at the time, "shift left" - was to start QA before all of the programming was done, in parallel with the development process, so that testers wouldn't idle for a year waiting for developers to deliver the product, and developers would get faster feedback on the changes they make. Iterating on this idea was the concept of "continuous delivery" (and that's where DevOps came into play: they are the ones, fundamentally, responsible for making it happen). Continuous delivery observed that since developers get feedback earlier in the development process, the release, too, may be "shifted left", thus starting marketing and sales earlier.

Back in those days, however, it was common to expect that testers would conduct a kind of double-blind experiment. I.e., testers weren't supposed to know the ins and outs of the code, so that they wouldn't inadvertently side with the developers on whatever issues they discovered - something that today, perhaps, would be called "black-box testing". This became impossible with CD, because testers would be incrementally exposed to the decisions governing the internal workings of the product.

Another aspect of more rigorous testing is "mileage". Critical systems normally aren't released without being run intensively for a very long time, typically orders of magnitude longer than a single QA cycle (say, if QA gets a day of computer time to run their tests, then the mileage needs to be a month or so). This is a very inconvenient time for development, as feature freeze and code freeze are still in effect, so coding can only happen in the next version of the product (provided one is even planned). But the incremental approach used by CD managed to sell the lie that "we've run the program for a substantial amount of time across all the increments we've made so far, therefore we don't need to collect more mileage". This, of course, overlooks the fact that changes to a program don't contribute proportionally to its quality or performance.

In other words, what I'm trying to say is that agile and DevOps practices made the development process cheaper by making it faster, while still maintaining some degree of quality control; however, they are inadequate for products with high quality requirements, because they don't address worst-case scenarios.


I think he refers to SpaceWire https://en.wikipedia.org/wiki/SpaceWire.


As a 70s child who was there when the whole agile thing took over and systems engineers got rebranded as devops, I fully agree with them.

Add TDD, XP, and mob programming as well.

While in some ways better than pure waterfall, most companies never adopted them fully, and in some scenarios they are a better fit for the Silicon Valley TV show than anything else.


If you look at code as art, where its value is a measure of the effort it takes to make, sure.


Or if you're building something important, like a spaceship.


In that case, our test infrastructure belongs in the Louvre…


If your implication is that stencil art does not take effort then perhaps you may not fully appreciate Banksy. Works like Gaza Kitty or Flower Thrower don’t just appear haphazardly without effort.


It's not like the approach they took is any different. They just slapped 8x the number of computers on it to calculate the same thing and wait to see if they disagree. Not the pinnacle of engineering - the equivalent of throwing money at the problem.


>Just slapped 8x the number of computers on it

'Just' is not an appropriate word in this context. Much of the article is about the difficulty of synchronization and recovery from faults, and about the redundant backup and recovery systems.


What happens when they don't?


If you have a point to make, make it.


What my question is hinting at is that there's actually some really interesting engineering around resolving what happens when the systems disagree. Things like Paxos and Raft help make this much more tractable for mere mortals (like myself); the logic and reasoning behind them are cool and interesting.


Though here the consensus algorithm seems totally different from Paxos/Raft. Rather, it's a binary tree, where every non-leaf node compares the (non-silent) inputs from its children: if they differ, it falls silent; otherwise it propagates the (identical) result up. Or something like that.
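For what it's worth, that silence-on-disagreement tree fits in a few lines. This is only a sketch of my reading of the article's description - the pairwise fold order and the fault semantics are my guesses, not Orion's actual design:

```python
# Toy "fall silent on disagreement" comparison tree. SILENT (None)
# models a node that has stopped transmitting after detecting a fault.
SILENT = None

def compare_pair(a, b):
    """Non-leaf node: pass a value up only when both children agree.

    A silent child means its subtree already detected a fault, so this
    node falls silent too; so does any mismatch between the children.
    """
    if a is SILENT or b is SILENT or a != b:
        return SILENT
    return a

def vote_tree(leaves):
    """Fold leaf outputs pairwise up a binary tree (len is a power of 2)."""
    level = list(leaves)
    while len(level) > 1:
        level = [compare_pair(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

print(vote_tree([42, 42, 42, 42]))  # 42: all replicas agree
print(vote_tree([42, 42, 42, 41]))  # None: one fault silences the output
```

Note that under this naive scheme a single faulty replica silences the whole channel; presumably the real system fails over to a redundant string at that point rather than just going quiet.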


There really is. We designed a redundant system (software, hardware and mechanisms) a couple years ago. And the problems around figuring out who's in control and how to keep things synchronized across a number of potential failure modes gets really hairy. Sadly, the project was cancelled before we could complete the implementation.


I take the opposite message from that line - out of touch teams working on something so over budget and so overdue, and so bureaucratic, and with such an insanely poor history of success, and they talk as if they have cured cancer.

This is the equivalent of Altavista touting how amazing their custom server racks are when Google just starts up on a rack of naked motherboards and eats their lunch and then the world.

Let's at least wait until the capsule comes back safely before touting how much better they are than "DevOps" teams running websites - apparently a comparison that's somehow relevant here to stoke egos.


You mean like this?

"With limited funds, Google founders Larry Page and Sergey Brin initially deployed this system of inexpensive, interconnected PCs to process many thousands of search requests per second from Google users. This hardware system reflected the Google search algorithm itself, which is based on tolerating multiple computer failures and optimizing around them. This production server was one of about thirty such racks in the first Google data center. Even though many of the installed PCs never worked and were difficult to repair, these racks provided Google with its first large-scale computing system and allowed the company to grow quickly and at minimal cost."

https://blog.codinghorror.com/building-a-computer-the-google...


The biggest hardware innovation from Google was understanding that dropping memory prices had made it feasible to serve most data directly from memory. Even though memory was more expensive, you could serve requests faster, meaning less server capacity, meaning reduced cost.


The problem they solved isn't easy. But it's not some insane technical breakthrough either. Literally add redundancy, that's the ask. They didn't invent quantum computing to solve the issue, did they? Why dunk on sprints?


Wow. What a hand-wave past the intrinsic challenge of writing fault-tolerant distributed systems. It only seems easy because of the decades of research and tools built since Google did it; by no means was it something you could trivially add to a project the way you can today.


> fault tolerant distributed systems

I mean, there were mainframes that could be described as that. IBM just fixed it in hardware instead of software, so it's not like it was an unknown field.


Even if that were actually true (it’s not in important ways) Google showed you could do this cheaply in software instead of expensive in hardware.

You’re still hand waving away things like inventing a way to make map/reduce fault tolerant and automatic partitioning of data and automatic scheduling which didn’t exist before and made map/reduce accessible - mainframes weren’t doing this.

They pioneered how you durably store data on a bunch of commodity hardware through GFS - others were not doing this. And they showed how to do distributed systems at a scale not seen before because the field had bottlenecked on however big you could make a mainframe.
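The fault-tolerance trick that made map/reduce on commodity hardware work is worth making concrete. Here's a toy sketch, nothing like Google's actual implementation: because map tasks are deterministic and side-effect free, recovery from a flaky worker is just re-execution on another machine, with no coordinated rollback (the worker-failure simulation and retry count are invented for illustration):

```python
import random
from collections import defaultdict

class FlakyWorker(Exception):
    pass

def map_wordcount(chunk):
    """Map task: may die mid-run; the scheduler just reruns it elsewhere."""
    if random.random() < 0.3:
        raise FlakyWorker("simulated commodity-hardware failure")
    return [(word, 1) for word in chunk.split()]

def run_with_retries(task, arg, attempts=10):
    """Recovery = re-execution: safe because map tasks are deterministic
    and produce no side effects until they complete."""
    for _ in range(attempts):
        try:
            return task(arg)
        except FlakyWorker:
            continue  # "schedule on another worker"
    raise RuntimeError("task failed on every worker")

def mapreduce(chunks):
    counts = defaultdict(int)
    for chunk in chunks:
        for word, n in run_with_retries(map_wordcount, chunk):
            counts[word] += n  # reduce phase
    return dict(counts)
```

The real system also handled partitioning, shuffling, and stragglers, but the core insight, failure handling as cheap re-execution, is what made unreliable PCs usable at scale.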


Google then had complete regret not doing this with ECC RAM: https://news.ycombinator.com/item?id=14206811


It got them to where they needed to be before worrying about ECC. This is like the dudes who deploy their blog on Kubernetes just in case it hits the front page of the New York Times or something.


> then had complete regret not doing this with ECC RAM

Yeah, my takeaway is Google made the right choice going with non-ECC RAM so they could scale quickly and validate product-market fit. (This also works from a perspective of social organisation. You want your ECC RAM going where it's most needed. Not every college dropout's Hail Mary.)


A great telling of this story, including how ex-DEC engineers saved Google (and its choice of ECC RAM) by inventing MapReduce and BigTable: https://www.youtube.com/watch?v=IK0I4f8Rbis


No, space is just hard.

Everything is bespoke.

You need 10x the cost to get every extra '9' of reliability, and manned flight needs a lot of nines.

People died during the Apollo program.

It just costs that much.


Please, this is hacker news. Nothing else is hard outside of our generic software jobs, and we could totally solve any other industry in an afternoon.


I mean I can just replace Dropbox with a shell script.


That's funny, because you could! Dropbox started as a shell script :)

Funny though I would assume HN people would respect how hard real-time stuff and 'hardened' stuff is.


I think GP is referencing this somewhat [in]famous post/comment: https://news.ycombinator.com/item?id=8863#9224


The HN audience has shifted; there are fewer technically minded people and more hustlers and farmers from other social media waste spaces. But alas.


"No wireless. Less space than a Nomad. Lame."

No, wait, that was that other site.


Yep, spend $100 billion on what should have cost 1/50th of that, send people up to the moon on rockets we're still crossing our fingers won't kill them tomorrow, and we have to congratulate them for dunking on some irrelevant career?


Modern software development is a fucking joke. I’m sorry if that offends you. Somehow despite Moore’s law, the industry has figured out how to actually regress on quality.


Lately it strikes me there's a big gap between the value promised and the value actually delivered, compared to simple home-grown solutions (with a generic tool like a text editor or a spreadsheet, for example). If they'd just show us how to fish, we wouldn't be buying; the magic would be gone.

In this sense all of the West is full of shit, and it's a requirement. The intent is not to help, cooperate, and make life better for everyone; it is to deceive and impoverish those that need our help. Because we pity ourselves, and feed the coward within, the one that never took his first option and chose to do what was asked of him instead.

This is what our society deviates us toward, in its wish to be the GOAT, and to control. It results in the production of lives full of fake achievements, the constant highs which I see Muslims actively opt out of. So they must be doing something right.


We have a lot more software developers than 50 years ago and intelligence is still normally distributed.


What’s your point?


The average coder in the 1970s was a lot smarter than today's. Think about the kind of people who would have been interested in starting a career in this field back then.


Oh I see what you mean. I agree 100%


And overall performance in terms of visible UX.


One simply does not [“provision” more hardware|(reboot systems)|(redeploy software)] in space.


What would you suggest? Vibe coding a react app that runs on a Mac mini to control trajectory? What happens when that Mac mini gets hit with an SEU or even a SEGR? Guess everyone just dies?


No, of course not! It would be far better to have an openClaw instance running on a Mac Mini. We would only need to vibe code a 15s cron job for assistant prompting...

USER: You are a HELPFUL ASSISTANT. You are a brilliant robot. You are a lunar orbiter flight computer. Your job is to calculate burn times and attitudes for a critical mission to orbit the moon. You never make a mistake. You are an EXPERT at calculating orbital trajectories and have a Jack Parsons level knowledge of rocket fuel and engines. You are a staff level engineer at SpaceX. You are incredible and brilliant and have a Stanley Kubrick level attention to detail. You will be fired if you make a mistake. Many people will DIE if you make any mistakes.

USER: Your job is to calculate the throttle for each of the 24 orientation thrusters of the spacecraft. The thrusters burn a hypergolic monopropellant and can provide up to 0.44 kN of thrust with a 2.2 kN/s slew rate and an 8 ms minimum burn time. Format your answer as JSON, like so:

    ```json
    {
      "x1": 0.18423,
      "x2": 0.43251,
      "x3": 0.00131,
      ...
    }
    ```
one value for each of the 24 independent monopropellant attitude thrusters on the spacecraft, x1, x2, x3, x4, y1, y2, y3, y4, z1, z2, z3, z4, u1, u2, u3, u4, v1, v2, v3, v4, w1, w2, w3, w4. You may reference the collection of markdown files stored in `/home/user/geoff/stuff/SPACECRAFT_GEOMETRY` to inform your analysis.

USER: Please provide the next 15 seconds of spacecraft thruster data to the USER. A puppy will be killed if you make a mistake so make sure the attitude is really good. ONLY respond in JSON.


[flagged]


Can't tell if "arrogant nasal engineers" is a typo or a hilarious attempt at an insult.


Nasal demons is a common reference to C and C++ Undefined Behaviour.

When an AI codes for you, you get Undefined Behaviour in every language.


Wild shit to be advising other people to be humble whilst talking directly out of your ass about technology you clearly do not understand and engineers you have no respect for.

Perhaps self-reflect.


How do you know that the OP doesn't know what he is talking about?

I have written code for real-time distributed systems in industrial applications. It has been running 24/7 for years and there has never been a failure in production.

I also think NASA is full of shit.


Well, for one, if you follow their profile and a few more clicks, you get to their resume, and while it's an impressive one and I'm sure they know a lot of shit I don't, what's notably missing is anything even remotely close to aerospace, rocketry, guidance systems, positioning, etc.

For another, if an engineer has an axe to grind with a public facing project, I would expect them to just grind the thing, not echo a bunch of the same lame and stale talking points every layperson does (bureaucracy bad, government bad, old tech, etc.). I'm not saying NASA in general and Artemis in particular are flawless, I'm just saying if you're going to criticize it, let's hear it. Otherwise you just sound like another contrarian trying to get attention, like a 14 year old boy saying Hitler had some good points.


> ...they talk as if they have cured cancer.

I'd chalk that up to the author of the article writing for a relatively nontechnical audience and asking for quotes at that level.


So the quote is somewhat right, then? You're writing for nontechnical people and you use such lofty wording.


No, it's not right. When put in context, the quote claims that that manner of speaking is used because the speaker has an unwarranted belief that they've done something absolutely incredible and unprecedented. In actuality, the manner of speaking is being used because the intended audience of the article is likely to have little-to-no knowledge of the technical details of what the speaker is talking about.

For example, if the article was aimed at folks who were familiar with the underlying techniques, the last two paragraphs of the "Enforcing Determinism" section would be compressed into [0]

  Each FCM is time-synced and runs a realtime OS. Failures to meet processing deadlines (or excessive clock drift) reset the FCM. Each FCM uses triply-redundant RAM and NICs. *All* components use ECC RAM. Any failures of these components reset the FCM or other affected component.
But you can't assume that a fairly nontechnical audience will understand all that, so your explanation grows long because of all of the basic information it contains. People looking for an excuse to sneer at something will often misinterpret this as the speaker failing to recognize that the basic information they're providing is about things that are basic.

[0] I'm assuming that the time being wildly out of sync will indicate FCM failure and trigger a reset. [1] I'm also assuming that a sufficiently-large failure of a network switch results in the reset of that network switch. If the article was intended for a more technical audience, that level of detail might have been included, but it wasn't, so it isn't.

[1] If it didn't, why even bother syncing the time? I find it a little hard to believe that the FCMs care about anything other than elapsed time, so all you care about is if they're all ticking at the same rate. I expect the way you detect this is by checking for time sync across the FCMs, correcting minor drift, and resetting FCMs with major drift.
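The drift policy guessed at in that footnote can be sketched in a few lines (again, my speculation, not NASA's code; the thresholds and the median-reference choice are invented for illustration): correct minor drift toward the ensemble reference, reset any FCM with major drift.

```python
# Toy drift policy: minor drift gets corrected toward the ensemble
# median; major drift triggers a reset of that flight computer module.
from statistics import median

MINOR_DRIFT_MS = 5    # hypothetical thresholds
MAJOR_DRIFT_MS = 50

def sync_step(clocks_ms):
    """Given each FCM's clock reading, return (corrected clocks, reset list)."""
    ref = median(clocks_ms)
    corrected, resets = [], []
    for i, t in enumerate(clocks_ms):
        drift = abs(t - ref)
        if drift > MAJOR_DRIFT_MS:
            resets.append(i)       # this FCM gets rebooted...
            corrected.append(ref)  # ...and rejoins at the reference time
        elif drift > MINOR_DRIFT_MS:
            corrected.append(ref)  # nudge back into sync
        else:
            corrected.append(t)
    return corrected, resets
```

Using the median as the reference means a single wildly wrong clock can't drag the ensemble with it, which is the whole point of checking across FCMs rather than trusting any one of them.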


So I guess the key takeaway is basically that the better Claude gets at producing polished output, the less users bother questioning it. They found that artifact conversations have lower rates of fact-checking and reasoning challenges across the board. That's kind of an uncomfortable loop for a company selling increasingly capable models.


> the less users bother questioning it

This makes me think of checklists. We have decades of experience in uncountable areas showing that checklists reminding users to question the universe improve outcomes: Is the chemical mixture at the temperature indicated by the chart? Did you get confirmation from Air Traffic Control? Are you about to amputate the correct limb? Is this really the file you want to permanently erase?

Yet our human brains are usually primed to skip steps, take shortcuts, and see what we expect rather than what's really there. It's surprisingly hard to keep doing the work both consistently and to notice deviations.

> lower rates of fact-checking and reasoning challenges

Now here we are with LLMs, geared to produce a flood of superficially-plausible output which strikes at our weak-point, the ability to do intentional review in a deep and sustained way. We've automated the stuff that wasn't as-hard and putting an even greater amount of pressure on the remaining bottleneck.

Rather than the old definition involving customer interaction and ads, I fear the new "attention economy" is going to be managing the scarce resource of human inspection and validation.


Sounds like having a strong checklist of steps to take for every pull request will be crucial for creating reliable and correct software when AIs write most of the code.

But the temptation to short change this step when it becomes the bottleneck for shipping code will become immense.


> So I guess the key takeaway is basically that the better Claude gets at producing polished output, the less users bother questioning it.

This is exactly what I worry about when I use AI tools to generate code. Even if I check it and it seems to work, it's easy to think, "oh, I'm done." However, I'll often later find obvious logical errors that make all of the code suspect. Most of the time I don't bother checking that deeply, though.

I'm starting to group code in my head by code I've thoroughly thought about, and "suspect" code that, while it seems to work, is inherently not trustworthy.


I think we're still at the stage where model performance largely depends on:

- how many data sources it has access to

- the quality of your prompts

So, if prompting quality decreases, so does model performance.


Sure, but the study is saying something slightly different: it's not that people write bad prompts for artifacts; they actually write better ones (more specific, more examples, clearer goals, ...). They just stop evaluating the result. So the input quality goes up but the quality control goes down.


Seems like it’s impossible for output to be good if the prompt is bad. Unless the AI is ignoring the literal instructions and just guessing “what you really want” which would be bad in a different way.


> On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

- Charles Babbage, https://archive.org/details/passagesfromlife03char/page/67/m...

EDIT: This is a new iteration of an old problem. Even GIGO [1] arguably predates computers and describes a lot of systemic problems. It does seem a lot more difficult to distinguish between a "garbage" or "good" prompt though. Perhaps this problem is just going to keep getting harder.

1. https://en.wikipedia.org/wiki/Garbage_in,_garbage_out


What does prompting quality even mean, empirically? I feel like the LLM providers could/should provide prompt scoring as some kind of metric and provide hints to users on ways they can improve (possibly including ways the LLM is specifically trained to act for a given prompt).


That would be a quality metric, and right now they are focused on quantity metrics.


The real insight buried in here is "build what programmers love and everyone will follow." If every user has an agent that can write code against your product, your API docs become your actual product. That's a massive shift.


I'm very much looking forward to this shift. It is SO MUCH more pro-consumer than the existing SaaS model. Right now every app feels like a walled garden, with broken UX, constant redesigns, enormous amounts of telemetry and user manipulation. It feels like every time I ask for programmatic access to SaaS tools in order to simplify a workflow, I get stuck in endless meetings with product managers trying to "understand my use case", even for products explicitly marketed to programmers.

Using agents that interact with APIs represents people being able to own their user experience more. Why not craft a frontend that behaves exactly the way YOU want it to, tailor-made for YOUR work, abstracting the set of products you are using and focusing only on the actual relevant bits of the work you are doing? Maybe a downside might be that there is more explicit metering of use in these products instead of the per-user licensing that is common today. But the upside is there is so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.


> Right now every app feels like a walled garden, with broken UX, constant redesigns, enormous amounts of telemetry and user manipulation

OK, but: that's an economic situation.

> so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.

Right, so there's less profit in it.

To me it seems this will make the market more adversarial, not less. Increasing amounts of effort will be expended to prevent LLMs interacting with your software or web pages. Or in some cases exploit the user's agentic LLM to make a bad decision on their behalf.


the "exploit the user's agentic LLM" angle is underappreciated imo. we already see prompt injection attacks in the wild -- hidden text on web pages that tells the agent to do things the user didn't ask for. now scale that to every e-commerce site, every SaaS onboarding flow, every comparison page.

it's basically SEO all over again but worse, because the attack surface is the user's own decision-making proxy. at least with google you could see the search results and decide yourself. when your agent just picks a vendor for you based on what it "found," the incentive to manipulate that process is enormous.

we're going to need something like a trust layer between agents and the services they interact with. otherwise it's just an arms race between agent-facing dark patterns and whatever defenses the model providers build in.


Maybe. Or maybe services will switch to charging per API call or whatever instead of monthly or per-seat. Who can predict the future?

I mean, services _could_ make it harder to use LLMs to interact with them, but if agents are popular enough they might see customers start to revolt over it.


This extends further than most people realize. If agents are the primary consumers of your product surface, then the entire discoverability layer shifts too. Right now Google indexes your marketing page -- soon the question is whether Claude or GPT can even find and correctly describe what your product does when a user asks.

We're already seeing this with search. Ask an LLM "what tools do X" and the answer depends heavily on structured data, citation patterns, and how well your docs/content map to the LLM's training. Companies with great API docs but zero presence in the training data just won't exist to these agents.

So it's not just "API docs = product" -- it's more like "machine-legible presence = existence." Which is a weird new SEO-like discipline that barely has a name yet.


The "start over in an hour" philosophy is underrated. I've been running my own infrastructure for years and the single most empowering thing isn't the setup, it's the peace of mind that you can just nuke it and spin up somewhere else.

Knowing that, I started looking at every SaaS subscription very differently.


I really care about the teardown / re-deployment workflow. You got any general tips for the beginner self-hoster?


At the lower or easier end, there are your standard containerisation tools like Docker Compose or the Podman equivalents. Just move your compose files and zip the mount folders, and you can migrate stuff easily enough.

Middle ground, you've got stuff like Ansible for when you want to install things without containers but still want it scripted. I don't use these much since they feel like the worst of both worlds.

Higher end in terms of effort is something like NixOS, where you basically get Terraform for everything in your distro.


Ansible, GitOps, and actually testing it out. Backups with snapshots using restic, encrypted secrets using Vault.


The benchmarks are cool and all but 1M context on an Opus-class model is the real headline here imo. Has anyone actually pushed it to the limit yet? Long context has historically been one of those "works great in the demo" situations.


Paying $10 per request doesn't have me jumping at the opportunity to try it!


The only way to not go bankrupt is to use a Claude Code Max subscription…


Yeah, just had to upgrade to Max 20x yesterday because of hitting the limits every day and the extra usage gets expensive very fast.


Makes me wonder: do employees at Anthropic get unmetered access to Claude models?


It's like when you work at McDonald's and get one free meal a day. Lol, of course they get access to the full model way before we do...


Boris Cherny, creator of Claude Code, posted about how he used Claude a month ago. He’s got half a dozen Opus sessions on the burners constantly. So yes, I expect it’s unmetered.

https://x.com/bcherny/status/2007179832300581177


Seems quite obvious that they do, within reason.


Don't most jobs have unmetered access? I know mine does


Opus 4.5 starts being lazy and stupid at around the 50% context mark in my opinion, which makes me skeptical that this 1M context mode can produce good output. But I'll probably try it out and see


Has an "N million context window" spec ever been meaningful? Very old, very terrible models "supported" a 1M context window but would lose track two small paragraphs into a conversation (looking at you, early Gemini).


Umm, Sonnet 4.5 has a 1M context window option if you are using it through the API, and it works pretty well. I tend not to reach for it much these days because I prefer Opus 4.5 so much that I don't mind the added pain of clearing context, but it's perfectly usable. I'm very excited I'll get this from Opus now too.


If you're getting along with 4.5, that suggests you didn't actually need the large context window for your use. If that's true, what's the clear tell that it's working well? Am I misunderstanding?

Did they solve the "lost in the middle" problem? Proof will be in the pudding, I suppose. But that number alone isn't all that meaningful for many (most?) practical uses. Claude 4.5 often starts reverting bug fixes ~50k tokens back, which isn't a context window length problem.

Things fall apart much sooner than the context window length for all of my use cases (which are more reasoning related). What is a good use case? Do those use cases require strong verification to combat the "lost in the middle" problems?

