Hacker News | js8's comments

Actually, it is. You have been blinded by capitalism to consider it ethical.

The tribes usually treat the members as a family. While kicking someone from a tribe can happen, it's considered to be a harsh punishment.

In a tribe, when hard times come, people usually redistribute. That's a normal, human way of dealing with that situation. Not a layoff.

The other aspect is economic crises. When a central bank decides to increase interest rates, it decreases lending to new investments in favor of lower inflation. This can lead to layoffs, instead of having inflation inflicted on everyone (especially the rich with huge savings). So that decision essentially means some random guys get kicked out of economic (and societal) participation in order to prevent more redistribution of existing wealth.

If you think about it, yes, layoffs are deeply immoral. But we can understand why they happen under capitalism, as a sort of big tragedy of the commons.


It's a job. Not a tribe.

The role an employer plays in societies varies from culture to culture, but note that in many cultures, it is "just a job".


Yes, that's what people tell themselves to deal with it psychologically. That it's just a job, not a community, and you'd better not make friends in the workplace (despite spending the majority of your life there). And that when you're unemployed, life just goes on, as if it doesn't mean much.

Like when a traumatised kid who was never loved by their parents concludes that life is harsh and love doesn't exist, so better be tough.


> Yes, that's what people tell themselves to deal with it psychologically. That it's just a job, not a community, and you better not make friends in the workplace (despite spending majority of your life there). And that when you're unemployed, life just goes on, as if it doesn't mean much.

That's a lot of stuff you're saying. Not what I'm saying.


Sure. Also the profitability of a company is just a number, shareholder dividends are just fiduciary fictions, and the company hierarchy is just an arbitrary title attaching this or that person to this or that loosely defined role.

Drama is just in the heads of people melted into the ambient narrative, sure.


My employer is not my “tribe”. That is crazy. We have a contract saying I do X units of work and they pay me Y in return. Either of us can end it at any time.

At least this is the case in the US. What you are saying might be true in other cultures.


What we have in the USA is not necessarily the final and best form of all interactions, as much as it pains me to say it.

Most people's reactions to large-scale movements like this seem to imply that we feel there should be something more than a simple "money duty" between employer and employee, and we seem to also have respect for companies that act that way (e.g., some Japanese companies perhaps, or baseball teams keeping a sick player on the payroll so they get healthcare even though they never play another game).

Attempting to realize that duty while at the same time offloading it to the state or the family may be an aspect of the failing.


And yet, employers love to use the "we're a family", "we're a team", and other such messaging, especially in the tech industry. They elide the transactional nature of the entire relationship.

> layoffs are deeply immoral

It's no more immoral than you deciding to buy from Safeway, even though you'd been buying from Fred Meyer before.


Safeway won’t starve and die if I decide to buy from Fred Meyer. You really don’t see that an individual is not on equal footing with a multibillion-dollar company? It is absolutely immoral. And I’m not even talking about charity; those people were hired and did actual work for the fucking trillion-dollar company.

Several grocery stores in Seattle have closed recently. The same with local Starbucks outlets. Locations that don't make money get closed, even if the rest of the company is doing well.

Also, employees can quit anytime, no notice required. Nobody is obliged to work.


> Several grocery stores in Seattle have closed recently. The same with local Starbucks outlets. Locations that don't make money get closed, even if the rest of the company is doing well.

Irrelevant to the topic at hand. Don’t give me a sob story about mom and pop shop, we’re talking about a trillion dollar company.

> Also, employees can quit anytime, no notice required. Nobody is obliged to work.

Okay? What’s your point?


> Don’t give me a sob story about mom and pop shop

The grocery stores were run by national chains. Starbucks is global.

> What’s your point?

It's symmetric. Companies employ at will, and workers work at will.


> The grocery stores were run by national chains. Starbucks is global.

So you’re confirming my point that billion-dollar companies (like Starbucks killing mom and pop shops) have disproportionately more power over individuals, or what are you saying?

> It's symmetric. Companies employ at will, and workers work at will.

Workers don’t work at will. Last time I checked, UBI is not there, so workers work to pay the bills and put food on the table.


Yeah, because Marxist systems "take such good care" of people in comparison.

Marxist systems don’t exist in real life.

They do in some people's heads, as a utopian dream.

No it wasn't. Look at Joseph Stiglitz (Globalization and Its Discontents) and Ha-Joon Chang (Bad Samaritans, Kicking Away the Ladder) for counter-examples.

> There's no doubt, I think, testing will remain important and possibly become more important with more AI use, and so better testing is helpful, PBT included.

Given the Curry-Howard isomorphism, couldn't we ask AI to directly prove the property of the binary executable under the assumption of the HW model, instead of running PBTs?

By no means do I want to dismiss PBTs - but it seems that this could be both faster and more reliable.


Proofs are a form of static analysis. Static analysis can find interesting bugs, but how a system behaves isn't purely a property of source code. It won't tell you whether the code will run acceptably in a given environment.

For example, if memory use isn't modelled, it won't tell you how big the input can be before the system runs out of memory. Similarly, if your database isn't modelled then you need to test with a real database. Web apps need to test with a real web browser sometimes, rather than a simplified model of one. Databases and web browsers are too complicated to build a full-fidelity mathematical model for.

When testing with real systems there's often the issue that the user's system is different from the one you use to test. You can test with recent versions of Chrome and Firefox, etc, which helps a lot, but what about extensions?

Nothing covers everything, but property tests and fuzzers actually run the code in some test environment. That's going to find different issues than proofs will.


> Databases and web browsers are too complicated to build a full-fidelity mathematical model for.

I disagree - thanks to the Curry-Howard isomorphism, the full-fidelity mathematical model of a database or web browser is the binary itself.

We could have compilers provide theorems (with proof) of correctness of the translation from source to machine code, and library functions could provide useful theorems about the resource use.

Then, if the AI can reason about the behavior of the source code, it can also build the required proof of correctness along with it.


I'm not sure either of us really knows how Curry-Howard works, but my understanding is that it's a compile-time type-system thing. In certain proof languages, a function that returns an int proves that an int exists (the type is inhabited). And that's just not very interesting - you need more sophisticated types than we commonly use. Also, it only works for total functions, so it doesn't hold in most ordinary programming languages.

So I'm skeptical that the code we write in ordinary programming languages proves anything interesting. Why do you think that?


> thanks to Curry-Howard isomorphism, the full-fidelity mathematical model of a database or web browser are their binaries themselves.

Maybe I'm misunderstanding you, but Curry-Howard is a mapping between mathematical jargon and programming jargon, where e.g. "this is a proof of that proposition using foo logic" maps to "this program has that type in programming language foo".

I don't see how that makes "binaries" a "full-fidelity mathematical model": compilation is (according to Curry-Howard) translating a proof from one system of logic to another. For a binary, the resulting system of logic is machine code, which is an absolutely terrible logic: it has essentially one type (the machine word), which makes every proposition trivial; according to Curry-Howard, your database binary is proof of the proposition corresponding to its type; since the type of every binary is just "some machine words", the proposition that your database binary is a "full-fledged mathematical model" of is essentially just "there exists a machine word". Not very useful; we could optimise it down to "0", which is also a proof that there exists a machine word.

If we assume that you want to prove something non-trivial, then the first thing you would need to do is abstract away from the trivial logic of machine code semantics, by inferring some specific structures and patterns from that binary, then developing some useful semantics which captures those patterns and structures. Then you can start to develop non-trivial logic on those semantics, which will let you state worthwhile propositions. If we apply the Curry-Howard lens to that process, it corresponds to... decompilation into a higher-level language!

tl;dr Curry-Howard tells us that binaries are literally the worst possible representation we could hope for.


> Given Curry-Howard isomorphism, couldn't we ask AI to directly prove the property of the binary executable under the assumption of the HW model, instead of running PBTs?

Yes, in principle. Given unlimited time and a plentiful supply of unicorns.

Otherwise, no. It is well beyond the state of the art in formal proofs for the general case, and it doesn't become possible just because we "ask AI".

And unless you provide a formal specification of the entire set of behavior, it's still not much better than PBT -- the program is still free to do whatever the heck it wants that doesn't violate the properties formally specified.


And how do you know if it has proven the property you want, instead of something that's just complicated looking but evaluates to true?

The AI would build a proof of correctness, which would be then verified in a proof checker (not AI).

And how do you prove that the proof of correctness is not just a proof that 1=1? LLMs "cheating" on things is rather common.

> AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

Of course! But that's what makes them so powerful. In 99% of cases that's what you want - something that is conventional.

The AI can come up with novel things if it has agency and can learn on its own (using e.g. RL). But we don't want that in most use cases, because it's unpredictable; we want a tool instead.

It's not true that this lack of creativity implies lack of intelligence or critical thinking. AI clearly can reason and be critical, if asked to do so.

Conceptually, the breakthrough of AI systems (especially in coding, but it's to some extent true in other disciplines) is that they have an ability to take a fuzzy and potentially conflicting idea, and clean up the contradictions by producing a working, albeit conventional, implementation, by finding less contradictory pieces from the training data. The strength lies in intuition of what contradictions to remove. (You can think of it as an error-correcting code for human thoughts.)

For example, if I ask AI to "draw seven red lines, perpendicular, in blue ink, some of them transparent", it can find some solution that removes the contradictions from these constraints, or ask clarifying questions about the domain, so it can decide which contradictory statements to drop.

I actually put it to Claude and it gave a beautiful answer:

"I appreciate the creativity, but I'm afraid this request contains a few geometric (and chromatic) impossibilities: [..]

So, to faithfully fulfill this request, I would have to draw zero lines — which is roughly the only honest answer.

This is, of course, a nod to the classic comedy sketch by Vihart / the "Seven Red Lines" bit, where a consultant hilariously agrees to deliver exactly this impossible specification. The joke is a perfect satire of how clients sometimes request things that are logically or physically nonsensical, and how people sometimes just... agree to do it anyway.

Would you like me to draw something actually drawable instead? "

This clearly shows that AI can think critically and reason.


> This is, of course, a nod to the classic comedy sketch by Vihart

As a big fan of Vi Hart I was surprised to read that she wrote or was involved in that "classic comedy sketch".

As far as I can tell, after a few minutes searching, she was not.


That shows it knew this bit of satire more than anything. Also, the problem as stated isn't actually constrained enough to be unsolvable: https://youtu.be/B7MIJP90biM

Feel free to ask Claude about any other contradictory request. I use Claude Code and it often asks clarifying questions when it is unsure how to implement something, or autocorrects my request if something I am asking for is wrong (like a typo in a filename). Of course sometimes it misunderstands; then you have to be more specific and/or divide the work into smaller pieces. Try it if you haven't.

I have. In fact, I've been building my own coding agent for 2 years at this point (i.e. before claude code existed). So it's fair to say I get the point you're making and have said all the same stuff to others. But this experience has taught me that LLMs, in their current form, will always have gaps: it's in the nature of the tech. Every time a new model comes out, even the latest opus versions, while they are always better, I always eventually find their limits when pushing them hard enough and enough times to see these failure modes. Anything sufficiently out of distribution will lead to more or less nonsensical results.

The big flagship AI models aren't just LLMs anymore, though. They are also trained with RL to respond better to user requests. Reading a lot of text is just one technique they employ to build the model of the world.

I think there are three different types of gaps, each with different remedies:

1. A definition problem - if I say "airplane", what do I mean? Probably something like a jumbo jet or a Cessna, less likely an SR-71. This is something we can never perfectly agree on, and AI will always be limited to the best definition available to it. And if there is not enough training data or an agreed definition for a particular (specialized) term, AI can just get it wrong (a nice example is the "Vihart" concept from above, which got mixed up with the "Seven red lines" sketch). So this is always going to be painful to correct, because it depends on each individual concept, regardless of the machine learning technology used. The frame problem is related to this - the question of what hidden assumptions I am making when saying something.

2. The limits of reasoning with neural networks. What is really happening, IMHO, is that the AI models learn the rules of "informal" logical reasoning by observing humans doing it. Informal logic learned through observation will always have logical gaps, simply because logical lapses occur in the training data. We could probably formalize this logic by defining some nice set of modal and fuzzy operators, but no one has been able to put it together yet. Then most, if not all, reasoning problems would reduce to solving a constraint problem; and even if we manage to quantize those and convert them to SAT, it would still be NP-complete and as such potentially require large amounts of computation. AI models, even when they reason (and apply learned logical rules), don't do that large amount of computation in a formal way. So there are two tradeoffs - one is that AIs learned these rules informally and so have gaps, and the other is that in practice it is desirable to limit the amount of reasoning the AI will give to a given problem, which leads to incomplete logical calculations. This gap is potentially fixable by using more formal logic (and that's what happens when you run the AI's program through tests, type checking, etc.), with the mentioned tradeoffs.

3. Going back to the "AI as an error-correcting code" analogy: if the input you give to AI (for example, a fragment of logical reasoning) is too noisy (or contradictory), then it will just not respond as you expect (for example, it will correct the reasoning fragment in a way you didn't expect it to). This is similar to when an error-correcting code is faced with an input that is too noisy and outside its ability to correct - it will just choose a different code word as the correction. In AI models, this is compounded by the fact that nobody really understands the manifold of points that the AI considers to be correct ideas (these are the code words in the error-correcting-code analogy). In any case, this is again an unsolvable gap; AI will never be a magical mind reader, although it can potentially be mitigated by the AI having more context about what problem you are really trying to solve (the downside is that this will be more intrusive to your life).

I think these things, especially point 2, will improve over time. They already have improved to the point that AI is very much usable in practice, and can be a huge time saver.


You had me at "fuzzy", but lost me at "clean up" - because that's what I usually have to do after it went on another wild refactoring spree. It's a stochastic thing, maybe you're lucky and it fuzzy-matches exactly what you want, maybe the distributions lead it astray.

On the line test, I guess it's highly probable that the joke and a few hundred discussions or blog pieces about it were in its training data.


I only have experience with Claude Code. If it goes on a spree, the task you are giving it is too big IMHO.

It's not a SAT solver (yet) and will have trouble precisely handling arbitrarily large problems. So you have to lead it a bit, sometimes.


I was recently optimizing an old code base. If I tell it to optimize, it does stupid stuff, but if I tell it to write a profiler first and then slowly attack each piece one at a time, it does really well. It's only a matter of time before it does this automatically.

That skit has nothing to do with Vihart ... Claude hallucinated that.

> This clearly shows that AI can think critically and reason.

No it doesn't ... Claude regurgitated human knowledge.


Don't forget the line in the shape of a kitten!

Don't wait for feedback from "real users", become a user!

This Taylorist idea (which has now reincarnated as "design thinking") that you can observe someone doing a job and then decide better than them what they need is ridiculous and should die.

Good products are built by the people who use the thing themselves. Doesn't mean though that choosing good features (product design and engineering) isn't a skill in itself.


Too often that isn't possible. There is a lot of domain knowledge in making a widget, and a lot of domain knowledge in doing a job. When a complex job needs a complex widget, often there isn't enough overlap to be an expert in both.

Sure, 'everyone' drives, so you can be a domain expert in cars. However, not everyone can be an astronaut - rockets are complex enough to need more people than astronauts, and so most people designing spaceships will never have the opportunity to use one.


I find that this argument is used too often to refrain from using your own product.

Yes, you're right that not everyone can be a domain expert. But everyone in the company needs to at least try to use the product as much as possible.

I worked in companies where even the CEO had never used the product but was telling us what to implement.


I am not asking anybody to be an expert in both (although I am sure such people exist, however rare); I am saying people should ideally have some skill in both. Also, people can collaborate, and learn new skills.

If you're bottlenecked by waiting for the users of your product to give feedback, you clearly need to spend more time learning how to be a user yourself. Or hire people with some domain skill who can also code.


> LLMs have lowered the bar for the unskilled person to create shit software.

So? Demand the source code. Run your own AI to review the quality of the code base. The contracting company doesn't want to do it? Fine, find one that will.


Add another layer of jank to review the original jank? That doesn't sound like a very helpful solution. But the companies selling AI will love it!

Technical Supervision of the Investor is a thing, for a reason. The fact that the IT industry doesn't have it is ridiculous.

And more importantly, think of the funding we’ll get

> The purpose of the Department of Defense should be to defend America and Americans.

Should be, but right now, it isn't. So the name is apt, I am afraid.


I have an explanation (or rationalization, if you wish) for this.

AI has caused developer productivity to increase (similar to the other two big SW engineering productivity jumps: compilers and open source), which gives developers more leverage over employers (capital). Things that you needed a small team to build (and thus more capital) can now be done by a single person.

In the long run, this will mean more software being written, possibly by an even larger number of people (a shift along the demand curve - as the price of SW goes down, demand increases). But before that happens, companies have a knee-jerk reaction as they try to take back control over developers, while assuming the total amount of software will stay constant. Hence layoffs. But I think it's shortsighted; the companies will hurt themselves in the long run, because they will lay off people who could have built them more products in the future. (They misunderstood - developers are not getting cheaper, it's the code that is.)


> as price of SW goes down demand increases

I see this view pulled into the debate very often, but demand is not driven only by (low) cost. Demand obviously cannot grow infinitely, so the actual question IMO is when and how we reach the market saturation point.

The first hypothesis is that ~all SWEs will remain employed (demand will rise proportionally with the lower cost of development).

The second hypothesis is that some % of SWEs will lose their jobs - an over-subscription of SWE roles (lower development cost will drive demand, but not enough for the market to keep all ~30M SWEs employed).

The third hypothesis is that we will see the number of SWEs grow beyond ~30M - an under-subscription of SWE roles (demand will be so high and development cost so low that we will enter an era of hyperinflation in software production).

At this point, I am inclined to believe that the second hypothesis is the most likely one.


I agree there is a dichotomy. I personally think AIs are better debaters than humans, at the very least in their ability to make fewer logical mistakes and draw on wider knowledge. I would suggest everyone run their thoughts through an AI to get a constructive critique; it would certainly reduce a lot of wasted time.

And I find the decision to "ban" AI slightly ironic, when HN has a disdain (unlike its predecessor Slashdot) for funny or sarcastic comments, which require the reader to think more, rather than having a clear argument handed to them on a silver platter. I mean, that is what truly human communication is like - deliberately not always crystal clear.

I suspect that HN will eventually be replaced by an AI-moderated site, because it will have more quality content.


There are huge advantages to AI-moderation. TBD what the unintended consequences are. But I think it's worth trying.

I believe banning AI is a temporary solution. Even today it is very hard to tell human from AI. In the future it will be impossible. We are in the Philip K. Dick future of "Do Androids Dream" (the book, not the movie). Does it matter if we can't tell human from AI? The book proposes that how we feel about the piece we're reading is the only thing that matters. How the piece got created is irrelevant.


I think what would be nice (but won't happen until the cost of AI decreases somewhat):

1. Pre-moderation - AI looks at your comment before you submit it and suggests changes for clarity, factuality, and argumentative strength. You can decide whether to accept these (individual) changes or not. It will also automatically flag the comment if it breaks moderation guidelines too much.

2. Discussion summary - AI will periodically edit main debate points and supporting sources into a comprehensive document, which you can further add to with your comment. This will help to steer the discussion and make it easier to consume in the future. It can also make discussions less ephemeral, which is a huge problem.


There's a way to measure "entropy" of a codebase. Take something like the binary lambda calculus or the triage calculus, convert your program (including libraries, programming language constructs, operating system) into it, and measure the size of the program in it in bits.

You can also measure the crossentropy, which is essentially the whole-program entropy above minus the entropy of the programming language and of functions from standard libraries (i.e. abstractions that you assume are generally known). This is useful for evaluating conformance to "standard" abstractions.

There is also a way to measure a "maximum entropy" using types, by counting the number of states a data type can represent. The maximum entropy of a function is a crossentropy between inputs and outputs (treating the function like a communication channel).

The "difference" (I am not sure how to make them convertible) between "maximum entropy" and "function entropy" (size in bits) then shows how good your understanding (compared to specification expressed in type signature) of the function is.

I have been advocating for some time that we use entropy measures (and information theory) in SW engineering to estimate complexity (and thus the time required for a change).

