Hacker News | throwanem's comments

> Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.

I remember a lot of Steve Yegge's impressive claims from back when he and Zed Shaw were what I would call "fringe contemporaries" in the early 2010s - like all the time he spent gassing on about his unmaintainable, barely usable nightmare of a JavaScript mode for Emacs. (I did like the MozRepl integration, for what that's worth.)

I don't particularly recall him talking about smoking pot, and I think I would have, if he'd been as memorably effusive there as about js2-mode. But it's been a lot of years and I couldn't begin to remember where to look for an archive of his old blog. Would you happen to have a link?


The most obvious one is this brilliant piece on complexity:

https://steve-yegge.blogspot.com/2009/04/have-you-ever-legal...

It doesn't match OP's description, but it certainly qualifies as talk about his pot use.

There may be others.


I remember thinking of him as a skillful writer and a sometimes incisive thinker, back then. Apparently my taste has significantly improved in the interim; for a piece ostensibly about complexity, this is an embarrassingly superficial analysis from priors that already don't make any sense.

I'm not going to knock a guy today based on an almost twenty-year-old piece, especially on subjects (cannabis legalization, the quality and direction of Obama administration policy initiatives) that were widely misunderstood at the time, including by such luminaries as the Nobel committee. But Yegge really wasn't starting from so strong a position as I had misrecalled. Thanks for the link.


I haven't read it in at least ten years myself - maybe it's not as good as I recall.

I do remember that I appreciated his grasp of the fact that if you aren't deep in the weeds, you cannot understand just how complex a system really is.

I also appreciated the slow build to the actual point, which I think could help people who wouldn't hear a direct explanation understand what he was getting at.

"'Shit's Easy' syndrome" is real, and I wonder if the prevalence of LLMs doing the scutwork will lead to an entire generation of programmers who suffer from it.


Well, sure. Trying to plan events at incomprehensibly large scale is like that, as the 20th century collectivist states failed largely in consequence of too late discovering. You have to retain a sense of scale in these things, not to say humility. Meanwhile, cannabis legalization in the US proceeds apace as a fifty-state patchwork, with simple possession still a major felony some places, while commercial distribution in others is a wholly legitimate storefront affair, and someone will eventually reap a small political windfall through federal recognition of the situation in being. No one is really planning anything. It is the assumption someone must that I'm criticizing, because for all the decades of planning indulged by the interminable old-times legalization advocates, their desideratum in practice looks nothing like they ever came close to seriously imagining or predicting.

To his dubious credit, I think Yegge has in the interim learned this lesson, possibly at the cost of some others. Looking at his "Gas Town" makes the hair stand up on the back of my neck, not least for that I once had ferrets and I know what chaos they embody and wreak (and how f—ing expensive they are!); I'm sure he was intentional in his choice of the metaphor, but he's always been one of those for whom consensus reality and good sense are likewise mostly optional. So in entire fairness I have to admit I really can't see any just criticism that he's planning too much these days. But the value in such a swing from one extreme to another, versus something more closely resembling moderation, charitably has yet to be demonstrated.

(As a programmer of both fintech and actual finance experience, btw, it's very comical to me to see the Big Design Up Front approach being applied in this way to this specific example, precisely because it so little resembles how anyone genuinely approaching the task does so. It is very much how I would expect the Google of 2009 to look at things. It isn't that much like how a bank or a startup does. But I said I wasn't going to beat up on old work, and I can't pretend I had so broad a perspective myself so long ago.)


Good points.

I was similarly appalled and shocked at Gas Town. Maybe something like it is the future, but I really didn't expect Yegge to be a genAI booster.

If Gas Town has "the Quality Without a Name," I will eat my hat.

https://sites.google.com/site/steveyegge2/tour-de-babel


(Thank you for a pleasant and thought-provoking conversation, by the way! All hopes for a favorable Friday and weekend.)

Likewise!

Oh, God, spare me from the architect who must be sure he is seen to be one with the Tao. Its name is 无为 (wu wei), and Emacs, which I have used exclusively since 2010, does not "have" it, although a given human Emacs user may. (But see previously my comments with respect to js2-mode; Yegge's enthusiasm of the moment notwithstanding, he was at least not then the most obviously reliable judge.)

It isn't something that can exist in the absence of consciousness, because only in the presence of consciousness can it not exist. I grant some computer programs sensu lato may conceivably experience qualia, but even today I would be taken sorely aback to discover Emacs among them.


I did not realize that Yegge was referencing the Tao with that, though it certainly had some of that aesthetic flavor to my untutored Western ears.

I can roughly intuit how it might be something which can only be relevant in the presence of consciousness, despite my near-total lack of knowledge of any religious tradition outside the Western ones.

I agree that conscious programs in some sense are conceivable, but I'm skeptical of it myself, especially in comprehensible programs, however large - something self-documenting and readable is nearly the opposite of the human brain, which is the only thing we really have strong reason to believe is conscious (by way of each possessing one).


Properly, with "the Quality without a Name" Yegge was referencing Christopher Alexander's The Timeless Way of Building (1979) wherein that phrase is - one would ordinarily say 'defined,' but in this case the author strove with what I consider deeply tasteless artifice to inflict a mostly ersatz epiphany. (It is an extremely 2009 Google or "Chocolate Factory" kind of book.) It was Alexander whom I excoriated as the architect who etc., since he was that. (His work on the U of O campus gets too much credit; Eugene could not but have been lovely, anyway, and it was not the town's fault I wilt for want of full sun.) In any case to construct the idea as "religious" obscures a trivially essential point, in that to do so is like saying you're worried the Name might get mad if you pick up a hammer. Oh, if with a heart of hate or concupiscence then sure, that's a problem, but Jimmy Carter built houses with Habitat for about a million years and I know flights of angels sang that man to his rest. The "Tao," if we like, is a hammer. Anyone is free to believe in it or not. It drives nails just the same either way. 'The rest is commentary.' Don't worry about it too much.

I'm not actually much of a mystic, though some who've known me might disagree, especially after that last paragraph! My concept of consciousness is broadly both mechanistic and scalar, which having arisen is reliably conserved because abstraction, reflection, and introspection are behaviors whose adaptive benefit easily compounds on itself. (The singularitarians aren't wrong that getting smarter makes you better at getting smarter; they just have no idea what "smart" means.) I am also wholly unapologetic about the wholly intuitive and qualitative nature of that understanding, not least because to be both at once places me serenely beyond the moist and smelly grasp of rote scientism. For example, my friends who've been wasps were not less conscious in my estimation than myself and my friends who are human, but I would say they perhaps reflect and ramify less deeply. One might resort for a mental model to the concept of a space-filling or Peano curve [1]: we iterate many orders more deeply than even the most capacious of social wasps, to be sure! But I have seen a Polistes exclamans wasp comfort her anxious and frightened sister with a hug in my kitchen (2), and I've seen them learn me as the final waypoint of what, given the unusually capable aerialism and extensive navigational skills of the average Polistes metricus forager, could well have been a longer and more complex daily commute than mine. (And I never have to deal with birds trying to eat me!)

So these are not at all stupid or robotic animals, the social wasps. As terrestrial predators and foragers who hunt energetically expensive prey by sight, they experience many of the same selection pressures as we do toward episodic memory, constructive theory of mind, kinship recognition by sight rather than odor (and thus at much greater distance,) and other such relatively complex cognitive skills. Also, I have watched a wasp sleep, and seen the rate of her breathing oscillate in a fairly close parallel to the periodicity one sees in the stages of mammalian sleep. I believe they may experience something very like the voluntary paralysis of our REM sleep. I believe there is no reason for such an inhibitory circuit to develop and be conserved, other than the reason we have it. In short I believe they may very well dream, in some way meaningfully like we do, and again for the same reasons. (I incline, in my incompetently autodidactic manner, toward the "integrated information theory" expounded by Hoel, at least inasmuch as I borrow the need for balancing surprise minimization versus overfitting avoidance, but I'm not really dogmatic about it.) And finally, ineluctably, I defy anyone anywhere to show me that of any kind which dreams yet is not conscious.

These are not only (or not all only) individual observations via personal correspondence, either; I'm happy to cite and discuss at length the specific details of the ethology and neurology underlying such complex behavior, which I may not be the first to observe is strongly suggestive of social wasps exercising a constructive theory of mind for a species deeply dissimilar to themselves, ie we H. sap. A good lay overview, written much from a love which I recognize, is Seirian Sumner's 2022 Endless Forms. I forget offhand if she is as explicit as I'm being, but that's okay; no one really of whom I'm aware is really making the kind of (what is arguably a) leap that I am, to treat consciousness in this way; an unkind critic might accuse me of half-assing my way to some half-baked animism, through a daytime-TV pop science conception of consciousness as waves hands I dunno...holographic? Luckily, with no costly postnominals to defend nor student loans to defray, I suppose I'm free to say more or less what I like. (Such as that, if Sumner leaves you wanting more, a good next step into entomology proper - and one of my own first sources! - is 2018's The Social Biology of Wasps.)

Even the largest and fanciest frontier model (properly the vast infrastructure which serves it, which may to some useful ends be considered as a kind of organism) is many orders of magnitude less complex in both "neurome" and connectome than even the most basal of social wasps, and there is no real cause to expect this will change in our lifetime. (Wasps are not getting simpler, and God as yet still stubbornly refuses to be invented by Sam Altman.) A human's brain of course ramifies as many orders further still, but no matter; if there was only ever one example of "Shit's Easy syndrome," I must surely be making fun of it now, in the idea that our programs express our minds more magically than any other form of human mechanism or artifice, so much so as to encapsulate them, much less surpass them.

If a conscious computer system ever arises - and note by that 'in the broad sense,' I include eg the idea of the entire planetary network considered as "a" consciousness, so we're definitely not aiming for any immediate or concrete mapping for that intentionally nebulous concept - then I confide there will also arise humans able to recognize it as like themselves, and vice versa. I would not expect them to find it more comprehensible than they find themselves, or for that matter than it would likely find itself or them. Good grief, who ever does in this life?

(And at no doubt welcome last, thanks once more for the nudge to further work in clarifying my thesis and its argument, perhaps not without interest. I regret if I've given the impression of making light of your faith despite that I do not share it. Oh, I have my differences with Them Upstairs, and we'll work those out by and by - but that is no fault of yours so far as I know, and I hope I haven't made it too much your problem.)

[1] https://en.wikipedia.org/wiki/Space-filling_curve

(2) I was sheltering them from a cold snap, an experience more or less semiotically indistinguishable for them from an alien abduction, although I of course had the grace as a host not to stress them unnecessarily. We all had a hell of a fight on our hands anyway, the night the local pavement ant supercolony caught wind and mounted an invasion, but the next morning was finally warm and mild enough for them to disperse. I suppose things turned out well enough in their eyes, since the family stuck around and we were porch neighbors for a few years after that.


One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."

Dijkstra was a mathematician. It is a necessary discipline. If it alone were sufficient, then the "program correctness" fans would have simply and inarguably outdone everyone else forty years ago at the peak of their efforts, instead of having resorted to eloquently whiny, but still whiny, thinkpieces (such as the 1988 example [1] quoted here above) about how and why they would like history to understand them as having failed.

[1] https://www.cs.utexas.edu/~EWD/ewd10xx/EWD1036.PDF [2]

[2] I will freely grant that the man both wrote and lettered with rare beauty, which shames me even in this photocopier-burned example when I compare it to the cheerful but largely unrefined loops and scrawls of my own daily hand.


The formal methods people may yet have the last laugh. I did not have Lean becoming a hyped programming language / proof assistant on my bingo card for 2025-26 and yet here we are, because these tools help us close the validation loop for LLM agents. That is not dead which can eternal lie...

But yes, I think the best rebuttal to Dijkstra-style griping is Perlis': "one can't proceed from the informal to the formal by formal means". That said, rather like Chesterton's quip about Christianity, I believe formal methods have mostly not been tried and found wanting so much as found hard and left untried - by myself included, although I do enjoy a spot of the old dependent types (or at least their approximations). There's an economic argument lurking there about how robust most software really needs to be.
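Since proof assistants came up: for anyone who hasn't touched one, here is a minimal, illustrative taste of what "closing the validation loop" looks like in Lean 4. The theorem and its name are a toy example of my own choosing, not anything from this thread:

```lean
-- Toy specification: reversing a list preserves its length.
-- The kernel checks the proof; `simp` discharges the goal using
-- standard library lemmas such as `List.length_reverse`.
theorem reverse_preserves_length (xs : List Nat) :
    xs.reverse.length = xs.length := by
  simp
```

The appeal for agent workflows is that the checker, not a human reviewer, carries the burden of validation: if the proof compiles, the stated property holds.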


Certainly, and it's at that economic argument that I strive to get, I think.

Every so often an article makes the rounds on the correctness and verification methods used for Space Shuttle avionics software and applications of similar import, or if not that then Nancy Leveson's comprehensive 1995 review of the Therac-25 accidents. [1]

Most software doesn't need to be nearly so robust, but Dijkstra constructs his argument as though all did, hinging the inversion on the obvious and frankly shocking cheat across the gap between his pages 14 and 15, ie, that paragraph beginning "But before a computer is ready to perform..." Here he casually, and without direct acknowledgement much less justification, assumes as rhetorically axiomatic that a program, not the machine that executes it, is the original artifact of computing, of which any reification merely constitutes less than perfect instantiation, which he is then free to criticize on the wholly theoretical grounds of mathematical beauty; that is, on the grounds he prefers to inhabit in all cases, whether to do so in any given example makes any sense or not.

If that's his preferred ground, fair enough; after all, he was a mathematician. But his hypocrisy in concealing the insistence by means of subtle rhetoric - mere pages after inveighing against "medieval thinking" by way of an example, his "reasoning by analogy," faulting specifically that argument made by way of specious rhetoric! - casts suspicion on all that both precedes and follows. From a layperson, I could regard it as honest error, but I have known and loved academic mathematicians, and I really can't conceive of any of them leaving intact so consequential a mistake.

Perhaps Dijkstra was different, or merely becoming old, but for someone so heavily invested in pushing a paradigm of programming with mathematical rigor at its core, it seems a remarkable flaw in what should be a crucial argument (especially in advance of a solution for the halting problem). I regret that flaw, because he isn't all wrong about what an engineering paradigm can do to the agency and optionality of programmers especially in industry - not that his own extremely privileged position therein, parallel with Feynman's time at Thinking Machines, would much acquaint him with our desiderata or our constraints - and I would like to find that point made in better company than he was able to give it.

But then, his conception never offered much in preference, did it? The labor of mathematicians is scarce and expensive: what good is a proof assistant to anyone who can't understand its output, much less give it input? And Dijkstra himself, not less strange a bird than any other mathematician, famously did all he could to avoid actually using the machines on whose correct use he here wrote. (Hence his hand, which I complimented so highly before. I also use a fountain pen, but as I said, not so beautifully - and I'm glad I know how to use a keyboard well, instead.)

There would not be more programmers or more software in a world run on such principles, I think, than in this one - on the contrary, less by far. Maybe that would be preferable, but mostly not for the reasons Dijkstra claimed.

[1] http://sunnyday.mit.edu/papers/therac.pdf


There is no liability or penalty for software defects and therefore no incentive for program correctness. Arguably, Dijkstra didn't fail; society has foolishly decided not to hold developers accountable for their bad code.

Arguably? Okay, so argue it.

I think the real tragedy here is that we can spend *all* of our time trying to improve the quality of our output, but it simply doesn't matter, because as long as the button is where the boss wants it to be and is the right color, all is right with the world.

Literally nothing else matters, and we (or at least I) have wasted a ton of time getting good at writing software.


As long as it continues to matter what the button actually does, I can't consider our effort to have been entirely wasted. We only have the misfortune to live in stupid and dangerous times, but good heavens, we're hardly the first in that, and hardly starved for examples from whom to learn.

> One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."

I agree, but I'm not sure this says what you think it does.

The people on the car assembly line may know nothing of engineering, and the assembly line has theoretically been set up where that is OK.

The people on the software assembly line may also (and arguably often do) know nothing of engineering, but it's not clear that it is possible to set up the assembly line in such a way so as to make this OK.

Arguably, the use of LLMs will at least have some utility in helping us to figure this out, because a lot of LLMs are now being used on the assembly line.


I hope OP has friends.

I'm much the same. It's hard to notice. Coming up having onsites with three and four and five consulting clients in a "normally busy" day (you want 'high touch'? I started before smartphones!) taught me to associate just that purposeful attitude with the satisfying knowledge that I probably wouldn't end the day running further behind than when I started.

It's a good energy to bring to work or to a crisis. Everywhere else? Maybe not so much. I appreciate you mentioning it here; it can serve me also, as a reminder to work on that.


Worthy of the HN black bar, I feel.


I think that depends on whether dang was a Nintendo kid or a Sega kid.


black bar?


This site will often have a thin black bar along the top of the page as a mark of commemoration for someone noteworthy who has recently died.


Interesting... I wonder why I have never seen it then.


At least for me on mobile with a dark mode address bar, it's usually quite hard to see it.


One of SOMA's more subtle, and much more effective, narrative choices was to make its protagonist and its player character two entirely different people.


That’s an interesting interpretation, but I’m not so sure how much of it was intentional, and how much of it was a result of the developers calibrating the narrative flow for people who aren’t that familiar with sci-fi mind-uploading tropes. I do agree that the protagonist had some perceptual blocks due to his “condition”.

If you recall, the difficulty of the gameplay-focused portions of SOMA was also perceived radically differently by more experienced gamers and by people who were in it just for the story, leading to the eventual release of a story mode patch.


I might suggest the two cases differ less in detail than at first blush, in that each describes an effort to pitch one of the game's ludic ('play-wise') or narrative aspects toward a broader audience than might select itself into playing something with such a high-concept story and that could also look like belonging in the sometimes fraught "walking sim" genre. I met System Shock 2 in 1999 and have been so imsim-brained since that I don't even play first-person games with nondiegetic music turned on, so SOMA is kind of a theme park for me, but I fully get why they'd ship a patch to give folks the option. Part of tuning the experience is giving its audience the tools to tune that for themselves; that's one of the things at which games can really excel.

With respect to protagonist vs. player character, that deserves discussion in detail, because it's an important and quite central decision to make those roles distinct. The technique itself isn't novel at all; in games I grant it's not that common, but Doyle for example did the same thing in his Sherlock Holmes stories, whose titular "scientific detective's" intentions and actions largely drive their plots, while we as the audience enjoy our perspective on said plots - and explanations thereof - from the viewpoint of the doughty Dr. Watson. Just as with the Holmes stories, SOMA's has, as you rightly note, a lot of work to do in explaining itself, if it's going to appeal successfully to an audience not already steeped in its ideas. Even in a book, that's better done in dialogue and action than by pseudencyclopedic "info dump," and in a game certainly no less so - indeed much more so, if we as the player are not to start rolling our eyes and skipping cutscenes. Hence placing Catherine in the protagonistic "Holmes" role of primary influence in developing the action of the plot, while Simon, the character who gives the player access to the story, acts as a kind of "Watson."

The parallel is far from exact and shouldn't be overindexed upon, but the technique is the same in both cases and, absent an authoritative comment on the matter from the authors of SOMA, I don't see cause to assume it more accidental in one case than the other. Indeed, absent evidence otherwise, its importance here makes the only sensible assumption that the choice was extremely intentional, in that a player naïve to the game's ideas may well also need them explained several times, and doing that in a game with its own clock demands more artifice than in a novel whose text unfolds exactly at the reader's pace. Even in a game allowing cutscene replay, that's a lot more disjointing to the experience than rereading a paragraph in a novel, or even pausing to think it over for a moment. Artifice, then, is required to obtain an excuse for organically repeating the plot's basics several times from different angles, to help make sure it can sink in and not leave the naïve player with a plaintive "...what the hell?" Giving the player access to the story via a well-meaning but brain-damaged, ignorant, incurious, and mostly useless early-21st-century airhead such as Simon constitutes exactly such artifice, and I regard its implementation here as a minor stroke of genius. SOMA's is a story I'd be proud to have written.

And for those of us not in need of so much handholding on ideas like the difference between mind uploading and consciousness transfer...well, this is meant to be an existential horror story, after all, what with the extinction-level event immediately following the prologue in our (and Simon's) subjective experience. Irritating though his obtuseness can be, try to have a heart for the poor dumb ox, won't you? It's good emotional exercise. After all, he just thought he was going in for some kind of experimental MRI or something. It isn't really his fault he finds himself in this situation so far beyond his comprehension, both in circumstance and in the demands with which it taxes the character of those present.

You or I would speak and act much differently, no doubt. (One hopes. TBI and fear can both change a person for the lesser.) But the story isn't trying to be about what you or I would say and do. I think that choice improves the result, which I admit is slightly surprising; in this sense it resembles Ian McEwan's Solar, which I cordially despise exactly for its viewpoint character's constant venal fecklessness, but that guy had a lot of choices which Simon really doesn't, and I think that makes a real difference in how the characters deserve to be read. That as far as it goes is opinion to which the usual dictum applies, but I do mean to push back on the idea that the game's story is like it is in any way by mistake.

(If nothing else, good grief, think how much complexity would be alleviated by not voicing Simon. No VA to find and work with, none of his dialogue to write and accommodate in the narrative, no ludonarrative dissonance to attend - Simon cost the devs a lot of money to implement the way they did! If that core a decision to both story and production had been made by mistake, it might be hard to understand how the same people who screwed that up so bad could get so much else so right, you know?)


HF-secreting microorganisms with a tropism for a material extremely common in the human environment? No way that could go wrong.


I'm not asking whether it could go "wrong" - that's a matter of perspective anyway.


I guess you can see it can't work because glass is typically the material used by scientists to contain microbes.


Yeah, I just wonder if it is entirely impossible, or if there MAY be a way, even if only theoretically (so far). Or is it like "it certainly can never happen"?


You imply microorganisms capable of digesting glass, stone, and sand, via a hydrofluoric-acid-concentrating biochemistry that I suspect is not even remotely possible in Earth's atmosphere, which is something of a relief because if it did happen, it would almost certainly end all other life on Earth within a span of years to decades.


I do not imply their existence. My question is, can they exist or is it completely impossible, even theoretically?


Nothing is impossible in theory.


OK, well, is it completely impossible in practice? I just want to get educated! Gosh.


Good to see a couple stories here I first read in Clarkesworld! If you're a fan of the old-school SF magazine as form, there's no better place to go lately, in my view, and Neil's editorial taste is excellent - if you like this anthology, you'll enjoy the magazine, also. Take a look!

https://clarkesworldmagazine.com/


Thank you for posting a link to Neil's magazine! I think he's the best short-form editor working in SFF today.


I occasionally encounter a story in Clarkesworld that I don't click with and skip over, but most of them range from like to love (I really hope The Apologists [1] from this month's issue wins some awards).

Even though he makes each issue free to read online, I've been buying it on Kobo every month for around a year now to help support the magazine. Too bad the platform doesn't seem to support subscriptions, so I have to buy each issue manually.

[1] https://clarkesworldmagazine.com/thompson_11_25/


IMO, if you like every single story that a magazine publishes, the editor is playing it too safe and not doing their job properly.

The biggest advantage of short fiction magazines over longer form is that it's a lower stakes way to try out new ideas and ways of telling stories and to expose readers to new authors.

Doing that means taking some risks and publishing some stories that won't always land with readers.


Christian Bale played him in a Michael Lewis movie.


It's the region, or maybe just the GM. The Parkville and Rockville stores aren't like that at all.

