Spaces are not for fullscreen; they're basically virtual desktops, i3-on-Linux style.
Here is a superior user experience:
1. Install Moom. Its keyboard window arrangement is second to none, and its two-step tiling is a killer: e.g. caps-a shows a popup with all the shortcuts, then the "a" key for a vertical 1/3 of the screen, or "s" for the middle 2/3, or "q" for the top-left third — you can assign any letter to any portion of the screen.
2. Use Option-1 through Option-6 to switch between desktops.
3. For example, Alt-4 is a desktop where you have everything on one screen (suppose you have a 6K XDR like I do): Safari, Mail, Messages, Telegram, Hey email, Reeder.
Alt-3 is your productivity desktop with Things, Calendar, Basecamp, Notes, iA Writer.
Alt-1 and Alt-2 are for your main work, like the Rider IDE or what have you.
Alt-5 is for your remote stuff: remote desktop, servers, what have you.
—
So with this you have a mental model of where everything always is, and instant switching to it. Want to see your todos and notes? Alt-3. Want to see your browser and messaging? Alt-4. You get it.
Moom is better than a tiling manager for screens like a 6K 32" XDR.
Otherwise tiling managers are perfectly fine; for instance, on Windows I use komorebi.
I've been programming for 20 years now, and I think what many people get wrong about these estimates is that they give them too early. The truth is that for many projects the only truthful answer you could give someone to the question of how long it will take is: "That depends on many things, some of which I don't know, some of which we both don't know, and some of which potentially nobody knows." After that you should say: "In my experience it takes between x and y weeks, with a lot also depending on how responsive your side is."
Time estimates are always hard, not only in programming. And outside of programming, one of the main uncertainties is customers changing the plan or wanting adjustments. This is the side you can't really control, so it is best to get a feeling for the customer, their communication patterns, and their expectations early on and factor that in. The other uncertainty is tough problems you encounter during the programming phase. How well you can deal with those depends a lot on how experienced your programmers are and how much they were involved in the initial process.
The truth is that the latter uncertainties make up a major part of the whole thing, and it has to be okay to tell a customer you can't give them an estimate before you know some more details.
Several years ago I bought a product that aimed to solve this problem. It was a set of rails, a door, and a controller/motor that would raise and lower the door, via a string.
The door itself has a spring-loaded catch at the bottom, which is retracted when the door is lifted, via a little mechanism built into the door. Pull up on the peg, the latch retracts, and you can slide the door open.
The controller was the weakest part of this whole assembly. It worked, but it was crude and would often lock the hens out, like when summer thunderstorms darkened the sky. It just used a light sensor.
Last year I replaced the controller with an ESPHome device I built, and it's been going strong all summer and winter.
> Apollo was over three orders of magnitude more efficient in producing scientific papers per day of fieldwork than are the MERs. This is essentially the same as Squyres’ (2005) intuitive estimate given above, and is consistent with the more quantitative analogue fieldwork tests reported by Snook et al. (2007).
Scientific papers are a pretty poor measure of productivity, so here's another one. We know about the presence of He-3 in lunar soil thanks to the samples astronauts brought back from the moon. Astronauts set up fiddly UV telescope experiments on the moon, tried to set up a gravimeter to measure gravitational waves, and dug into the soil to place explosive charges at different ranges for seismic measurement of the moon's subsurface... They were extremely productive. Most of what we know about the moon we owe to the 12 days spent on the lunar surface.
Yes, a robot car that drives on its own will be a better driver than most humans who text and drive, or have 400ms reaction times.
But making a machine that can beat a human with a 110 ms reaction time, a 2SD+ IQ, and the ability to override the ground controllers out of sheer curiosity is much harder. Humans have high dexterity, are extremely capable of switching roles fast, are surprisingly efficient, and force us to return home.
So in terms of total science return, one Apollo mission did more for lunar science and discovery than 53 years of robots on the surface and in orbit.
Regardless of whether this particular mission is perfectly planned, this is precisely the kind of thing that will help humanity outgrow the dark age of war, inequality and climate mismanagement.
It is a noble endeavor - science, engineering and peaceful exploration hold the keys to our survival and prosperity.
It is also important psychologically to our survival - a reminder there is a bigger pie, that we can solve hard problems, that progress can be made, that competence and education count, as does courage, and that we can work together for a common cause.
This is the best of America, and for a while we can be proud of the human race.
It is very disconcerting to see so many completely disregarding incredible technological innovation because other problems exist, especially on HN.
If we were not allowed to progress technology until everybody is 100% free of suffering, we'd never be able to create the technology that may eventually lead to the alleviation of suffering. It all feels very crabs-in-a-bucket - "I don't feel happy so nobody else should, and nothing should happen unless it is things that directly, immediately do things I want and solve problems I care about."
You're making the mistake of thinking of "nature" and "evolution" as intelligent, reasoning systems, and that every evolutionary adaptation exists for a purpose. Evolution doesn't do things for "reasons," things just happen.
Remember that cephalopod brains are donut shaped and their digestive tracts go right through the middle, and if they eat something too big they'll have an aneurysm. Pandas and koalas evolved special diets that serve no evolutionary purpose, and both would be extinct if humans didn't find them cute. Sloths have to climb down from trees to take a shit. Female hyenas give birth through a pseudopenis that often ruptures and kills them. Horses can't vomit, and if they swallow something toxic, their stomach ruptures. Also their hooves and ankles are extremely weak and not well designed to support their weight. Numerous species like the fiddler crab and peacock have evolved sexual displays that are actively harmful to their survival.
And as for humans, our spines are not well adapted for walking upright, our retinas are wired backwards, and we still have a useless appendix and wisdom teeth. The recurrent laryngeal nerve has an unnecessarily long and complex route branching off the vagus and travelling around the aorta before running back up to the larynx.
Evolution is not smart. Evolution isn't even stupid. It isn't trying to keep you alive and it isn't even capable of caring if you die. Yes we should absolutely fuck with it, because we don't want to live in a world where we still die of sepsis and parasites and plagues because "we don't want to mess with evolution."
Reasoning by analogy is usually a bad idea, and nowhere is this worse than talking about software development.
It’s just not analogous to architecture, or cooking, or engineering. Software development is just its own thing. So you can’t use analogy to get yourself anywhere with a hint of rigour.
The problem is, AI is generating code that may be buggy, insecure, and unmaintainable. As a community, we have spent decades trying to avoid producing that kind of code. And now we are being told that productivity gains mean we should abandon those goals and accept poor quality, as evidenced by MoltBook’s security problems.
It’s a weird cognitive dissonance and it’s still not clear how this gets resolved.
Architects went from drawing everything on paper, to using CAD products over a generation. That's a lot of years! They're still called architects.
Our tooling just had a refresh in less than 3 years, and it leaves heads spinning. People are confused, fighting for or against it. Torn even between 2025 and 2026. I know I was.
People need a way to describe it, from 'agentic coding' to 'vibe coding' to 'modern AI-assisted stack'.
We don't call architects 'vibe architects' even though they copy-paste 4/5ths of your next house and use a library of things in their work!
We don't call builders 'vibe builders' for using earth-moving machines instead of a shovel...
When was the last time you reviewed the machine code produced by a compiler? ...
The real issue this industry is facing is the phenomenal speed of change. But what are we really doing? That's right, programming.
That subheading is complete nonsense, and I can't think of a single charitable reading of that sentence that makes any sense. Archaeologists have known since the Acheulean industry was conclusively dated in the 1850s that our ancestors have been making tools for over a million years. It took half a century for archaeologists to figure that out after William Smith invented stratigraphy. Scientists didn't even know what an isotope was yet.
The original paper's abstract is much more specific (ignore the Significance section, which is more editorializing):
> Here, we present the earliest handheld wooden tools, identified from secure contexts at the site of Marathousa 1, Greece, dated to ca. 430 ka (MIS12). [1]
Which is true. Before this, the oldest handheld wooden tool with a secure context [2] was a thrusting spear from Germany dated to ~400 ka [3]. The oldest evidence of woodworking is at least 1.5 million years old, but we just don't have any surviving wooden tools from that period.
[2] This is a very important term of art in archaeology. It means that the artefact was excavated by a qualified team of archaeologists who painstakingly recorded every little detail of the excavation so that the dating can be validated using several different methods (carbon dating only works up to about 60k years).
I think some of this is caused by the non-obvious mechanisms of how interactions on these platforms work.
When you replied to a thread on a phpBB forum (or when you reply to this HN thread), your reply "lived" in that thread, on that forum, and that was that. The algorithm wouldn't show that reply to your dad.
I remember liking a comment on Facebook years ago, and being horrified when some of my friends and family got a "John liked this comment, join the discussion!" notification served straight onto their timelines, completely out of context. I felt spied on. I thought I was interacting with a funny stranger, but it turned out that that tiny interaction would be recorded and rebroadcast to whomever, without my knowledge.
Similarly, commenting on a youtube video was a much different experience when your youtube account wasn’t linked to all your personal information.
If you comment on a social media post, what’s going to happen? How sure are you that that comment, however innocuous it may seem now, won’t be dredged up 8 years from now by a prospective employer? Even if not, your like or comment is still a valuable data point that you’re giving to Zuckerberg or similar. Every smallest interaction enriches some of the worst people in the industry, if not in the world.
The way I speak, the tone I use, the mannerisms I employ, they all change depending on the room I’m in and on the people I’m speaking to - but on modern social media, you can never be sure who your audience is. It’s safer to stay quiet and passive.
If you tried to turn the sensor data into light again, there would not be enough information to do so accurately. Everything is built around human perception of color. When light hits your eye, it produces a "tristimulus value" for your brain. (The tristimulus is produced by the "S"-, "M"-, and "L"-sensitive cones; these roughly correspond to blue, green, and red, which is why those are the base colors we use. But you could use other colors, and you could use more than 3 if you wanted to. There is no law of the universe that splits colors into red/green/blue parts... that's just a quirk of human anatomy.)
The goal of a digital camera is to be sensitive to colors in the same way that your eyes are. If that tristimulus can be recorded and played back, your brain won't know the difference. The colors your monitor emits when viewing a photograph could be totally unrelated to what was in the original scene, but since your brain is just looking for a tristimulus, as long as that same tristimulus is produced, you won't be able to tell the difference.
(Fun fact -- there are colors you can see that don't correspond to any single wavelength of light. No single wavelength stimulates your S and L cones without also stimulating the M cones in between, yet plenty of things are magenta.)
TL;DR: computerized color is basically hacking your brain, and it works pretty well!
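To make that "just reproduce the tristimulus" idea concrete, here's a toy Python sketch. The Gaussian curves standing in for the CIE colour-matching functions are rough assumptions of mine (real pipelines use tabulated CMF data); the XYZ-to-sRGB matrix and gamma encoding are the standard ones.

    # Toy sketch: spectrum -> XYZ tristimulus -> sRGB.
    # The Gaussians below are crude stand-ins for the CIE 1931 colour-matching
    # functions (an assumption, not real CIE data).
    import numpy as np

    wl = np.arange(380, 781, 5.0)  # wavelengths in nm

    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    xbar = 1.06 * gauss(wl, 600, 38) + 0.36 * gauss(wl, 446, 20)
    ybar = 1.00 * gauss(wl, 556, 47)
    zbar = 1.78 * gauss(wl, 449, 23)

    def spectrum_to_srgb(spd):
        """Reduce a spectral power distribution to an XYZ tristimulus,
        then map it to gamma-encoded sRGB in [0, 1]."""
        X = np.trapz(spd * xbar, wl)
        Y = np.trapz(spd * ybar, wl)
        Z = np.trapz(spd * zbar, wl)
        xyz = np.array([X, Y, Z]) / max(Y, 1e-9)      # normalize so luminance Y = 1
        m = np.array([[ 3.2406, -1.5372, -0.4986],    # standard XYZ -> linear sRGB
                      [-0.9689,  1.8758,  0.0415],
                      [ 0.0557, -0.2040,  1.0570]])
        rgb = np.clip(m @ xyz, 0.0, 1.0)
        return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)

    # Wildly different spectra all collapse into three numbers, and three
    # numbers are all a display has to reproduce.
    print(spectrum_to_srgb(gauss(wl, 550, 5)))   # narrow green spectral line
    print(spectrum_to_srgb(gauss(wl, 550, 40)))  # broad greenish bump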
"Y.T.’s mom pulls up the new memo, checks the time, and starts reading it. The estimated reading time is 15.62 minutes. Later, when Marietta does her end-of-day statistical roundup, sitting in her private office at 9:00pm, she will see the name of each employee and next to it, the amount of time spent reading this memo, and her reaction, based on the time spent, will go something like this:
Less than 10 min.: Time for an employee conference and possible attitude counseling.
10-14 min.: Keep an eye on this employee; may be developing slipshod attitude.
14-15.61 min.: Employee is an efficient worker, may sometimes miss important details.
16-18 min.: Employee is a methodical worker, may sometimes get hung up on minor details.
More than 18 min.: Check the security videotape, see just what this employee was up to (e.g., possible unauthorized restroom break).
Y.T.’s mom decides to spend between fourteen and fifteen minutes reading the memo. It’s better for younger workers to spend too long, to show that they’re careful, not cocky. It’s better for older workers to go a little fast, to show good management potential. She’s pushing forty. She scans through the memo, hitting the Page Down button at reasonably regular intervals, occasionally paging back up to pretend to reread some earlier section. The computer is going to notice all this. It approves of rereading. It’s a small thing, but over a decade or so this stuff really shows up on your work-habits summary."
My mirrorless camera shoots in RAW. When someone asks me if a certain photo was “edited”, I honestly don’t know what to answer. The files went through a RAW development suite that applied a bewildering amount of maths to transform them into an sRGB image. Some of the maths had sliders attached to it, and I have moved some of the sliders, but their default positions were just what the software thought was appropriate. The camera isn’t even set to produce a JPEG + RAW combo, so there is literally no reference.
While I appreciate anyone rebuilding from the studs, there is so much left out that I think is essential to even a basic discussion.
1. Not all sensors are CMOS/Bayer. Fuji's APS-C series uses X-Trans filters, which are similar to Bayer but a very different overlay. And there's RYYB, Nonacell, EXR, Quad Bayer, and others.
2. Building your own crude demosaicing and LUT (look-up table) process is fine, but it's important to mention that every sensor is different and requires its own demosaicing and debayering algorithms, fine-tuned to that particular sensor (a crude illustrative sketch follows at the end of this comment).
3. Pro photogs and color graders have been doing this work for a long time, and there are much more well-defined processes for getting to a good image. Most color grading software (Resolve, SCRATCH, Baselight) has a wide variety of LUT-stacking options to build proper color chains.
4. etc.
Having a discussion about RAW processing that talks about human perception w/o talking about CIE, color spaces, input and output LUTs, ACES, and several other acronyms feels unintentionally misleading to someone who really wants to dig into the core of digital capture and post-processing.
(side note - I've always found it one of the industry's great ironies that Kodak IP - Bryce Bayer's original 1976 patent - is the single biggest thing that killed Kodak in the industry.)
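Here is that crude illustrative sketch, assuming a plain RGGB layout: bilinear demosaicing written as a normalized 3x3 box filter per channel. It's a minimal stand-in, not any manufacturer's pipeline, and real converters replace it with sensor-specific, edge-aware algorithms.

    # Minimal sketch: bilinear demosaic of an RGGB Bayer mosaic (illustrative only).
    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """mosaic: 2-D array of raw values laid out as
           R G R G ...
           G B G B ...
        Returns an (H, W, 3) RGB image by averaging each channel's
        available samples in every pixel's 3x3 neighbourhood."""
        h, w = mosaic.shape
        ys, xs = np.mgrid[0:h, 0:w]
        masks = {
            "R": (ys % 2 == 0) & (xs % 2 == 0),
            "G": (ys % 2) != (xs % 2),
            "B": (ys % 2 == 1) & (xs % 2 == 1),
        }
        kernel = np.ones((3, 3))
        out = np.zeros((h, w, 3))
        for i, c in enumerate("RGB"):
            samples = mosaic * masks[c]
            counts = convolve(masks[c].astype(float), kernel, mode="mirror")
            out[..., i] = convolve(samples, kernel, mode="mirror") / np.maximum(counts, 1)
        return out

    raw = np.random.rand(6, 8)           # stand-in for raw sensor data
    print(demosaic_bilinear(raw).shape)  # (6, 8, 3)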
Several reasons:
- Silicon efficiency (QE) peaks in the green.
- The green spectral response curve is close to the luminance curve humans see, like you said.
- Twice the pixels increases the effective resolution in the green/luminance channel; the color channels in YUV contribute almost no detail.
Why are YUV and other luminance-chrominance color spaces important for an RGB input? Because many processing steps and encoders work in YUV color spaces. This wasn't really covered in the article.
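A minimal sketch of that luma/chroma split, assuming the BT.601 coefficients (the helper name is mine; real encoders add offsets, quantization ranges, and other primaries such as BT.709 on top of this). Note how much of the luma comes from the green channel.

    # Minimal sketch: RGB -> luma/chroma split with BT.601 weights.
    import numpy as np

    def rgb_to_ycbcr(rgb):
        """rgb: (..., 3) array of gamma-encoded values in [0, 1]."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma: green carries ~59% of it
        cb = (b - y) * 0.564                     # blue-difference chroma
        cr = (r - y) * 0.713                     # red-difference chroma
        return np.stack([y, cb, cr], axis=-1)

    print(rgb_to_ycbcr(np.array([0.0, 1.0, 0.0])))  # pure green -> Y' ~= 0.587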
My dad devised the "Bayer filter" used in digital cameras, in the 1970s in the Kodak Park Research Labs. It is hard to convey now exactly how remote and speculative the idea of a digital camera was then. The HP-35 calculator was the cutting edge, very expensive consumer electronics of the day; the idea of an iPhone was science fiction. Simply put, my dad was playing.
This was the decade that the Hunt brothers were cornering the silver market. Kodak's practical interest in digital methods was to use less silver while keeping customers happy. The idea was for Kodak to insert a digital step before printing enlargements, to reduce the inevitable grain that came with using less silver. Black and white digital prints were scattered about our home, often involving the challenging textural details of bathing beauties on rugs.
I love posts that peel back the abstraction layer of "images." It really highlights that modern photography is just signal processing with better marketing.
A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data.
In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.
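For anyone unfamiliar with what 4:2:0 means mechanically, here's a toy sketch assuming the simplest possible filters (2x2 block averaging down, nearest-neighbour back up): the luma plane stays at full resolution while each chroma plane keeps only a quarter of its samples.

    # Toy sketch of 4:2:0 chroma subsampling.
    import numpy as np

    def subsample_420(plane):
        """Encode side: average each 2x2 block of a chroma plane."""
        h, w = plane.shape
        p = plane[:h // 2 * 2, :w // 2 * 2]
        return p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def upsample_420(plane):
        """Decode side: nearest-neighbour upsample back to full resolution."""
        return plane.repeat(2, axis=0).repeat(2, axis=1)

    cb = np.random.rand(8, 8)              # stand-in chroma (colour-difference) plane
    cb_stored = subsample_420(cb)          # 4x fewer chroma samples to encode
    cb_decoded = upsample_420(cb_stored)   # viewers rarely notice the difference
    print(cb.shape, cb_stored.shape, cb_decoded.shape)  # (8, 8) (4, 4) (8, 8)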
Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.
The neatnik calendar is very nice. Others are talking about enhancements they've done and I've done my own, creating a pretty faithful JavaScript implementation with enhancements:
HN is designed to downweight sensational-indignant stories, internet dramas, and riler-uppers, for the obvious reason that if we didn't, they would dominate HN's frontpage like they dominate the rest of the internet. Anyone who spends time here (or has read https://news.ycombinator.com/newsguidelines.html) knows that this is not what the site is for. The vast majority of HN readers like HN for just this reason. It is not some arbitrary switch that we could just flip, if only we would stop being censoriously sinister. It's essential to the operation of the site.
At the same time, we downweight such threads less when the sensational-indignant story, drama, or riler-upper happens to be about YC or a YC-related startup. Note that word less. It means we "put our thumb on the scale" in the opposite direction you're implying: to make those stories rank higher than they otherwise would.
How you get from that all the way back to the notion that we moderate HN specifically to suppress negative stories about YC strikes me as escape-artist-level logic, and citing a web page that we ourselves publish as the best (only?) supposed evidence for this is surely a bit ironic.
Politicians don't say anything useful because everything they say is fodder for the media to find something to excerpt, distort, and then replay endlessly out of context as an attack. The media has forced politicians not to say anything.
Qu'on me donne six lignes écrites de la main
du plus honnête homme, j'y trouverai de quoi
le faire pendre.
If you give me six lines written by the hand of
the most honest of men, I will find something in
them which will hang him.
-- Cardinal Richelieu (attributed)
One of my favorite interview questions for senior positions is "Tell me about a decision you made that you would change in hindsight." Junior level people and people who are otherwise unfit for the role will try to give answers that minimize their responsibility or (worst case) have no examples. Senior level people will have an example where they can walk you through exactly how they messed up and what they would have done differently. Good senior level candidates examine their mistakes and are honest about them.
I'll just mention some that I have used and found good.
The drop-down visor, like Yakuake's, is great.
Instant Replay is handy for ephemeral text that gets wiped from the terminal, like TUI apps and scaffolding tools.
You can imagine that there's always something like Asciinema recording into a buffer, so you can stop and rewind to catch any output you missed.
The notifications are useful: I can start a long-running task, get on with other things, and get a macOS notification when that terminal rang a bell.
Global search is good, and searches across tabs. I also set a large scrollback buffer, so I can do a reverse incremental search for strings. You can also use the Triggers facility to highlight any string matches (or regex) whenever they occur in the terminal output. This is great when you are tailing a log and want to know immediately when an expression is output, alerting you that a condition has occurred.
Jumping up and down through the command entry points in a session is useful if there's a lot of output to cut through (I think the VS Code terminal also does this).
I've also used the toolbelt side-window when I want to repeat verbose commands on a host where I don't want to set up aliases. There is much more you can do with the toolbelt, including automatically capturing text that matches regex patterns.
There's a lot I haven't mentioned, but those are some features I can recall finding useful.
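As one concrete example of the bell-notification workflow: the little wrapper below is hypothetical (my own invention, not an iTerm2 feature). It runs whatever command you pass it and then emits the BEL character, which iTerm2 can surface as a macOS notification when bell notifications are enabled.

    # notify_done.py -- hypothetical helper: run a long command, then ring the
    # terminal bell so iTerm2 (with bell notifications enabled) can alert you.
    import subprocess
    import sys

    if len(sys.argv) < 2:
        sys.exit("usage: notify_done.py <command> [args...]")

    result = subprocess.run(sys.argv[1:])   # e.g.: python notify_done.py make -j8
    print("\a", end="", flush=True)         # BEL -> terminal bell -> notification
    sys.exit(result.returncode)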