Hacker News | et1337's comments

I’ve been driving Bluefin DX for a year or two. On the plus side, it works absolutely flawlessly. This is the longest I’ve ever run a Linux distro without an Nvidia driver update causing the whole thing to explode. It truly is the year of Linux on the desktop.

But I can’t say I recommend it for dev work. It wants you to do everything inside devcontainers, which I like in theory but in practice come with so many annoyances. It wants you to install Flatpaks but Flathub is pretty sparse. I ended up downloading raw Linux binaries into my home directory (which actually works surprisingly well. Maybe this is the future, hah)

I think next time I’ll just go with vanilla Fedora.


I also think there’s an interesting effect when cool functional language features like currying and closures are adopted by imperative languages. They make it way too easy to create state in a way that makes you FEEL like you’re writing beautiful pure functions. Of course, in a functional language everything IS pure and this is just how things work. But in an imperative language you can trick yourself into thinking you’ve gotten away with something. At one point I stored practically all state in local variables captured by closures. It was a dark time.

I'm actually fascinated by what you wrote. Why was it a dark time?

No encapsulation… huge functions with tons of local variables shared between closures… essentially global state in practice. I think at the time, objects with member variables felt “heavy” and local variables felt “light”. But the fact that they were so lightweight just gave me more opportunities to squirrel away state in random places with no structure around it. It really wasn’t all that horrific, and it helped me ship something quickly, but it wasn’t maintainable. These days I think the “heavy boilerplate” of grouping stuff into structs and objects forces me to slow down and think a bit harder about whether I really want to enshrine a new piece of state in the app’s data model. Most of the time I don’t.
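The contrast described above can be sketched in a few lines of Python (`make_counter` and `Counter` are invented names for illustration, not anything from the actual codebase):

```python
from dataclasses import dataclass

# The closure style: state hides in a captured local, reachable only
# through the functions that happen to close over it.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
    def value():
        return count
    return increment, value

# The "heavier" alternative: the same state, but named, grouped,
# and visible in the app's data model.
@dataclass
class Counter:
    count: int = 0
    def increment(self) -> None:
        self.count += 1

inc, val = make_counter()
inc()
c = Counter()
c.increment()
print(val(), c.count)  # both count to 1, but only one is inspectable
```

Both do the same work; the difference is that `Counter`'s state is discoverable and serializable, while the closure's `count` exists only in captured scope.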

I think the worst case is actually that the LLM faithfully implements your spec, but your spec was flawed. To the extent that you outsource the mechanical details to a machine trained to do exactly what you tell it, you destroy or at least hamper the feedback loop between fuzzy human thoughts and cold hard facts.


Unfortunately even formal specifications have this problem. Nothing can replace thinking. But sycophancy, I agree, is a problem. These tools are designed to be pleasing, to generate plausible output; but they cannot think critically about the tasks they're given.

Nothing will save you from a bad specification. And there's no royal road to knowing how to write good ones.


Right, there’s no silver bullet. I think all I can do is increase the feedback bandwidth between my brain and the real world. Regular old stuff like linters, static typing, borrow checkers, e2e tests… all the way to “talking to customers more”


Turn off your watch history. It disables the front page and shorts, but you can still watch any video you want and also follow your subscriptions. You still get recommendations next to each video but I find those much less problematic personally.


Unfortunately, with watch history off, YouTube still pushes Shorts in the subscriptions page (at least on mobile web, which is where I primarily use YouTube).


I find that a lot less problematic, as there are just very few Shorts in my feed; I've never been able to scroll through more than 5 or so without hitting ones I've seen before.


The Unhook browser extension gets rid of that. And optionally other things.


This was a fun one today:

% cat /Users/evan.todd/web/inky/context.md

Done — I wrote concise findings to:

`/Users/evan.todd/web/inky/context.md`%


Perfect! It concatenated one file.


To be fair, it was very concise


Based on my experience writing many games that work great barring the occasional random physics engine explosion, I suspect that trigonometry is responsible for a significant proportion of glitches.

I think over the years I subconsciously learned to avoid trig because of the issues mentioned, but I do still fall back to angles, especially for things like camera rotation. I am curious how far the OP goes with this crusade in their production code.


Yes, for physics engines I think that's a very good use case where it's worth the extra complexity for robustness. Generally I think if errors (or especially NaNs) can meaningfully compound, i.e. if you have persistent state, that's when it's a good idea to do a deeper investigation.


Your response is well-grounded: trig is trouble. Angles are often fine, but many 3rd-party library functions are not.

Have you ended up with a set of self-implemented tools that you reuse?


You can definitely handle camera rotation via vector operations on rotation matrices.
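A hedged sketch of what that can look like, assuming the standard look-at construction (function names are mine): the camera's orientation basis falls out of subtraction, cross products, and normalization, with no angles or trig functions anywhere.

```python
import math

def normalize(v):
    x, y, z = v
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at_basis(eye, target, up=(0.0, 1.0, 0.0)):
    """Camera orientation as three orthonormal axes, built purely
    from vector operations. Degenerates if `up` is parallel to the
    view direction, so guard for that in real code."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))
    right = normalize(cross(forward, up))
    true_up = cross(right, forward)
    return right, true_up, forward
```

Handedness conventions vary between engines (some take `cross(up, forward)` for the right axis), but the point stands: no sin, cos, or atan2 in sight.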


This video is a really cool dive into EUV for the uninitiated (me) https://youtu.be/MiUHjLxm3V0?si=kEPSicC2WXYhcQ6L


Or this video, which came out before Veritasium's

https://www.youtube.com/watch?v=B2482h_TNwg


https://youtu.be/NGFhc8R_uO4

Or this presentation which came out way long ago.


This is worth the (re)watch every time it comes up.


"I didn't want my name associated with this on the internet"


I didn’t, still don’t, but that’s a lost cause.

I’ll note that this video is way out of date…both in content and my skills as a speaker :P


Thanks for your presentation, I've watched it several times over the years. If your presentation skills are better now, hopefully you can make a new one.


Thanks for the informative presentation!


Thanks to the HN community: the video is how I ended up here, and it's one of the few social-media-esque sites I bother visiting. It taught me a pile of things about coding and CS that weren't in my mechanical engineering degree.


Glad to see Branch Education represented here.


I thought this video was a lot better than the Veritasium video. The Veritasium video was awkward. I think they tried to follow the formula from the (excellent) blue led video that performed so well, but it just didn't work.


Disagree, I thought the Veritasium video was fantastic. You understand how the machine works in depth, the history of its development and challenges it encountered, and hear from people actively working on it. It’s a science lesson and history lesson. Like usual, they keep the video engaging and focused on the story, while still keeping a lot of depth with the science. It’s a great format


Or this Asianometry video which came out even sooner.

https://youtu.be/MXnrzS3aGeM


> Thanks for mentioning ASML sponsoring this. I was about to buy an EUV machine from another vendor

lol


The whole “exploding tiny drops of metal” in the middle of this is just Looney Tunes. This machine is literally insane, and two of the companies I am long-long on would be completely fucked without it.


You forgot WITH LASERS, and IN A VACUUM


IIRC from the Veritasium video[0] there is actually some hydrogen gas flowing at quite a high speed through the laser chamber to carry away the tin debris so that it does not accumulate on the mirrors.

[0] https://www.youtube.com/watch?v=MiUHjLxm3V0


They account for every single tiny atom somehow too, but I think I fell asleep last time I watched the video.


The old SemiAccurate article https://semiaccurate.com/2013/02/13/euv-moves-forward-two-st... was very funny.


Seeing this news story made me briefly fear that they’d found a way to replace this glorious mechanism. Thankfully not. In fact, they’re going to shoot more droplets, more often!

So much more fun than LEDs.


Yes, it was crazy when I first heard about it: “wait, what? They shoot it in mid-air?” And that was before I found out they did that like 30k times a second.

But now 100k times a second apparently. Humans are amazing.


You have a machine that’s basically a clean room inside and one of the parts is essentially electrosputtering tin but then throwing all the tin away and using the EM pulse from the sputter to do work.

Oh and can you build it so it can run hundreds or thousands of hours before being cleaned? Thanks byyyyyyyyeeeeee!


The inside of those machines is far, far cleaner than the inside of any clean room ever entered by a human. They have to be molecularly clean.


Which isn't easy considering they explode tin droplets in the machine. I think that's the point the other commenter wanted to make.


Think about the purity requirements that places on the tin.


> We are going to spray expensive stuff in an extremely fine and precise line. Then we're going to shoot a laser at each droplet.

< Why?!

> To make a better laser.

< Yes, of course you are.

> 100,000 times per second.

< [AFK, buying shares.]


I have shares in one of their biggest customers, and one of their customer’s biggest customers.

We are quickly leaving the realm of dependent variables still looking anything like diversification.


> We are quickly leaving the realm of dependent variables still looking anything like diversification.

What does that mean?


It seems like you want someone to ask you what the two companies are. So - what are the two companies?


Nvidia and an AI company


Don't forget that they are hitting each droplet 3 times.


That is why each machine costs a few hundred million eurodollars.


The thing I didn't understand after watching that video was why you need such an exotic solution to produce EUV light. We can make lights no problem in the visible spectrum, and we can make X-ray machines easily enough that every doctor's office can afford one. What is it specifically about those wavelengths that is so tricky?


The efficiency of X-ray tubes is proportional to voltage, and is about 1% at 100 kV. This is the ballpark for garden-variety X-ray machines. But the wavelength of interest for lithography corresponds to a voltage of only about 100 V, so the efficiency would be 10 parts per million.

The source in the ASML machine produces something like 300-500 W of light. With an X-ray tube this would require an electron beam with 50 MW of power. When focused into a microscopic dot on the target, this would not work for any duration of time. Even if it did, the cooling and getting rid of unwanted wavelengths would be very difficult.

A light bulb does not work because it is not hot enough. I suppose some kind of RF-driven plasma could be hot enough, but considering that the source needs to be microscopic in size for focusing reasons, it is not clear how one could focus the RF energy on it without also ruining the hardware.

So, they use a microscopic plasma discharge which is heated by the focused laser. It "only" requires a few hundred kilowatts of electricity to power and cool the source itself.
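The back-of-the-envelope numbers above can be checked directly (these are the comment's rough figures, not measured values):

```python
# Efficiency of an X-ray tube scales roughly linearly with voltage:
eff_at_100kV = 0.01           # ~1% efficiency at 100 kV
v_lithography = 100.0         # ~100 V corresponds to the EUV wavelength
v_xray_tube = 100_000.0

eff_euv = eff_at_100kV * (v_lithography / v_xray_tube)
print(f"{eff_euv:.0e}")       # ~1e-05, i.e. 10 parts per million

# Electron-beam power needed for ~500 W of EUV at that efficiency:
beam_power = 500.0 / eff_euv
print(f"{beam_power / 1e6:.0f} MW")   # ~50 MW
```

So the 10 ppm and 50 MW figures in the comment are internally consistent.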


The issue isn't generating short-wavelength light, it's focusing it accurately enough to print a pattern with trillions of nanoscale features and few defects. We can't really use lenses, since every material we could use is opaque to high-energy photons, so we need mirrors, which still absorb a lot of the light hitting them. That only explains why we need all the crazy stuff ASML puts in its EUV machines to use near-X-ray light, but not why they don't use X-ray or higher-energy photons. I believe the answer is just that the mirrors they can use for EUV are unacceptably bad for anything higher, but I'm not sure.


Photoresist too. X-rays are really good at passing through matter, which is a bit of a problem when the whole goal is for them to be absorbed by a 100-nanometer-thick film. They tend to ionize stuff, which is actually a mechanism for resist development, but X-ray energies are high enough that the reactions become less predictable. They can knock electrons into neighboring resist regions or even knock them out of the material altogether.


It really is the specific wavelength. Higher or lower is easier. But EUV has tricky properties which make it feasible for lithography (although just barely, if you have a look at the optics) but hard to produce at high intensities.


Specifically, what makes x-rays easy to generate are these: https://en.wikipedia.org/wiki/Characteristic_X-ray In essence, smashing electrons into atoms allows you to ionize the inner shell of an atom and when an electron drops down from an outer shell, the excess energy is shed as high-energy photons. This constrains the energy range of X-ray tubes ("smash electron into metal") to wavelengths well below 13.5nm.

(These emission lines are also what is being used in x-ray spectroscopy to identify elements)


You can also generate broad spectrum bremsstrahlung radiation easily, this is widely used for medical X-rays.


Any source for this? I am hearing this for the first time.


It's easy to make X-rays, you just hit a metal target with electrons: https://en.wikipedia.org/wiki/X-ray_tube


You can hit metal the same way for EUV.


No you can't, or rather you only get a tiny amount at the correct wavelengths.


I assume this doesn't work well otherwise everyone would be doing it.


There is such a thing as X-ray lithography, but it comes with significant challenges that make it not really worth it compared to EUV.


I'd like to hear more about these challenges


There are no normal X-ray mirrors. The only way to focus them is to use special grazing mirrors, where the X-rays hit them almost parallel to the surface.

https://science.gsfc.nasa.gov/662/instruments/mirrorlab/xopt...


As I understand it, primarily because, due to the high energy of X-rays, the light interacts very differently with materials[1]. Primarily it gets absorbed, making it very difficult to build mirrors or lenses, which are crucial for lithography to redirect and focus the light on a specific minuscule point on the wafer.

The primary method is to rely on grazing-angle reflection, but that by definition only allows a tiny deflection at a time, nothing like a parabolic mirror or whatnot.

[1]: https://en.wikipedia.org/wiki/X-ray_optics


All of these problems, or equivalents, still exist in EUV. The litho industry had to kind of rethink the source and scanner because it went from all lenses to all mirrors in EUV. This is also why low-NA and high-NA EUV scanners were different phases.

As I hear it, the decision had a large economic component related to masks and even OPC.


100%. EUV barely works. X-ray litho takes all the issues with EUV and cranks them up to 11. It will take comparable effort to EUV, if not more, to get X-ray litho up and running, and I'm not aware of anyone approaching this with anywhere near the level of investment that ASML (and others) have pumped into developing EUV tech. We may get there eventually as a species, but we're a ways off.


If you think it barely works now, you should've seen it when we first started. Availability of a machine was "fuck you"% and the whole system was held together by duct tape, bubblegum and hope. Compared to that the current system is entirely controllable.


Oh, for sure, via herculean effort and investment we have created ourselves a functioning and economical process!

We do actually have functioning processes for X-ray litho today, but we'll need that same level (or more) of investment and effort to make it economical.


Stochastic effects become a bigger and bigger problem. At some point (EUV) a single photon has enough energy to ionize atoms, causing a cascade that causes effects to bloom outside of the illumination spot.


Here's your link without the surveillance

https://www.youtube.com/watch?v=MiUHjLxm3V0


With slightly less surveillance


Touché. Here's the link without surveillance

https://yewtu.be/watch?v=MiUHjLxm3V0


try duck player


https://www.youtube.com/watch?v=5Ge2RcvDlgw

Asianometry has lots of videos on ASML, this one is specifically about the light sources.


> https://youtu.be/MiUHjLxm3V0

PSA: the si (along with pp) parameter is used for tracking purposes:

    ?si=kEPSicC2WXYhcQ6L
consider cutting it whenever possible.
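A small sketch of doing the cut programmatically with Python's standard library (the parameter names come from the comment above; the function name is mine):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# The query parameters YouTube uses for tracking, per the comment above.
TRACKING_PARAMS = {"si", "pp"}

def strip_tracking(url: str) -> str:
    """Drop known tracking query parameters, keep everything else."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://youtu.be/MiUHjLxm3V0?si=kEPSicC2WXYhcQ6L"))
# -> https://youtu.be/MiUHjLxm3V0
```

`urlunsplit` omits the `?` entirely when the remaining query string is empty, so the cleaned short link comes out with no trailing punctuation.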


Asianometry has half a dozen or so videos if you want some really deep dives on the tech and industry (with sources, since we're on HN).


Okay this is weird.

> The key advancements in Monday's disclosure involved doubling the number of tin drops to about 100,000 every second, and shaping them into plasma using two smaller laser bursts, as opposed to today's machines that use a single shaping burst.

This is covered in that video. Did they let him leak their Q1 plans?


That has been covered before in other videos[0] that this is their roadmap to higher power, so I'm also not sure what they have announced now that wasn't previously announced.

[0]: https://www.youtube.com/watch?v=MXnrzS3aGeM


From the first video I thought they had already shipped this, but it sounds like they were describing what their new model was.

This seems like a product with a very very long sales pipeline, so I wonder if they work on pre-orders with existing customers but announce delivery milestones only as they come?


Highly recommend this video as well, he has a bunch more worth watching. https://youtu.be/rdlZ8KYVtPU?si=wgjkkNDSzuuS3lVK


One of those odd moments where a YouTube title looks like clickbait but is actually, factually correct.

+1 for this video, and the Branch education one. Well done to both teams.


As shown with that terrible speed of electricity video, Veritasium prefers "technically correct" over factually correct.


A personal finance app called “Predictable” that takes chaotic sloshes of money and turns them into steady streams of cash. You tell it “I receive this much money weekly/monthly/on the first and fifteenth/when Mercury is in retrograde, and I have these expenses at other various intervals” and it evens everything out into a constant weekly flow of cash by, essentially, buffering. Any overflow or underflow goes to a “margin” bucket which basically tells you how much you could spend right now and still have enough for all your recurring expenses.

Currently making it just for myself but curious if anyone else would find it useful.
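A rough sketch of the buffering mechanics described above (names and structure are my guesses at how such an app might work, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Recurring:
    amount: float        # positive = income, negative = expense
    period_weeks: float  # how often it recurs

def steady_weekly_flow(flows: list[Recurring]) -> float:
    """Even chaotic sloshes of money out into one constant weekly rate."""
    return sum(f.amount / f.period_weeks for f in flows)

def margin(balance: float, flows: list[Recurring]) -> float:
    """Rough spendable amount: current balance minus one full cycle of
    upcoming expenses (the buffer that keeps the flow steady)."""
    reserved = sum(-f.amount for f in flows if f.amount < 0)
    return balance - reserved

# Hypothetical example: monthly paycheck, weekly groceries, monthly rent.
flows = [Recurring(4000, 4), Recurring(-500, 1), Recurring(-1200, 4)]
print(steady_weekly_flow(flows))  # 1000 - 500 - 300 = 200.0 per week
```

The real scheduling problem is harder than this (actual due dates matter, not just periods), but the core idea is just amortizing every flow to a common time unit and holding back a reserve.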


I'd love something like that, with the added ability to basically split the margin bucket into multiple buckets (one for me, one for the wife).

The main issue I've had with budgeting apps continues to be pulling in up-to-date transaction data, which is necessary to know how much I can spend right now. There always seems to be problems with the data syncing. Apple Card is the worst, as you can only pull transactions via wallet on device.

I wish we could just use a single bank account at the Fed. The banking network is absolutely shit and there's basically a 1% tax on everything that goes to the rich for no good reason.

Budgeting was soooo much easier with cash – it's maddening all the data is there for real-time personal finance but it can't be accessed.


Didn’t we just have a front page article about the average founder age increasing well beyond 30 this year? Is it a non-normal distribution or what?


Tunguz shows early 40s as the median

https://tomtunguz.com/founder-age-median-trend/

YC trends younger given what they’re looking for


Lots of explanations with power here:

- There's a hard edge to the distribution that isn't far from 24 (I'd expect relatively few sub-18-year-old YC founders, but more 31+-year-olds)

- Older founders (with more experience, larger networks and less life flexibility) aren't a good fit for incubators.



I don't see how this article could possibly support the argument that C# is slower than GDScript

It compares several C# implementations of raycasts, never directly compares with GDScript, blames the C# performance on GDScript compatibility, and has a struck-out section advocating dropping GDScript to improve C# performance!

Meanwhile, Godot's official documentation[1] actually does explicitly compare C# and GDScript, unlike the article, which just blames GDScript for C#'s numbers. It claims that C# wins in raw compute while having higher overhead calling into the engine.

[1]: https://docs.godotengine.org/en/stable/about/faq.html#doc-fa...


My post could have been a bit longer. It seems to have been misunderstood.

I use GDScript because it’s currently the best supported language in Godot. Most of the ecosystem is GDScript. C# feels a bit bolted-on. (See: binding overhead) If the situation were reversed, I’d be using C#. That’s one technical reason to prefer GDScript. But you’re free to choose C# for any number of reasons, I’m just trying to answer the question.


At least in my case, I got curious about the strength of /u/dustbunny's denouncement of Godot+C#.

I would have put it as a matter of preference/right tool, with GDScript's tighter engine integration contrasted with C#'s stronger tooling and available ecosystem.

But with how it was phrased, it didn't sound like expressing a preference for GDScript+C++ over C# or C#++, it sounded like C# had some fatal flaw. And that of course makes me curious. Was it a slightly awkward phrasing, or does C# Godot have some serious footgun I'm unaware of?


Makes sense! I think dustbunny said it best: C# is “not worth the squeeze” specifically in Godot, and specifically if you’re going for performance. But maybe that’ll change soon, who knows. The engine is still improving at a good clip.

