I can understand why various unscrupulous entities and individuals would use AI to generate "slop" content to drive clicks/karma farm etc. But it's baffling to me when I ask someone a question and they respond saying they asked ChatGPT/Claude/etc. and then just share the full response. They seem to genuinely think this is something I wanted them to do.
It makes sense to me that a program currently written using multiple processes would now be re-written to use multiple truly parallel threads. But it seems very odd to suggest (as your grandparent comment does) that a program currently run in multiple containers would likely be migrated to run on multiple threads.
In other words, I imagine anyone who cares about the overhead from serialization, IPC, or process creation would already be avoiding (as much as possible) using containers to scale in the first place.
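To make the serialization point concrete, here's a minimal stdlib-only sketch (the data payload is made up for illustration): threads share the heap directly, while anything crossing a process boundary has to be pickled and copied, which is the overhead being avoided.

```python
import pickle
from concurrent.futures import ThreadPoolExecutor

data = {"rows": [list(range(1000)) for _ in range(100)]}

# Threads share the heap: the worker sees the very same object, no copy.
with ThreadPoolExecutor(max_workers=1) as ex:
    same = ex.submit(lambda d: d is data, data).result()
assert same

# Processes can't share Python objects; multiprocessing pickles arguments
# and results across the boundary, a full serialize/deserialize each way.
blob = pickle.dumps(data)
copy = pickle.loads(blob)
assert copy == data and copy is not data
```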
Yeah, I somehow glossed over the whole container thing.
The container thing might be about horizontal scaling, where 1 container runs on 1 instance with 1 vCPU. Running multiple processes per instance means you need beefier slices of compute to take advantage of the parallelism, and you can't cleanly scale up and then down, using only the resources you need.
If you have a queue distributing work, that model makes sense with single-threaded interpreters: consumer instances are spun up and down as needed. The alternative is pushing work to a thread pool (or to multiple instances, each with its own thread pool) that isn't inhibited by the GIL. The latter could be more efficient depending on the work.
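The thread-pool consumer model can be sketched in-process with just the stdlib (the worker function, sentinel shutdown, and squaring task are all made up for illustration; under a free-threaded, no-GIL interpreter these workers could run truly in parallel, while under the GIL they only overlap on I/O):

```python
import queue
import threading

tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    # Pull items until we see the shutdown sentinel (None).
    while True:
        item = tasks.get()
        if item is None:
            return
        with lock:  # protect the shared results list
            results.append(item * item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)  # one sentinel per worker
for t in threads:
    t.join()

assert sorted(results) == [n * n for n in range(10)]
```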
A bus route that is made twice as fast (whether through stop consolidation or some other means) will mechanically be twice as frequent, given the same number of service-hours. Slow buses are either more expensive to run or come less frequently.
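To make the arithmetic explicit (a back-of-the-envelope model, not from the original comment): with B buses on a route whose round trip takes T minutes, the headway between buses is h = T/B.

```latex
h = \frac{T}{B}
% Halving T at fixed B halves the headway (twice as frequent):
\frac{T/2}{B} = \frac{h}{2}
% Keeping the old headway h after halving T needs only half the fleet:
\frac{T/2}{B/2} = h
```

Either you get twice the frequency for the same cost, or the same frequency for half the cost, which is the trade-off the comment describes.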
I agree with your point about the naming being confusing. But regarding your mathematician friend's question (what would you lose by taking a fast Fourier transform over a normal Fourier transform?): the two aren't interchangeable. You would lose continuous time/frequency!
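Concretely, the continuous Fourier transform and the discrete Fourier transform (which the FFT computes) are different objects:

```latex
% Continuous Fourier transform: defined for all real frequencies f
\hat{x}(f) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt

% Discrete Fourier transform: N samples in, N frequency bins out
X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n / N}, \qquad k = 0, \dots, N-1
```

The FFT is just a fast algorithm for the second formula; it never sees continuous time or frequency at all.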
Some personal preference:
I find the grey text on a white background hard to read. Also, Firefox's reader mode doesn't render the MathML correctly, though that's probably a fundamental limit of reader mode. To read the page I zoomed in, but then there were CSS issues where the content overlapped the sidebar.
While |x| is common notation for the cardinality of a set, I've not really seen it used for the number of elements in a vector in the fields where I've worked with discrete Fourier transforms; I've always just defined N as the length of my vector. I honestly read it at first as the norm of x and the norm of F{x}, and thought you might be about to talk about Parseval's theorem.
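For what it's worth, Parseval's theorem for the DFT can be checked numerically with a naive stdlib-only transform (the `dft` helper and the sample vector are made up for illustration; the FFT computes exactly the same values, just faster):

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
X = dft(x)
N = len(x)  # the vector length the post denotes |x|

# Parseval for the DFT: sum |x_n|^2 == (1/N) * sum |X_k|^2
energy_time = sum(abs(v) ** 2 for v in x)
energy_freq = sum(abs(v) ** 2 for v in X) / N
assert abs(energy_time - energy_freq) < 1e-9
```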
Enjoyable enough and accurate article though. Thanks!
I used the "|x|" notation because I don't like introducing new names if I don't have to. Too bad the notation is ambiguous; I'll add a note about it.
If you right-click on the math blocks, you can change some of the parameters of the MathJax renderer. One feature I've found helpful is "click to zoom", which can be activated via `Math Settings -> Zoom Trigger -> Click`.
I tried changing the text color. How does it look to you now?
I find the new text color easier to read. I hope everyone else agrees, since you put in the work to change it.
Math notation is not great in general. There are canonical notations for some things, and sometimes they're overloaded. Not much to do about it other than be aware of it.
Annoyingly, you have to "know your audience" to get your math notation right for whoever you're presenting to. (You can never really do that on the Internet.)
As an electrical engineer who's done a lot of DSP and worked with mathematicians, I can point out things that look either odd or normal depending on who I'm talking to. You can never really win with notation; you'll always be wrong to someone =). But there are choices that are less wrong for one discipline or another.
All that to say keep writing! You're doing pretty well!