The ratio of computer-minutes per programmer-minute has indeed grown to an amazing number nowadays! I work in VFX (at RSP), and this fact is vividly illustrated for me all the time by the millions of thread-hours we go through on the renderfarm each week!
Despite all the astounding developments in AI/ML though, I still think there's a critical need for the application of human/biological imagination and creativity. Sure, the amount of leverage between thoughts and CPU cycles can be utterly giant now, but it doesn't seem to diminish the need (wherever performance or correctness/fewer-bugs are needed) for a full understanding of what the computer actually gets up to in the end.
For what it's worth, we do have an ML department at RSP and they are doing great! But I'm not sure we'd get very far if we tried to vibe-code the underlying pipeline, as it really requires full understanding of many interlocking pieces.
Agree, but can't we just include both average _and_ median? And maybe min/max while we're at it? Seems like that could give a much clearer picture (without even needing a graph!?)
Min & max are also dominated by outliers for most distributions, so probably you should instead look at P1 and P99 or something, and all of a sudden you're talking about 5 numbers when all you wanted was a quick point.
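To make the point concrete, here's a quick sketch (using numpy, with made-up latency-style data) of why min/max mislead while percentiles still summarize the bulk of a skewed distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
latencies = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)  # skewed data
latencies[0] = 500.0  # a single pathological outlier

print("mean  :", np.mean(latencies))
print("median:", np.median(latencies))
print("min   :", np.min(latencies))                  # one lucky sample
print("max   :", np.max(latencies))                  # dominated by the outlier
print("P1,P99:", np.percentile(latencies, [1, 99]))  # robust tail summary
```

The max reports the one outlier verbatim, while P99 stays close to where the real tail of the data lives.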
I totally loved the plasma effect from the first time I saw it, and implementing it myself in Pascal/DOS was one of the first times I really started to understand a 'shading'-like context: you come up with a value for every pixel, the pixels can be given 2D 'coordinates' (even though they are actually a 1D chunk of VRAM! -> modulo to the rescue!), and you can transform the 'space' such that you feed in the coordinates (including time) and evaluate different-enough sine functions (then sum them, in this case) to create a beautiful soft-waves-evolving-over-time result! It was definitely an eye-opener about how to make it have nice colors as well! Great to see things like this being documented in this way!
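The recipe described above (1D framebuffer, modulo to recover 2D coordinates, summed sines over x, y and time) can be sketched in a few lines of Python — the specific sine frequencies and weights here are just illustrative choices, not the original Pascal values:

```python
import math

W, H = 64, 32
t = 0.0  # animation time; increment each frame and recompute

framebuffer = [0.0] * (W * H)   # one flat 1D chunk, like VRAM
for i in range(W * H):
    y, x = divmod(i, W)         # modulo to the rescue: 1D index -> 2D coords
    v = (math.sin(x * 0.2 + t)
         + math.sin(y * 0.3 + t * 1.3)
         + math.sin((x + y) * 0.15 + t * 0.7)
         + math.sin(math.hypot(x - W / 2, y - H / 2) * 0.25))
    framebuffer[i] = (v + 4.0) / 8.0   # sum of 4 sines is in [-4, 4] -> [0, 1]
```

Each normalized value would then be mapped through a palette (a smooth color gradient) — that's where the "nice colors" part comes in.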
Thanks for that! This is amazingly faithful to the original! And although aliasing is pretty much my arch-enemy in life, here the aliasing and all the slightly-odd z-chatter stuff somehow adds to the charm!? Brilliant!
Yeah that was my original interpretation of the title too! Perhaps something like:
How did early humans understand their situation? What did they think the 'world' was like, and what did they think they should do with their lives?! I find it fascinating to think how that longing to know what it's all about has changed so much for humans over time.
Mirrors are still heaps interesting though, as is reflection/refraction/light-transport in general I'd say! But it wasn't about what I expected when I read it.
Indeed! I rather like the idea that solitons are something like the simplest self-propagating 'things' in any medium/computation-regime, so gliders in Game of Life might qualify, and in more complex/subtle systems they can have more complex behaviours as well (like bacteria, or flies? Hehe). Here's a fun example I made in gollygang/Ready (and Houdini) of PDE solitons that spin around with rippling wakes:
Only if they retain their original shape. The point is not that any wave is a soliton, but that a soliton never changes shape as it moves (through time, a medium, or whatever). The soliton can decrease in amplitude, and expand in width, but otherwise remains the same.
A pure, single Gaussian hump is the soliton for homogeneous linear media. If you create an audio pulse with the spectral shape of a Gaussian (and therefore also a Gaussian time shape), it might get quieter as it moves across the room, and longer, but will still "sound" the same.
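The "spectral shape of a Gaussian (and therefore also the time shape)" part rests on a classic fact: the Fourier transform of a Gaussian is another Gaussian. A quick numerical sanity-check of that (the pulse width of 40 samples is an arbitrary choice) — the log of a Gaussian is a parabola, so the log-spectrum near its peak should fit a quadratic almost exactly:

```python
import numpy as np

n = 1024
t = np.arange(n) - n // 2
pulse = np.exp(-(t / 40.0) ** 2)          # Gaussian hump in the time domain
spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse)))

f = np.arange(n) - n // 2                 # frequency bins, centered at 0
core = np.abs(f) < 30                     # look near the spectral peak
coeffs = np.polyfit(f[core], np.log(spectrum[core]), 2)
residual = np.log(spectrum[core]) - np.polyval(coeffs, f[core])
print("max quadratic-fit residual:", np.max(np.abs(residual)))  # tiny
```

The residual is essentially numerical noise, i.e. the spectrum really is Gaussian-shaped too.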
I believe so, although the way I usually think about solitons is as a single packet, so just one cycle of a wave. Continuous sound could probably be thought of as a continuous stream of solitons (I think people call them phonons when it's sound, though). I haven't studied PDEs or solitons in a formal way; I just love playing with them. Gray Scott with History and Wave (a formula I contributed to Gollygang/Ready) supports many fascinating soliton behaviours. Here's 25 mins of one of the strangest parameter settings I've found:
I'm not an expert and have not yet worked with splats; however, I understand that unlike super-sharp-edged triangles they can represent complicatedly-transparent 'soft' phenomena like fur or clouds that would ordinarily need to be rendered using semi-transparent curves/sheaths (for fur/grass) or voxels for cloudy things like steam/mist. I gather splats can also represent and reproduce a limited amount of view-dependent specularity; as other commenters have said, this is not dynamic and cannot easily deal with changing scene geometry or light sources. Still sounds like a fun research project to make it do more in terms of illumination though!
I've been interested in continuous automata for a good while! I usually call it "Reaction Diffusion" (good old Turing!), but a lot of people hearing that might think I just mean "Gray Scott", when I actually mean the whole realm of solving various types of equations (in feedback, of course) on continuous-valued fields. My software of choice for that is gollygang/Ready (where some of my experiments are available to play with in the release) and also Houdini, to use the resulting simulation data in whatever way one creatively desires!
I want to know more about an intuitive take on how the Zeta function does what it does! I know it must relate somehow to finding (or perhaps excluding) all the composite numbers, but I'd really love to get more of a feeling for what each 'octave' of the function is adding in. Seems like it must be something that 'flattens' the composites but increases sharply (in the infinite sum) at each prime.. but it's still a mystery to me how one could intuitively realise or discover that it's this specific function!? How did he do it?!
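One concrete handle on "how the sum knows about the primes" is Euler's product formula, zeta(s) = product over primes p of 1 / (1 - p**-s): expanding each factor as a geometric series and multiplying out generates every integer's reciprocal exactly once, because every integer factors uniquely into primes. A small stdlib-only numerical check of the two sides (the truncation limits 100_000 and 10_000 are arbitrary):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

s = 2.0
zeta_sum = sum(n ** -s for n in range(1, 100_000))   # sum over ALL integers

euler_prod = 1.0
for p in primes_up_to(10_000):                       # product over PRIMES only
    euler_prod *= 1.0 / (1.0 - p ** -s)

print(zeta_sum, euler_prod)   # both approach pi**2 / 6 ≈ 1.6449
```

So a sum over all integers and a product over only the primes converge to the same value — which is exactly the bridge that lets facts about zeta (like where it blows up or vanishes) say things about the primes.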