With all due respect, I'm fairly sure that anyone using "VM" the way you do here is really thinking of it as a container or something similar.
It's a runtime, and Go also has a similar, fairly fat runtime. It's just baked into the binary instead of being shipped separately. (Hell, even Rust has a runtime; it's just very, very lean compared to languages with a full GC like Go and Java.)
This is what started in Ukraine; this is modern warfare. Like most mass-produced "consumer" goods, you can now get a capable strike force for peanuts.
The Russians have taken close to 1.5 million casualties because of Ukraine's engineering of cheap drones. Putin really, really f-ed up his "3 day military operation".
VAC is actually an AI-based anti-cheat. I guess IF (a big if) it ever gets good enough, it will be better than any kernel-level AC, because it analyzes the gameplay, not the inputs, meaning a DMA cheat would also be caught.
"VAC" is a catch-all term for all of Valve's anti-cheating mechanisms.
The primary one is a standard user-mode software module that does traditional scanning.
The AI mechanism you're referring to is these days called "VAC Live" (previously, VACNet). The primary game it is deployed on is Counter-Strike 2. From what we understand, it is a very game-dependent stack, so it is not universally deployable.
I don't think that's what VAC is. I think VAC just looks for known cheat patterns in memory and such, and if it finds indisputable proof of cheating it marks the player for banning in the next wave. Maybe there is some ML involved in finding these patterns, but I think it's very strictly controlled by humans to prevent false positives. That's why VAC bans are irreversible: false positives are supposed to be impossible.
Valve has some AI detection stuff for CS2, but it’s remarkably ineffective. VAC itself delivers small DLLs that get manually mapped by the Steam service, do some analysis, and send the results to Valve (at least to the best of my knowledge; there may be more logic implemented in Valve’s games or in Steam/the Steam service).
This is pretty lame. I WANT to write code, something with a formal definition, and express my ideas in THAT, not in some ad hoc pseudo-English that an LLM puts a cowboy hat on and turns into whatever the hotness of the week is.
Programming is, in the end, math: the model is defined and, when done correctly, follows common laws.
The Swedish Gripen can do Mach 2 (2300 km/h) and does not need a traditional runway (500 meters of something "flat enough" will do). I assume it's way cheaper than something like this.
That doctrine works great for defending your homeland, when you are taking off from your roadside base and coming back home to a road-based airfield already on the map.
My understanding of these VTOL aircraft is they need to travel a long way, quickly, and set down in far less predictable conditions.
Why do you need hover? It's a pretty useless thing that requires shitloads of engineering (for a plane, not a heli). It sounds like a Hollywood-movie requirement, built for the purpose of burning taxpayer dollars.
Async Rust is the worst async out there. I prayed that Rust would not include async at all, but the JS devs pushed it through. That pretty much sealed my Rust use. I'm still salty.
You know what, I’ve heard people say this and thought “OK, maybe these other languages with GCs and huge runtimes really do something magical to make async a breeze”.
But then I actually tried both TypeScript and C#, and no. Writing correct async code in those languages is not any nicer at all. What the heck is “.ConfigureAwait(false)”? How fun do you really think debugging promise resolution is? Is it even possible to contain heap/GC pressure when every `await` allocates?
Of course it has drawbacks, everything does, but my practical experience has been hugely in favor of what Go is doing, at least in terms of cognitive load and code simplicity. It is very much worth it in many, many cases.
In .NET 11 C# async management moved to the runtime, mostly eliminating heap allocations and also bringing clean stack traces. You really only need to think about ConfigureAwait(false) when building shared libraries or dealing with UI frameworks (even there you mostly don't need it).
Sure, it’s really fine for what it does, but it is not significantly easier to deal with than Rust async, and remains fundamentally unsuited in several scenarios where Rust async works really well.
And embedded systems vastly outnumber classical computers. Every classical computer includes several microcontrollers; every car, modern appliance, camera, toy, etc. does too. Safe languages for embedded and OSes are very important. Rust just happens to be pretty good for other use cases too, which is a nice bonus. But that means the language can't be tied to a single prescribed runtime, and that it can't have a GC, etc.
Never heard of either. You will have to expand on your reasoning. Microcontrollers do outnumber classical computers though, that is just a fact. So I don't see why there is anything to disagree about there. Even GPUs have helper microcontrollers for thermal management and other functions.
They have been selling real-time, bare-metal Java runtimes for embedded systems, widely deployed across weapon systems on battleships, missile tracking units, factory automation, satellites, and similar systems, for decades.
I bet many of those helper microcontrollers are still assembly or compiler-specific C, and if there is Rust support, most likely only no_std fits, thus no async anyway.
Java on smartcards etc. is a thing. I haven't met anyone who used it and actually liked it. And it is apparently nothing like normal Java.
Many microcontrollers are indeed still running C, but things are starting to change. Espressif has official Rust support, for example, and other vendors are experimenting with it too. Many other microcontrollers have good community support.
> if there is Rust support, most likely only no_std fits, thus no async anyway.
This is just plain incorrect. The beauty of async in Rust is that it does work on no_std. You don't even need an allocator to use Embassy. Because async tasks are precisely sized at compile time, you can reserve space for them statically; you just need to specify with an attribute how many concurrent instances of a given task should be supported.
PTC and Aicas aren't Java on smartcards; they are Java on high-integrity computing, where human lives might be at stake.
Interesting how compiler specific extensions are ok for C, with a freestanding subset, or Rust no_std, but when it goes to other languages it is no longer the same.
> Interesting how compiler specific extensions are ok for C, with a freestanding subset, or Rust no_std, but when it goes to other languages it is no longer the same.
Not sure what you mean here. For Rust there is currently only one de facto compiler, though work is ongoing on gccrs (not to be confused with rustc_codegen_gcc, which only replaces the LLVM backend but keeps the rest of the compiler the same). Work is also ongoing on an official spec. But as it currently stands, there are no compiler-specific extensions.
If you mean the attribute I mentioned for Embassy: that is just processed by a Rust proc-macro, similar to how serde's derives are used to generate (de)serialization code. Serde too adds custom attributes on members.
It means that being constrained to a C compiler dialect for tiny CPUs instead of ISO C proper, or to no_std instead of the whole of Rust's capabilities and ecosystem, isn't seen in the same light as when C++, Swift, Go, Java, or whatever else is constrained for the same purpose.
Hm, I haven't come across such sentiment. It is fairly well understood that if you use C++ on embedded, for example, you are limited as to what features you can use. I remember coming across that when using PlatformIO for Arduino many years ago: certain parts of the STL were just missing.
The other languages you mentioned I have no personal experience of in embedded (and hardly outside embedded either), but I understand they are less common (apart from Java possibly, but only in certain niches). There is also Ada/SPARK and MicroPython (which always seemed more like an educational thing for people new to programming). I haven't used either.
I would like to add that no_std Rust feels less constrained than freestanding C to me. I haven't managed to figure out why exactly. Perhaps the ecosystem of crates in Rust for embedded is just better, with many crates offering a way to opt out of the std parts (plus a number of good libraries aimed directly at no_std). Perhaps it is that it is easy to add alloc to no_std if you are working on a higher-end embedded system (with obvious tradeoffs in code size, memory usage, etc.). Or perhaps the no_std parts of Rust simply contain more than freestanding C.