Hacker News

Do you mean compiling the .NET runtime (CoreCLR) itself, or AOT-compiling C# code? I recently tried AOT-compiled C#, and it builds quickly. And you generally don't need to compile the .NET runtime when embedding it, because you can just load the builds they ship.


Simply running a .NET DLL. I compiled a basic "Hello World" program both to a CIL .dll and to RV64 machine code on my x86 Linux machine (original Threadripper 2990WX), then ran each through different JITs/interpreters. Wall time:

.NET: 0.061s

qemu-riscv64: 0.006s

Spike: 0.031s

QEMU is a JIT with the Linux syscall layer built in (in native code).

Spike is a RISC-V interpreter with the Linux syscall layer provided by the interpreted RISC-V proxy kernel ("pk").

So the .NET runtime/JIT has quite a high overhead for startup, or for one-time code.
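A hypothetical reproduction of this startup-time comparison might look like the following (file names, project setup, and the cross-compiler name are assumptions, not from the original comment):

```shell
# Build a "Hello World" both ways (names hypothetical).
dotnet build -c Release hello.csproj            # produces a CIL hello.dll
riscv64-linux-gnu-gcc -O1 -static hello.c -o hello-rv64   # RV64 machine code

# Time the three execution paths on the x86 host.
time dotnet hello.dll          # .NET: runtime startup + JIT dominate a trivial program
time qemu-riscv64 ./hello-rv64 # QEMU user mode: JIT, syscalls handled natively
time spike pk ./hello-rv64     # Spike: interpreter, syscalls via the interpreted pk
```

For a program that exits immediately, almost all of the measured wall time is one-time setup cost, which is exactly what the numbers above are probing.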

For very compute-intensive code, .NET has an advantage, e.g. on my own primes benchmark (https://hoult.org/primes.txt, https://hoult.org/primes.cs):

.NET: 3.5s

qemu-riscv64: 10.2s

gcc: 9.7s (i.e. defaulting to -O0)

gcc -O1: 3.2s

.NET beats emulated RISC-V here, but the RISC-V emulator (running RISC-V code compiled with -O1) is pretty much as fast as a lazy person compiling C to native x86 (plain gcc, defaulting to -O0) gets.


Oh, so you meant startup time for the runtime, JIT, etc. That's not really relevant to usage in game engines here, because that cost is paid only once.

If you want to compare startup times, you should be using .NET's AOT compilation, so it doesn't need to load the full runtime or do any JIT compilation. See here: https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
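Per the linked docs, Native AOT compiles the app to a self-contained native binary at publish time; a minimal sketch (project name, target framework, and output path are assumptions):

```shell
# Publish with Native AOT enabled; the result is a native executable
# with no runtime load and no JIT work at startup.
dotnet publish -c Release -r linux-x64 /p:PublishAot=true

# Output path depends on the target framework moniker (net8.0 assumed here).
time ./bin/Release/net8.0/linux-x64/publish/hello
```

This makes the startup comparison apples-to-apples with the statically compiled RISC-V binary, since both are precompiled machine code.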



