
I would be very interested to see the compiler that can optimise away memset across a shared library boundary.


LLVM can and will do it. It assumes it knows what a function named "memcpy" (for example) does and optimizes accordingly. (Look at TargetLibraryInfo.cpp and grep for LibFunc::memset in, for example, SimplifyLibCalls.cpp.)

(That said, I think TheLoneWolfling is being too strong with his/her claims. You can get modern compilers to avoid dangerous optimizations; it's just not for the faint of heart.)


I never said you couldn't get a particular compiler to. Or indeed, all current compilers.

I am saying that it's impossible to do so and remain in the realm of portable C / C++.

There is a distinction.


Also: isn't that a bug? Is there something in a C / C++ standard that states that a function named "memcpy" (for example) is necessarily the normal function?


Compilers have been doing this for a long time. The optimizations that this enables are essential for performance. They shouldn't stop; if the spec prohibits it, the spec should change (and if it doesn't, the compilers should ignore the spec).


Good to know about the first and last part.

And as for the second part... Meh. I don't see any optimization enabled by hard-coding the meaning of something named "memcpy" (or whatever) that couldn't also be enabled by looking at the actual code that gets linked. Albeit with more difficulty.


A JITter would be able to do that. And the JVM can (and will) do the equivalent for Java code.

Remember: there is nothing that specifies that C / C++ needs to be compiled.

Also: you could have said the same twenty years ago about many of the optimizations that compilers currently do.



