
Thanks for the tips. gcc builtins are definitely the next level for me.

I'm curious about your scanf point, though. I'm using scanf/strdup before the check loop to load the words into memory. Are subsequent memcpy calls on a statically initialized char *dict[] array faster than on malloc'd memory for some reason?



Unfortunately, I have no empirical data to support the claim that hard-coding in a payload has an advantage over dynamic allocation. As an aside, the Python dictionary (a hash table, for you rubyists/perlfaces) works this way, in that it over-sizes itself to allow growth unless you constrain it (__slots__, I think?).

My hope and understanding is that the compiler's static analysis will lay out and align the given data as well as possible, whereas something like malloc() has to allocate on the heap at run time, alongside whatever other data also needs to live on the heap.

The point I was driving at, and what I tried to do, was to hard-code in as many known variables as possible so that the compiler could do its thing and be as efficient as it could with the given data. This includes the word list, the list of lengths, and the target hash -- none of which has to be done by hand; it can all be generated by a higher-level script run before compilation.

Good times.


That makes sense, I might test it sometime.



