
> Unfortunately that's not feasible for highly performance-sensitive projects like browsers or games.


It absolutely is. If most of your time is spent on calls traversing large portions of a code-base that size, then you have a far bigger problem: you'll be blowing your cache all the time. Fix that problem, and you're halfway to creating better-separated modules that can be encapsulated the way I described.


Can you point to an example where this has been done?


A 21-year-old book is dedicated to this subject:

https://www.amazon.com/Large-Scale-Software-Design-John-Lako...


> Unfortunately that's not feasible for highly performance-sensitive projects like browsers or games.

Bullshit. Code needs to be compiled, but that doesn't mean everything has to be rebuilt from scratch whenever someone touches a source file.

Additionally, not all code is on a hot path.


> Code needs to be compiled, but that doesn't mean everything has to be rebuilt from scratch whenever someone touches a source file.

Except for C++, where a tiny change to a single class can require recompiling every file that transitively includes that class's header.


Depends on the change and the thought given to header file dependencies.

PIMPL, forward declarations, pre-compiled headers, and binary libraries are all tools to reduce such dependencies.
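
As a minimal PIMPL sketch (hypothetical names, not from any particular project): the public header exposes only an opaque pointer, so editing the private members recompiles just one .cpp file.

    // widget.h -- stable public interface; private state hidden behind a pointer
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();                 // defined in widget.cpp, where Impl is complete
        void draw();
    private:
        struct Impl;               // forward declaration only
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp -- the only file that recompiles when Impl changes
    #include "widget.h"

    struct Widget::Impl {
        int x = 0, y = 0;          // add or remove fields freely here
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;
    void Widget::draw() { /* uses impl_->x, impl_->y */ }

The out-of-line destructor matters: std::unique_ptr needs Impl's complete type to delete it, and widget.cpp is the only place that has it.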


I think that recompiling the changed source files and re-linking should be all that's required if you only change source files and not headers. Headers implicitly convey the sizes, inheritance relationships, and other details that dependent code needs for compilation.

I suppose you could have some extra-aggressive optimizations that force inlining, but I haven't seen a need for this, even in game dev.
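
To sketch why headers must carry that information (hypothetical names): a forward declaration suffices wherever only a pointer or reference is involved, but the complete definition is needed as soon as the object's size matters.

    class Foo;                      // forward declaration: "a class Foo exists"

    Foo* find(int id);              // fine: a pointer's size doesn't depend on Foo
    void process(const Foo& f);     // fine: same for references

    // struct Bar { Foo member; };  // error: storing a Foo by value requires
    //                              // Foo's complete definition (its size)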


The stated numbers are for full rebuilds.


This would be the end of the discussion if it weren't for stupidity like this: https://groups.google.com/a/chromium.org/forum/#!msg/chromiu...

Generally, I find that when people crow about performance, the product they're talking about usually has questionable architectural/design/implementation decisions that dominate the performance issues, so I have to do my best not to roll my eyes.

Yes, you can write performant C++ using well-understood compiler firewalls, interfaces, etc. that reduce your compile time.
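
One such firewall, sketched with hypothetical names: an abstract interface in the header keeps every implementation detail (and its heavy includes) inside a single .cpp file, at the cost of a virtual call.

    // renderer.h -- pure interface; no implementation details leak out
    #include <memory>

    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual void drawFrame() = 0;
    };

    std::unique_ptr<Renderer> makeGLRenderer();   // factory hides the concrete type

    // gl_renderer.cpp -- the only file that recompiles when GLRenderer changes
    #include "renderer.h"

    class GLRenderer : public Renderer {
        void drawFrame() override { /* GL-specific code and includes live here */ }
    };

    std::unique_ptr<Renderer> makeGLRenderer() {
        return std::make_unique<GLRenderer>();
    }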


I once cut page generation times for a commercial CMS by 30% in half a day, just by skimming through their output generation code and changing std::string method invocations to get rid of unnecessary temporaries.
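
A sketch of the kind of change I mean (illustrative, not their actual code): chained operator+ materializes a temporary string per concatenation, whereas reserving once and appending in place does not.

    #include <string>

    // Before: each '+' allocates and copies into a fresh temporary.
    std::string renderSlow(const std::string& name, const std::string& value) {
        return "<div>" + name + ": " + value + "</div>";
    }

    // After: one reservation, in-place appends, no intermediate temporaries.
    std::string renderFast(const std::string& name, const std::string& value) {
        std::string out;
        out.reserve(13 + name.size() + value.size());  // "<div>" + ": " + "</div>"
        out += "<div>";
        out += name;
        out += ": ";
        out += value;
        out += "</div>";
        return out;
    }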

People very rarely have any clue about this at all.


Agreed - though I'd say that performance-sensitivity is more a function of the number of users than the application domain.

If a few hundred or a few thousand people each have to build Chrome from scratch a couple of times, and making their compilation process much slower makes each of a trillion pageviews a millisecond faster...the break-even point seems to be about a 27-hour build time sacrifice (10^12 pageviews x 1 ms = 10^9 seconds saved; spread over roughly 10,000 from-scratch builds, that's about 10^5 seconds, or 27-odd hours, per build).


Chrome has a staggering amount of C++ code. It's not all heavily hand-optimized. Probably very little of it is optimized at all, or needs to be.

They're relying on the compiler working its magic to make non-hand-optimized code run pretty fast. That's fine, but it requires you to expose a lot of stuff in headers, and that slows down compilation.

I'm fairly sure, like other commenters, that they could speed up compilation a lot and impact performance very little by carefully modularizing their header files. But that's a really big job.
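
To make that trade-off concrete (a hypothetical example, not Chrome code): a function body in a header can be inlined at every call site, but every includer must recompile when it changes; moving the body to a .cpp file reverses both effects, unless link-time optimization closes the gap.

    // vec.h -- body in the header: inlinable everywhere, but editing it
    // forces every file that includes vec.h to recompile.
    inline float dot(const float a[3], const float b[3]) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Alternative: declare here, define in vec.cpp. Then only vec.cpp
    // recompiles on a change, but callers pay an out-of-line call unless
    // LTO is enabled.
    // float dot(const float a[3], const float b[3]);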


The 90/10 rule still holds. Sometimes it's even more skewed.


Even Knuth talked about it being a 97:3 rule [1], and according to some it's gotten more skewed since then [2].

[1] http://sbel.wisc.edu/Courses/ME964/Literature/knuthProgrammi...

[2] http://blog.cr.yp.to/20150314-optimizing.html



