Hacker News

I'm not very familiar with the implementation of programming languages, so maybe the terminology is subtly different, but how is either of those situations not deterministic?


When calling out to other code you don't control, you lose determinism in the sense that, at the time you write your code, you can no longer predict exactly when objects get destroyed. The refcounting behavior of the code you're calling can change while still remaining correct, and that can change when and where your objects are destroyed. Accidentally relying on this has caused many OS-version compatibility problems on the Mac over the years.

For multithreading, I thought that would be fairly obvious. Once two or more threads hold ownership over a single object, you can no longer be sure which thread will perform the final decrement (at least in the general case) and so you don't know exactly when the object will be destroyed, or even which thread it will be destroyed on.
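A sketch of the multithreaded case, again in CPython. In real code the losing thread is decided by a race; here an Event deliberately forces the worker thread to perform the final decrement, so the finalizer demonstrably runs on a thread other than the one that created the object.

```python
import threading

destroyed_on = []

class Shared:
    def __del__(self):
        # Record which thread ran the finalizer.
        destroyed_on.append(threading.current_thread().name)

main_dropped = threading.Event()
box = []                   # the worker thread's handle to the shared object

def worker():
    main_dropped.wait()    # main has already released its reference...
    box.clear()            # ...so this is the final decrement: __del__ runs here

obj = Shared()
box.append(obj)            # two owners now: `obj` (main) and `box` (worker)
t = threading.Thread(target=worker, name="worker")
t.start()
del obj                    # main thread releases first
main_dropped.set()
t.join()

print(destroyed_on)        # the object was destroyed on the worker thread
```

Without the Event, either thread could end up running `__del__`, which is exactly the loss of predictability described above.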


As others have pointed out, it's deterministic in the sense that the memory is reclaimed when the ref count goes to zero, and the ref count is always well-defined. I would have used the word "predictable", in that `delete foo` will always release memory, but `foo.Decrement()` may or may not, and local reasoning may not suffice.
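In CPython terms, `del foo` plays the role of `foo.Decrement()` above: it always removes one reference, but whether memory is actually released depends on a global count that local reasoning can't always see. A minimal sketch:

```python
class Tracked:
    alive = 0              # count of live instances, for illustration
    def __init__(self):
        Tracked.alive += 1
    def __del__(self):
        Tracked.alive -= 1

a = Tracked()
b = a                      # a second reference to the same object
del a                      # a decrement, but not the final one: nothing freed
assert Tracked.alive == 1
del b                      # final decrement: __del__ runs now
assert Tracked.alive == 0
```

The behavior is fully deterministic, but whether a given `del` releases memory depends on references that may live far away from the line you're reading.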


It's still deterministic, strictly speaking. Maybe we need more subtle terminology here, like "fog of war" or "situational awareness" – as in "even with reference counting, the programmer may lose situational awareness of deallocation when calling into libraries beyond the fog of war."


In this case, I think this means it's not deterministic within the context of a single thread of execution.



