
Looks really cool.

If I understand it correctly, the first call to lthread_create() in the main thread will create a new pthread with a local scheduler. Each subsequent call to lthread_create() in that lthread creates local lthreads in the same scheduler, so in essence each lthread pool is actually single-threaded unless there is a non-exclusive or blocking operation you can wrap in lthread_compute_begin()/lthread_compute_end(). This is in contrast to something like Grand Central Dispatch, where you assign "tasks" to the scheduler, which then schedules them onto an available thread in a pool.



lthread_create() will not create a new pthread; rather, a local lthread scheduler gets created in the context of the calling thread. So if you want more than one lthread scheduler, you just create a pthread first, and the lthreads created in that pthread will be bound to it.
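A minimal sketch of that pattern, assuming lthread's public API (lthread_create() to spawn a coroutine in the calling thread's scheduler, lthread_run() to drive that scheduler); exact signatures may differ across versions:

```c
#include <pthread.h>
#include <stdio.h>
#include <stdint.h>
#include <lthread.h>

/* Runs as an lthread inside the scheduler of whichever pthread created it. */
static void worker(void *arg)
{
    printf("lthread %d resumed by its own thread's scheduler\n",
           (int)(intptr_t)arg);
}

/* One lthread scheduler per pthread: the first lthread_create() in this
 * thread sets up a scheduler bound to this thread, and lthread_run()
 * drives it until all of its lthreads have finished. */
static void *sched_thread(void *arg)
{
    lthread_t *lt = NULL;
    lthread_create(&lt, worker, arg);
    lthread_run();  /* blocks until this thread's lthreads complete */
    return NULL;
}

int main(void)
{
    /* Two pthreads => two independent lthread schedulers. */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, sched_thread, (void *)(intptr_t)1);
    pthread_create(&t2, NULL, sched_thread, (void *)(intptr_t)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```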

lthread_compute_begin()/lthread_compute_end() moves the lthread into a separate pthread and resumes it there. That pthread is called an lthread compute scheduler; its job is to resume lthreads that will take a relatively long time to finish a task. Compute schedulers are created as needed, and they stay alive for 60 seconds of inactivity before they die. If lthread fails to create a new pthread to resume the lthread (for example, because the max pthread count was reached), the lthread gets queued on the least busy compute scheduler. As compute schedulers get created, they act as a pool, accepting new lthreads and resuming them; when they cannot handle the load, the pool grows until the pthread limit is reached, after which jobs queue up.
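In use it looks something like this (a sketch assuming the lthread_compute_begin()/lthread_compute_end() API as described above; do_long_work() is a hypothetical placeholder for the blocking or CPU-heavy call):

```c
#include <lthread.h>

/* Hypothetical long-running task; stands in for a blocking syscall,
 * a big computation, a synchronous DB query, etc. */
extern void do_long_work(void *arg);

/* Runs as an lthread inside a single-threaded scheduler. */
static void fetch(void *arg)
{
    /* Migrate this lthread to a compute scheduler (a separate pthread)
     * so the long call below doesn't stall every other lthread sharing
     * this scheduler. */
    lthread_compute_begin();

    do_long_work(arg);

    /* Hop back to the original scheduler's pthread and continue
     * cooperatively. */
    lthread_compute_end();
}
```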

I believe this is close to what GCD does but probably not exactly the same.


Rather than queuing on the least busy scheduler, could they perhaps be queued centrally, waiting for the next available scheduler?

It isn't easy to predict when a long-running task will finish.


I have a requirement that I haven't been able to get rid of yet: I need to know which scheduler an lthread is going to run on before I let go of it. Once I find a way around that, I'll move to a global-queue model.



