The thing that burned me with asyncio primitives is that calling an async function doesn't even schedule it (by default). The pattern for fire-and-forget is impossible to guess when coming from any other language - although it's called out in the docs. You must also call create_task AND you must maintain a collection of tasks because otherwise the task might not run, because it can be garbage collected before running, AND you must clean up completed tasks from your collection.
Then someone should really update the official python docs that explain the fire-and-forget pattern (https://docs.python.org/3/library/asyncio-task.html#asyncio....)! I had a FastAPI server, and calling a particular endpoint is supposed to kick off some work in the background. The background work does very little CPU work, but does often need to await more work for several minutes, so it's a good fit for asyncio. How do you want it to be structured? (In other words, on the level of human requirements, it IS fire and forget.)
Concurrency based on coroutines is making use of cooperative multitasking, leaving the concurrent execution of tasks up to the runtime. To elaborate a bit on what that implies, let me just ask you the following question: Is there something less cooperative than a task that doesn't yield its control back to the main thread?
Regarding the fire-and-forget pattern I can think of at least two issues, let me illustrate them with the following example:
    import asyncio
    from typing import Any, Coroutine

    _background_tasks = set[asyncio.Task[None]]()

    def _fire_and_forget(coro: Coroutine[Any, Any, None]) -> None:
        task = asyncio.create_task(coro)
        _background_tasks.add(task)
        task.add_done_callback(_background_tasks.discard)

    async def _run_a() -> None:  # to illustrate issue A
        raise RuntimeError()

    async def _run_b() -> None:  # to illustrate issue B
        await asyncio.sleep(1)
        print('done')

    async def main() -> None:
        # issue A: Exceptions can't be caught
        try:
            _fire_and_forget(_run_a())
        except RuntimeError as exc:
            print(f'Exception caught: {exc}')

        # issue B: Task won't complete
        _fire_and_forget(_run_b())

    if __name__ == '__main__':
        asyncio.run(main())
Feel free to comment out either task in the main function to observe the resulting behaviors individually.
For issue A: Any raised error in the background task can't be caught and will crash the running main thread (the process)
For issue B: Background tasks won't be completed if the main thread comes to a halt
Choosing the fire-and-forget pattern is a deliberate choice to leave control of the runtime up to blind chance. So from an engineering POV, it isn't a solution-pattern for some real problem; it's rather a problem-pattern that demands a reworked solution.
Losing control of a background task (and therefore of the runtime) might be fine for some demo project, but I think you'll want to notice raised errors in any serious production system, especially for work that takes several minutes to complete.
> Is there something less cooperative than a task that doesn't yield its control back to the main thread?
Of course it does yield back to the main thread in my example, at each await point, just like any other cooperative task.
In my case, I specifically want an independent execution of a task. Admittedly, it has to catch its own exceptions and deal with them, as you pointed out, because that's part of being independent.
(Technically, in issue A it doesn't crash the running thread. The event loop catches the exception, but it complains later when the task is garbage collected. Issue B is fine for my use - when the event loop shuts down, it cancels remaining tasks, which is exactly right for my server.)
But if you release the O2 and convert it into diamond, then by my highly suspect back-of-the-envelope calculations, it'd be a diamond that would fit into one square kilometer, 87 meters high. It would make quite the tourist attraction.
If you do speak Hebrew, you would know that Netanyahu and Gallant have been heavily attacked by the extreme right specifically because they have been refusing to cut off food.
Not to pick on you, but every time this discussion happens on HN, someone argues that the nuclear power industry is burdened by far more red tape than other industries (probably true) and that if we simply reduce the red tape, we could profitably build new nuclear plants (probably true) and they would still be safe (probably not true). This isn't an engineering problem. This is a social problem. Suppose you offer to let people build with minimal regulation - the most profitable plants are going to be the ones that cut the most corners on safety. The great engineering team that made a safe but slightly more expensive reactor than the minimum allowed by regulation will be out of the market.
And unsafe nuclear is really unsafe in a politically terrible way. You are doomed to either have Chernobyls or a lot of non-optimal regulation, or excellent regulation in the world of spherical cows and frictionless planes.
Perhaps one of the new nuclear startups can find a solution to this, but it'll have to be by finding a way to mass produce nuclear within the existing heavy red tape regime. And in the real world, that's not a bad thing.
I am sure that new designs would be better on paper. I am also sure that a regulatory regime in which nuclear plants are allowed to be exactly as unsafe as fossil fuel power plants would somehow turn out in the real world of money and politics to actually be worse than fossil fuel plants.
I don't think we as humans know how to create a regulatory structure for nuclear that would keep away people who are willing to sacrifice principles for money, and at the same time allows new designs to easily be built.
Complete anecdata, but a few months ago a neighbor of mine successfully resuscitated another neighbor with CPR. There was an AED present, but from what I heard, it did not recommend a shock. The person who collapsed is around 70 years old. Sometimes, it does work! There clearly are some cases where it would be crazy not to try it.
CPR alone is very, very unlikely to reverse cardiac arrest. It's possible your neighbor lost consciousness, but wasn't actually without a pulse, and CPR just woke them up.
Is there any way to know if the model is "holding back" knowledge? Could it have knowledge that it doesn't reveal to any prompt, and if so, is there any other way to find out? Or can we always assume it will reveal all its knowledge at some point?
The real story in Israel is more complicated; the courts have taken power from the other branches; it's about time for there to be some kind of correction, and, as is often the case, the rebound might very well go too far in the other direction, ruining the possibility of a real system of checks and balances. Netanyahu himself may be going along with it to penalize the courts, but it's unlikely to help his court cases directly. The idea that it's supposed to somehow magically help him is a talking point for some of the political parties, but he would probably need some kind of new legislation to grant him immunity.
To hijack the thread a bit, if you are still with Dropbox, could you get them to implement what you did in #2 in the official Dropbox JS SDK? Right now it still does a pre-flight request for everything.
I'm having trouble reconciling the rest of the article with this:
> If a death occurred either on the 18 hottest or the 18 coldest days that each city experienced in a typical year, they linked it to extreme temperatures. Using a statistical model, the researchers compared the risk of dying on very hot and cold days, and this risk with the risk of dying on temperate days. They found that in Latin American metropolises, nearly 6%—almost 1 million—of all deaths between those years happened on days of extreme heat and cold.
So the 36 most extreme days out of 365 only account for 6% of all deaths? Meaning those days are safer than average?
The study did model excess deaths, not total deaths.
But to your point (from the study):
> The excess death fraction of total deaths was 0.67% (95% confidence interval (CI) 0.58–0.74%) for heat-related deaths and 5.09% (95% CI 4.64–5.47%) for cold-related deaths. The relative risk of death was 1.057 (95% CI 1.046–1.067%) per 1 °C higher temperature during extreme heat and 1.034 (95% CI 1.028–1.040%) per 1 °C lower temperature during extreme cold.
So of those 1M, it looks like 90%ish were cold-related (though more extreme temps are equally dangerous in both directions).
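The 90%ish split follows directly from the two fractions quoted from the study; as a quick sanity check:

```python
# Excess death fractions quoted from the study above
heat = 0.0067   # heat-related excess death fraction (0.67%)
cold = 0.0509   # cold-related excess death fraction (5.09%)

total = heat + cold
print(f'combined excess fraction: {total:.2%}')          # → 5.76%, the "nearly 6%"
print(f'cold share of excess deaths: {cold / total:.0%}')  # → 88%, i.e. "90%ish"
```

Note the combined 5.76% is the fraction of all deaths attributed as excess on extreme-temperature days, which also resolves the "safer than average" puzzle above: the study counts excess deaths, not all deaths occurring on those days.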