I find it nice to be able to simply make changes as I need them directly on the system, without going through any config management abstractions. But then I have a record of previous state I can revert to. It's simple and works well.
The standard work on flow-sensitive higher-order type inference proceeds from Olin Shivers's dissertation, but Oortmerssen has written a bit about his ideas on lambda-the-ultimate.
> So now that we know to look for them, it may be possible that we'll be able to find others, at least before the probe reports back from Proxima Centauri.
To me this is the most exciting part of the discovery. Previous research was quite pessimistic about our ability to observe these kinds of interstellar comets[1]. Finding the few interstellar comets among the many objects within our solar system takes real effort and specialized methods, and since previous estimates indicated we would not be able to observe many of them, looking did not seem worth the effort. Now that has changed.
Since previous estimates indicated that this discovery was unlikely, we can be reasonably sure those estimates were incorrect, which means it now seems worth the effort to begin looking for them.
With the LSST[2] coming online in the next year or so, our ability to observe such objects will be dramatically improved over current telescopes, including Pan-STARRS, which discovered this one.
[1] (Disclaimer: I am an author of a previous paper which concluded that these kinds of discoveries would be nearly impossible with current telescopes. Never have I been happier to be wrong.) My paper, along with several others, is referenced in the Nature letter.
This is really exciting news. Not only did we find an interstellar comet, but we found it using the Pan-STARRS telescope. With the LSST, the next generation of survey telescope, coming online soon, we can expect to find many more interstellar comets than previously expected.
This seems like an ineffectual measure. Instead of handing the domain to the individual nodes in the DDoS, I'd resolve it once and pound the IP until it changes.
A simple script could curl the page and check the content to verify it's still pointed at the right server, ignoring unroutable or inane IPs returned by the DNS. Something like the sketch below.
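For illustration, here's a minimal sketch of such a check. The domain and the content marker are hypothetical placeholders, and it assumes a plain HTTP page on port 80; the original comment specifies none of these.

```python
import ipaddress
import socket
import urllib.request

DOMAIN = "example.com"        # hypothetical target domain
MARKER = "expected marker"    # hypothetical string identifying the right server

def routable_ipv4s(domain):
    """Resolve the domain once and keep only globally routable IPv4 addresses."""
    infos = socket.getaddrinfo(domain, 80, family=socket.AF_INET,
                               proto=socket.IPPROTO_TCP)
    for *_, sockaddr in infos:
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_global:      # drops loopback, private, reserved, etc.
            yield str(ip)

def still_points_here(ip):
    """Fetch the page from the raw IP, supplying the Host header ourselves."""
    req = urllib.request.Request(f"http://{ip}/", headers={"Host": DOMAIN})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return MARKER in resp.read(65536).decode("utf-8", "replace")
    except OSError:
        return False

for ip in routable_ipv4s(DOMAIN):
    print(ip, "ok" if still_points_here(ip) else "moved or bogus")
```

Hitting the raw IP while setting the Host header by hand means the attacking node only touches DNS again when the content check fails.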
This was several years ago and I don't remember many of the specifics, but we had an issue where static content served from our site was being randomly truncated (polluting the cache, etc.).
We eventually traced the issue down to the Nginx server that was serving the files and one of its cache buffer size config options (I don't remember which one anymore). We noticed that if the file being served was larger than a certain size, it would occasionally be truncated, but not always. We tested increasing the buffer size by repeatedly doubling the default value, which was a power of two, up to a size of several GBs. But the files kept being truncated for some small percentage of requests. At that point we knew it wasn't directly related to the size of the buffer, since it was larger than any file being served. Finally someone suggested we try a value that wasn't a power of two, and the issue was gone.
We figured it was an internal bug in Nginx where it was growing an allocation buffer in powers of two but had an off-by-one error that didn't copy the second half of the buffer, or something like that. We dug through the code but never found anything, so we left the cache setting at the default power-of-two value plus one and never had an issue again.
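Purely as an illustration of that hypothesis, here's a toy sketch of the bug class being described. This is invented Python, not Nginx's actual C code:

```python
# Toy sketch of the hypothesized bug: a buffer that grows by doubling but
# mis-copies its old contents, silently losing the tail of the data.
def grow_and_copy(old: bytearray) -> bytearray:
    new = bytearray(len(old) * 2)  # grow in power-of-two steps
    # Intended: new[:len(old)] = old
    # Hypothetical off-by-one: the copy stops halfway through the old
    # buffer, so the second half of the data vanishes on every resize.
    half = len(old) // 2
    new[:half] = old[:half]
    return new

buf = grow_and_copy(bytearray(b"AB"))
print(bytes(buf))                  # b'A\x00\x00\x00' -- the 'B' is gone
```

A bug in this family only bites when a request actually crosses the resize boundary, which would fit the intermittent truncation, and nudging the configured size off a power of two could keep the hot path from ever hitting the bad branch.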