At this time, the competition is close to finishing, with no model having succeeded. Given the incentives for top labs, and the short time a successful automated solution would need, this gives a reliable upper bound on the capability of current models - better than any of the usual benchmaxed datasets.
What I would like to see is an easier version of this same format.
"I already decided in my private reasoning trace to resolve this ambiguity by emitting the string '27' instead of '22' right here, thus '27' has 100% probability"
I think you misunderstand induced demand. The increased demand when you add more capacity simply means that even more utilisation is achieved -> increased productivity. This is not a waste of time.
The correct comparison is, for each area and situation, which type of transport investment results in the greatest utility.
The objective is not to minimise traffic, but to maximise people's ability to get where they need to be.
By that logic: How can you actually increase utility for trains if you don't build a train? If public transportation is eschewed in favor of more roadways, then unsurprisingly the utility of the roads increases.
But many studies have shown that building more roads does not reduce congestion. Congestion is not a function of roadway space alone - other factors like accidents, merging, exiting, peak travel capacity, etc. greatly impact congestion. Building another lane on a highway doesn't solve the problem.
Lore dump: This is not a coincidence, as the characters themselves say the Chidori is named for sounding like a thousand birds. The alternate name used by Kakashi, Raikiri, is related to https://en.wikipedia.org/wiki/Tachibana_D%C5%8Dsetsu
The latter option is closest, but neither is quite right. It would have "known" that the problem as asked, combined with a phrase about a 15-line limit, has associations with a length of 12 lines (perhaps most strongly 12, but depending on temperature it could have given other answers). From there it is constrained to (complete) solutions that lead to 12 lines, out of the several (partial) solutions that already exist in the weights.
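The temperature dependence can be sketched with a toy softmax sampler. Everything here is illustrative: the logits over candidate line counts are made up, not taken from any real model.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample one key from a dict of logits via temperature-scaled softmax."""
    rng = rng or random
    keys = list(logits.keys())
    scaled = [logits[k] / temperature for k in keys]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(keys, weights=probs, k=1)[0]

# Hypothetical logits favouring "12 lines", with plausible neighbours.
logits = {10: 1.0, 11: 1.5, 12: 3.0, 13: 1.2}
```

At low temperature the peaked option (12) is sampled almost every time; raising the temperature flattens the distribution and other answers start to appear.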
Sharing of the model weights should now be fine.
There is a 12-hour-old post for the original pull request, which was dismissed on the basis of being unlikely to be considered; that post links the preferred URL.
EDIT: I may have misunderstood the reviewer's acceptance. The post should probably be deleted.
Gaussian blur essentially acts as a low-pass filter. Such a filter does not strictly destroy information in the filtered spectral components, but it does attenuate their power.
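The attenuated-but-not-destroyed point can be checked numerically. A minimal 1-D sketch, assuming numpy (kernel size and sigma are arbitrary choices for illustration):

```python
import numpy as np

# 1-D Gaussian blur kernel, normalised to unit sum.
n = 64
sigma = 2.0
x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

# Magnitude of the kernel's frequency response (kernel centred at index 0).
response = np.abs(np.fft.fft(np.fft.ifftshift(kernel)))

# The response peaks at DC and falls off steeply toward high frequencies,
# but never reaches exactly zero: the information is scaled down, not erased.
```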
Given a perfect blurred image, reconstruction is possible - however, due to the attenuation, these high-frequency components are *sensitive*.
Apart from the quantisation effects you mentioned (which limit perfect deconvolution), adding a little additive white Gaussian noise after the kernel is applied (such as by taking a photo of the image from across the room) obliterates the high-frequency features.
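A concrete 1-D sketch of this, assuming numpy (the spike "signal", sigma, and noise level are made up): a naive inverse filter recovers the noise-free blurred signal almost exactly, while a tiny amount of AWGN makes the same recovery blow up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Sharp "signal": a few spikes standing in for fine detail such as text edges.
signal = np.zeros(n)
signal[[40, 41, 128, 200]] = 1.0

# Gaussian blur kernel and its frequency response H.
x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2 * 2.0**2))
kernel /= kernel.sum()
H = np.fft.fft(np.fft.ifftshift(kernel))

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

def deconvolve(y):
    """Naive inverse filter: divide by H in the frequency domain."""
    return np.real(np.fft.ifft(np.fft.fft(y) / H))

clean_recovery = deconvolve(blurred)              # near-perfect reconstruction
noisy = blurred + rng.normal(0.0, 1e-3, n)        # tiny additive white noise
noisy_recovery = deconvolve(noisy)                # dominated by amplified noise
```

Dividing by H multiplies each noise component by 1/|H|, which is enormous exactly where the blur attenuated most, so the amplified noise swamps the detail being recovered.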
Recovery when noise is low (plus known glyphs) is why you should not use Gaussian blur followed by a screenshot to redact documents.
The inability to recover when there are artifacts and noise is (part of) why cameras cannot just set a fixed focus at some distance and deconvolve with the aperture's estimated width at each pixel to deblur everything that was out of focus.
TL;DR for readers: sufficient detail is unlikely to be recoverable via deconvolution here.