BooglyWoo's comments

Also sounds a bit like Thomson's Lamp.

EDIT: Whoops, that's pretty much the same thing as Zeno's paradox.


Because it's hard to unanimously and unequivocally identify those who are intolerant as such. Owing to the beguiling nature of language and rhetoric, the intolerant can thrive and propagate by arguing that they're not in fact intolerant, and appealing to "free speech". This seems to be the MO of this so-called "alt-right" stuff; legal or constitutional barriers are almost impossible to define or enforce.


If you want to ban reactionaries then you are pushing out an entire political category of thought that historically and arguably presently includes a huge portion of the West's intelligentsia. Might as well throw in Socialism or Liberalism while you're at it.

> Because it's hard to unanimously and unequivocally identify those who are intolerant as such.

> This seems to be the MO of this so-called "alt-right" stuff; legal or constitutional barriers are almost impossible to define or enforce.

These are fairly creepy statements. Very difficult to apply a charitable explanation to what you're saying.

> Owing to the beguiling nature of language and rhetoric, the intolerant can thrive and propagate

Sounds familiar.


I don't want to ban anything; I was just explaining how the potential effects of Popper's intolerance paradox cannot easily be solved with legislation.


I see, well I appear to have misinterpreted then. I'd edit my post to mention that but of course we can't.


My English is not so good so I hope that I understood you.

What I am trying to say is that there is no need to identify intolerant people. All that is needed is to make sure that neither intolerant nor tolerant people can limit free speech. Protecting free speech can be done without worrying that people will misinterpret laws. For example, a law saying that anyone wishing to publish something to their site can do so, regardless of whether they are anarchists, Nazis, conspiracy theorists, or anyone else. No matter how much hate the intolerant write, they will not be able to stop other people from writing about how much they hate the intolerant.


OK, there are multiple counterexamples from just the last couple of months in the USA: a woman who laughed at Jeff Sessions during his confirmation hearing is on trial for it (twice) http://thehill.com/blogs/blog-briefing-room/news/348857-woma... , and the US Justice Department subpoenaed the users of an anti-Trump website: http://www.esquire.com/news-politics/politics/news/a57033/ju...

So we have intolerant people in the government trying to suppress dissent from the tolerant, which is exactly the situation described in the first paragraph of the wiki link.


So wouldn't the best thing be to make sure that no one can restrict others' speech? If the law is as simple and broad as possible, then we don't have to worry about the "wrong" people gaining control of the government. For example, the right to bear arms is a very simple constitutional right. Many people oppose it, but because it is simple, deeply embedded in the law, and supported by a large group of people, it has been really hard to restrict.

If a law were created today listing the kinds of speech that should be suppressed, what topics do you think would be on it? With the current government, probably things like making it illegal to claim that climate change is happening.

If the left really wants to protect itself during the upcoming three years, I would expect it to try to make free speech as strongly protected as possible before it is too late.


The examples I gave show that it doesn't matter what the law is. We probably have the most extreme free-speech protections there are, encoded in the US Constitution as interpreted by the Supreme Court, and it still requires vigilance on the part of those who are not in power, plus a little cooperation from one of the three branches of government. If the judiciary sided with the executive branch's interpretation of these laws, it wouldn't matter what the US Constitution says; it would be whatever the party in power decides.


As another user, TheOtherHobbes, noted, humans share this characteristic with insects to some extent:

> Wild solo humans are only a little smarter than wolves individually, but being able to share and externalise invention and learning created a massive advantage.

This seems to resonate with a strain of philosophy of mind known as content externalism https://plato.stanford.edu/entries/content-externalism/ which holds that our mental content is distributed not only across the brain and body, but also across paper, computers, and relations with other people.


In this case, the guy has been spearheading a very high performance, innovative methodology for real-time object detection. Companies interested in that task should be falling over themselves to hire him.


Can't recommend the Mandelbrot book highly enough - he was a student of Paul Levy, and he wrote extensively about why Gaussian is not a good choice to model financial time series.


It's a good article in a lot of ways, and provides some warnings that many neural net evangelists should take to heart, but I agree it has some problems.

It's a bit unclear whether fchollet is asserting (A) that deep learning has fundamental theoretical limitations on what it can achieve, or (B) that we have yet to discover ways of extracting human-like performance from it.

Certainly I agree with (B) that the current generation of models is little more than 'pattern matching', and the SOTA CNNs are, at best, something like small pieces of visual cortex or insect brains. But rather than deriding this limitation I'm more impressed at the range of tasks "mere" pattern matching is able to do so well - that's my takeaway.

But I also disagree with the distinction he makes between "local" and "extreme" generalization, or at least would contend that it's not a hard, or particularly meaningful, epistemic distinction. It is totally unsurprising that high-level planning and abstract reasoning capabilities are lacking in neural nets because the tasks we set them are so narrowly focused in scope. A neural net doesn't have a childhood, a desire/need to sustain itself, it doesn't grapple with its identity and mortality, set life goals for itself, forge relationships with others, or ponder the cosmos. And these types of quintessentially human activities are what I believe our capacities for high-level planning, reasoning with formal logic etc. arose to service. For this reason it's not obvious to me that a deep-learning-like system (with sufficient conception of causality, scarcity of resources, sanctity of life and so forth) would ALWAYS have to expend 1000s of fruitless trials crashing the rocket into the moon. It's conceivable that a system could know to develop an internal model of celestial mechanics and use it as a kind of staging area to plan trajectories.

I think there's a danger of questionable philosophy of mind assertions creeping into the discussion here (I've already read several poor or irrelevant expositions of Searle's Chinese Room in the comments). The high-level planning, and "true understanding" stuff sounds very much like what was debated for the last 25 years in philosophy of mind circles, under the rubric of "systematicity" in connectionist computational theories of mind. While I don't want to attempt a single-sentence exposition of this complicated debate, I will say that the requirement for "real understanding" (read systematicity) in AI systems, beyond mechanistic manipulation of tokens, is one that has been often criticised as ill-posed and potentially lacking even in human thought; leading to many movements of the goalposts vis-à-vis what "real understanding" actually is.

It's not clear to me that "real understanding" is not, or at least cannot be legitimately conceptualized as, some kind of geometric transformation from inputs to outputs - not least because vector spaces and their morphisms are pretty general mathematical objects.

EDIT: a word


I similarly find myself frustrated with philosophy of mind "contributions" to conversations on deep learning/consciousness/AI. There seems to be a lot of equivocation between the things you label as (A) and (B) above, and a lot of apathy toward distinguishing between them. But (A) and (B) are completely different things, and too often it seems like critics of computers doing smart things treat arguments for one as if they were arguments for the other.

Probably the most famous AI critic, Hubert Dreyfus, said "current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon." But it is progress. Because by climbing a tree I've gained much more than height. I actually did move toward the moon. I've gained the insight that I'm using the right principle.


I think Bart's blood revitalised Mr Burns once.


Have you seen hypernom.com for visualizing 4 dimensions?


"Classical" CV and deep-learning CV needn't be opposing one another.

There are several cases in which the classical approach is emulated by deep networks - implementing the same carefully thought-out pipelines but in a way that leverages representations learned from huge datasets (which are undeniably very powerful).

Some examples are:

* Bags of convolutional features for scalable instance search https://arxiv.org/pdf/1604.04653.pdf

This paper treats each 'pixel' of a CNN activation tensor as a local descriptor, clusters them, and describes an image as a bag-of-visual-words histogram.

* Learned Invariant Feature Transform https://arxiv.org/abs/1603.09114v2

This paper very explicitly emulates the entire SIFT pipeline for computing correspondences across pairs of images.

* Inverse compositional spatial transformer networks https://arxiv.org/abs/1612.03897v1

This paper emulates the Lucas-Kanade approach to computing the transform between two images, using differentiable (trainable) components.

Also, don't forget that deformable part models are convolutional networks! https://arxiv.org/abs/1409.5403
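To make the first paper's idea concrete, here's a minimal numpy sketch of treating each spatial position of a CNN activation tensor as a local descriptor and pooling the descriptors into a bag-of-visual-words histogram. Everything here is illustrative: random arrays stand in for real activations, and random centroids stand in for the k-means vocabulary the paper learns over many images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a CNN activation tensor (channels, height, width);
# in the paper this would come from a conv layer of a trained network.
C, H, W = 64, 7, 7
activations = rng.standard_normal((C, H, W))

# Treat each spatial position as one C-dimensional local descriptor.
descriptors = activations.reshape(C, H * W).T          # shape (H*W, C)

# A fixed "visual vocabulary" of K centroids. The paper learns these
# with k-means; random centroids just illustrate the assignment step.
K = 8
vocabulary = rng.standard_normal((K, C))

# Assign each descriptor to its nearest visual word (Euclidean distance).
dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
words = dists.argmin(axis=1)                           # shape (H*W,)

# The whole image is then summarized as one L1-normalized histogram,
# which can be indexed/compared cheaply for instance search.
hist = np.bincount(words, minlength=K).astype(float)
hist /= hist.sum()
```

The appeal is that the final representation is a sparse, fixed-length vector, so the scalable retrieval machinery built for classical bag-of-words pipelines applies unchanged.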


Thank you for these great links. I'll add another interesting paper that carries on this emulation program:

Conditional Random Fields as Recurrent Neural Networks https://arxiv.org/abs/1502.03240

I hope more fruit comes out of the fusion of deep learning and graphical models.


I think these images are called "Lissajous Figures".


A Lissajous figure is what you get when both the X and Y positions of the beam (or virtual beam) are controlled by sinusoidal oscillators. This is using the same XY mode on the oscilloscope as used for drawing Lissajous figures but it's not a Lissajous figure itself.


Technically (loosely speaking) the ball (circle) in the game is a Lissajous figure: scaled sine and cosine signals drive the X and Y inputs of the oscilloscope to create it. It's one of several signals (mainly on the Y side) that are mixed down (in a manner similar to a simple op-amp audio mixer circuit) before being sent to the scope input.
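For the curious, here's a minimal numpy sketch of both cases (illustrative frequencies and radius, not the actual circuit values): a general Lissajous figure with different X/Y frequencies, and the equal-frequency sine/cosine special case that traces the ball's circle.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 1000)

# General Lissajous figure: both deflections are sinusoids, with
# different frequencies a, b and a phase offset delta.
a, b, delta = 3, 2, np.pi / 2
x = np.sin(a * t + delta)
y = np.sin(b * t)

# The ball is the degenerate case a == b with a 90-degree phase shift:
# cosine on X and sine on Y at the same frequency trace a circle,
# scaled down (and, in the real circuit, offset) to position the ball.
ball_x = 0.05 * np.cos(t)
ball_y = 0.05 * np.sin(t)

# Every point of the ball path lies on a circle of radius 0.05.
radii = np.hypot(ball_x, ball_y)
```

On real hardware the same thing happens in the analog domain: two oscillator outputs feed the scope's X and Y channels in XY mode, and the phase/frequency relationship between them determines the figure drawn.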


It's more like a custom homebrew analog computer with a few unusual additions.

Seriously impressive.

