Image Processing with Comonads (jaspervdj.be)
47 points by emillon on Nov 28, 2014 | hide | past | favorite | 10 comments


Why do Haskell programmers see it as a good thing to package their abstractions in syntactic sugar?

Stop doing that. It makes it incredibly difficult to participate -- you basically need to learn 3 syntaxes on top of Haskell's own to contribute to any non-trivial project.

I'm referring to the article the post's author recommends: http://www.haskellforall.com/2013/02/you-could-have-invented...


Which syntaxes are you referring to?

Without knowing which ones you mean, may I offer a suggestion: instead of treating these syntaxes as more overhead (feeling that you need to mentally parse them into whatever they de-sugar to), try to treat them as a chunking [1] opportunity.

So for example, when you see code in a monadic do block, don't try to mentally de-sugar it into the function calls it produces. Instead, think of it a bit like imperative code, where x <- someMonad reads roughly as x = someMonad(). That isn't really what's going on, of course, but in many cases it's close enough to let you use it and move on.

[1] http://en.wikipedia.org/wiki/Chunking_%28psychology%29
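A tiny sketch of that chunking advice (all names here are made up for illustration): the same Maybe-monad lookup written once with do-notation, where each `x <- action` line can be read imperatively, and once with the sugar removed.

```haskell
import qualified Data.Map as Map

-- Hypothetical lookup table, just for illustration.
ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 25)]

-- do-notation: read `ageA <- Map.lookup a ages` roughly as
-- "ageA = lookup(a)", with the Maybe monad handling failure.
pairAges :: String -> String -> Maybe (Int, Int)
pairAges a b = do
    ageA <- Map.lookup a ages
    ageB <- Map.lookup b ages
    return (ageA, ageB)

-- The same function with the sugar removed, using >>= directly.
pairAgesDesugared :: String -> String -> Maybe (Int, Int)
pairAgesDesugared a b =
    Map.lookup a ages >>= \ageA ->
    Map.lookup b ages >>= \ageB ->
    return (ageA, ageB)
```

Both versions are the same function; the chunked, imperative-looking reading of the first is usually all you need to get on with the surrounding code.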


Thanks. I am referring to, for instance, the 'method notation' introduced in that article.

I think I know what you mean by chunking. My understanding is that it's counter-productive, though -- much like how imperative programmers using 'frameworks' lose sight of the technical debt they incur by not knowing what their code actually does.

Maybe this line of thought isn't universal, but I'd presume most programmers would be afraid to submit code to a project whose notational mix and syntactic sugar they don't really understand.


Syntactic sugar in Haskell tends to be different from that in other languages. It's essentially just do-notation, do-notation follows monads, and monads are required to obey a hearty chunk of behavioral laws -- so what do-notation means is quite consistent across the board. That makes chunking far more effective.
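To illustrate the "consistent across the board" point: the very same do block can be written once over any monad, and the monad laws guarantee it behaves predictably in each one (the name `pairUp` is invented for this sketch).

```haskell
-- One do block, parameterised over any monad. The monad laws are
-- what let you read it the same way no matter which monad it runs in.
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = do
    a <- ma
    b <- mb
    return (a, b)
```

Run it in Maybe and it short-circuits on Nothing; run it in the list monad and it produces all combinations. The block itself never changes, which is exactly what makes chunking on do-notation safe.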


I generally agree with you about the evils of excessive syntactic sugar, though I'm not sure about this particular case (I'd have to, like, go read the blog article or something...). The general problem is that the interface optimal for the experienced user (terse, lots of syntax) is not the best one for learning. Every UI (and a programming language is an interface to the machine) must decide where it falls on that continuum.

My personal beef is the use of the LaTeX extended character set to typeset Haskell code in papers. It has the awesome effect of making it impossible to type in and run the code if you're not already familiar with Haskell. This greatly hampered my learning of FP.


I really want to agree with you about the extended character set -- much of it is purely cosmetic. But once you get to reading more advanced material, the ASCII overload begins and nice notation becomes key. Coq, Agda, and Idris all embrace it essentially out of necessity.


To me this argument isn't much different than saying "why do programmers see it as a good thing to package their code in abstractions?". There will be difficulty of participation for all non-trivial projects. Every substantial project builds up its own unique abstractions, and you have to learn them to contribute. I see this as an unavoidable fact of building complex systems.

I really don't see a meaningful difference between the more syntactically varied abstractions in Haskell and the more wordy abstractions you see in OO languages (like FooFactoryProxyBean). It's still something new that you have to learn. You're just not as familiar with the way people do it in Haskell. If you had never seen the terms factory, proxy, and bean before, you'd have exactly the same complaint that you have about Haskell abstractions.

Where Haskell possibly does differ is in the level of abstraction it can achieve. Some of the abstractions haskellers use are significantly more abstract than what you typically see in mainstream languages. Sure, it will be more difficult for a newcomer to comprehend. But in return we gain a lot of power and expressiveness that you don't get otherwise.

Oh, and to my knowledge Gabriel's "method notation" is not actually in use in real code. It seemed like more of a convenience notation he introduced for the purposes of that post. That kind of thing is pretty common in academic literature. And if it is being used somewhere, it's definitely not mainstream Haskell.


I don't know if anyone takes comonadic co-do syntax seriously. Do-notation lets you build up sophisticated monadic computations from simpler ones in a nice way, and that kind of sequencing is a common mode of use for monads. Comonads tend to get sequenced rather less often, though, so co-do notation ends up low on the power-to-weight ratio.
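For comparison, plain comonadic code reads fine without any co-do sugar. A minimal sketch (the real class lives in Control.Comonad; the `NE` type and `suffixSums` are invented here): `extend` applies a function to every "focused view" of a structure, which is the typical comonadic pattern.

```haskell
-- A minimal Comonad class, mirroring Control.Comonad's core methods.
class Functor w => Comonad w where
    extract :: w a -> a
    extend  :: (w a -> b) -> w a -> w b

-- Non-empty list: the head is the focus, the tail is the context.
data NE a = a :| [a] deriving (Show, Eq)

instance Functor NE where
    fmap f (x :| xs) = f x :| map f xs

instance Comonad NE where
    extract (x :| _) = x
    -- extend runs f at the current focus and at every later focus.
    extend f w@(_ :| xs) =
        f w :| case xs of
                 []       -> []
                 (y : ys) -> let (z :| zs) = extend f (y :| ys)
                             in  z : zs

-- A "local neighbourhood" computation: sum each suffix of the list.
suffixSums :: NE Int -> NE Int
suffixSums = extend (\(x :| xs) -> x + sum xs)
```

No special syntax needed: one call to `extend` expresses the whole computation, which is partly why co-do sugar buys so little.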


The comonads / OO article was interesting, but ultimately I got the same sense from it that I get from things like the construction of the reals from Cauchy sequences - neat, with fascinating theoretical implications, but brutally unuseful for practical applications...


Seeing as this was presented as an example of how comonads can be used in real-world scenarios: what's the performance like?



