You may be interested in going through "A General Theory of Reactivity"[0] from Kris Kowal, creator of the Q promise library. It discusses the relationships between streams, promises, iterators, and generators (although the streams he discusses are not how Node.js streams are implemented).
Cool, I'll have to take some time to read through more of that, but a quick skim actually made clear to me some of the differences I was missing, especially when combined with the other comments.
While I am currently trying to teach myself Haskell for exactly this reason (to understand the "real big picture" in FP), I don't know if there is much harm in starting the journey by exploring some FP concepts in a language one is already familiar with, however lacking that language may be in this regard. This presentation does point out many features that JavaScript lacks compared to "true FP" languages.
I, for one, don't know if I would have ever tried to start learning Haskell if I didn't discover some of the benefits of FP first hand through a familiar language.
You can indeed use transducers in asynchronous contexts. In fact, that is what intrigues me most about the abstraction.
Transducers are defined such that you can abstract away the context of input, output, and iteration and focus on the transformation of each element individually, independent of the source. Each transducer wraps another transformation in a "pipeline" (ending at the output sink) and accepts an input during an external iteration process. The implementation decides what to do with each value (map transforms it with a function, filter ignores certain values based on a predicate, etc.). You can also define transducers that send multiple values downstream for each input (think string.split) or some that do not send any until completion (think buffered results).
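As a rough sketch of this shape in JavaScript (the names here are illustrative, not any particular library's API): each transducer wraps the next step function in the pipeline and decides what to do with each input, including sending several values downstream for one input.

```javascript
// A transducer wraps the next step function in the pipeline.
const map = f => step => (memo, item) => step(memo, f(item));

// filter ignores items that fail the predicate by returning memo unchanged.
const filter = pred => step => (memo, item) =>
  pred(item) ? step(memo, item) : memo;

// One-to-many: send multiple values downstream for a single input
// (think string.split).
const splitWords = step => (memo, item) =>
  item.split(' ').reduce(step, memo);

// Drive the pipeline with an ordinary reduce and an array-appending sink.
const append = (arr, x) => (arr.push(x), arr);
const xform = splitWords(map(w => w.toUpperCase())(append));
const result = ['hello world', 'foo bar'].reduce(xform, []);
// result: ['HELLO', 'WORLD', 'FOO', 'BAR']
```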
Since the iteration source and sink are abstracted from the transformation, you can use the same transformation in other contexts (Promises, event streams, IO streams, CSP, etc.)
I've been experimenting with transducers in asynchronous contexts in JavaScript if anyone is interested, for example:
For another take on Reactive Programming from Erik Meijer, check out "Duality and the End of Reactive" [1] (video).
I watched it just last night and found it very thought provoking (so much that I stole some of the concepts to create transduce-async [2] this morning).
[Edit]: change "form of FRP" to "form of reactive programming" as I'm not sure I know the true notion of "FRP" based on tel's response.
You could probably say that this library implements a form of reactive programming using transducers with `asCallback` and `asyncCallback`. In fact, many ideas for this library were rooted in my exploration of ReactiveCocoa a few years back [1]. I think that transducers lead to a much cleaner implementation and I may resurrect that library to be based on underscore-transducer and add more reactive extensions. (Also, the 'r' in `_r` stands for "reactive".)
I really like the way transducers abstract the process of iteration away from the transformation; that separation is what makes this possible.
To be honest, FRP is a dicey term. The original definition was rather clear, arising from Elliott and Hudak's work on things like Fran and The Haskell School of Expression. It clearly spoke to a need to a) represent values as time-varying instead of fixed and b) encode "continuous time", and these features became known as FRP.
(I'd also add in Elliott's denotational design aspects, but those sort of become dicier with respect to arrowized FRP.)
Reactive Programming is a more general term and is often applied to any kind of synchronous (and sometimes even asynchronous) stream processing. I think it's utter buzz and appreciate Erik Meijer's talk[0] about what "reactive" tends to mean (push and pull streams, fine-tuned effect handling).
Now, if you're doing regular old stream processing then you can hook up whatever kind of event handlers you like, push and pull. Transducers form a framework for achieving some of these kinds of transformations as they encode one-to-many transforms (with ambient local state as available in Clojure and all the early termination business, but basically just a -> [b]). In some sense they're nearly "arrowized" as you're focusing on the transformation of "reactive" quantities, but they're certainly discretized and tied to sampling rates in their formulation.
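A rough JavaScript sketch of that "early termination business" and ambient local state (modeled loosely on Clojure's `reduced` mechanism; all names here are illustrative):

```javascript
// A wrapper that signals the driving loop to stop, like Clojure's reduced.
const Reduced = value => ({ __reduced: true, value });

// take(n) keeps ambient local state per pipeline instance and signals
// early termination once n items have passed through.
const take = n => step => {
  let remaining = n;
  return (memo, item) => {
    remaining -= 1;
    const next = step(memo, item);
    return remaining <= 0 ? Reduced(next) : next;
  };
};

// A reduce that honors the Reduced signal.
const transduce = (xform, step, init, coll) => {
  const rf = xform(step); // build the pipeline once (it may hold state)
  let memo = init;
  for (const item of coll) {
    memo = rf(memo, item);
    if (memo && memo.__reduced) return memo.value; // stop early
  }
  return memo;
};

const append = (arr, x) => (arr.push(x), arr);
const firstThree = transduce(take(3), append, [], [1, 2, 3, 4, 5]);
// firstThree: [1, 2, 3]
```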
Very interesting. I certainly have some reading to do.
Can you explain a little more the notion of discrete values and sampling rates and how that applies to stream processing? I assume that it applies to sampling values of a process over time, but what would be an example in computation where the value would be considered continuous? Is it similar to continuous vs. discrete signal processing?
It's very similar to continuous/discrete DSP. The classic example is the integral or the feedback loop. We often talk about circuits which have continuous logic in terms of integrators and feedback loops. Direct from [0] we have a computation of
Exp[t] = 1 + Integral[Exp[x], {x, 0, t}]
as
exp :: SF () Double
exp = proc () -> do
  rec let e = 1 + i
      i <- integral -< e
  returnA -< e
The integral is approximate (of course), but the algorithm holds up to approximation no matter what the sampling rate is---that can be chosen by the consumer of the algorithm. From [1] you have an example from a vision system where the algorithm is specified in physical terms w.r.t. the motion of vehicles within a video frame. Again, the sampling occurs when the FRP computation is executed, not when it's constructed.
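To make the sampling-rate independence concrete, here is a hypothetical JavaScript sketch that discretizes the equation Exp[t] = 1 + Integral[Exp[x], {x, 0, t}] with a simple Euler step; the sampling interval `dt` is an argument chosen by the caller, not part of the algorithm:

```javascript
// Euler approximation of e(t) = 1 + integral of e from 0 to t.
// The sampling interval dt is chosen by the consumer.
const expAt = (t, dt) => {
  const n = Math.round(t / dt); // number of discrete samples
  let e = 1; // e(0) = 1 + integral over an empty interval
  let i = 0; // running approximation of the integral
  for (let k = 0; k < n; k++) {
    i += e * dt; // accumulate the integral of e
    e = 1 + i;   // e(t) = 1 + integral
  }
  return e;
};

// Finer sampling rates converge toward Math.exp(1):
// expAt(1, 0.1)   ~ 2.59
// expAt(1, 0.001) ~ 2.717
```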
Probably the most tangible example for this audience is thinking about Javascript GUIs. A Javascript GUI might be thought of as depending upon continuous signals like the mouse position, current time, scroll position, etc and also a set of (instantaneous) events like mouse clicks, new data arriving from asynchronous requests, etc. The output is a continuous "state of the GUI" signal.
Ultimately, the actual mouse position and painting loops are discrete, obviously, but their sample rates may be chosen independently of the actual business logic.
[Edit]: explain step function and memo when mentioning map.
A transducer is a composable algorithmic transformation that is independent of input and output sources and the process of iteration [1].
The threading macro `->>` reorders execution of a list of forms, inserting the first form as the last item of the second, and so on. Composition of transducers applies transformations in the same order.
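A small JavaScript sketch of that ordering (the helpers are illustrative): plain function composition runs right-to-left, yet the composed transducer transforms each item left-to-right, matching the threading order described above.

```javascript
// Ordinary right-to-left function composition.
const comp = (f, g) => x => f(g(x));

const map = f => step => (memo, item) => step(memo, f(item));
const filter = pred => step => (memo, item) =>
  pred(item) ? step(memo, item) : memo;

const append = (arr, x) => (arr.push(x), arr);

// Increment first, then keep even results -- the same order you would
// write with a threading macro, even though comp itself is right-to-left.
const xform = comp(map(x => x + 1), filter(x => x % 2 === 0));
const threaded = [1, 2, 3, 4].reduce(xform(append), []);
// threaded: [2, 4]
```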
Essentially, all transducer transformations are defined as a series of steps, where each transformation may advance the step (0 or more times) using a function similar to what you pass `reduce`: `memo = step(memo, item)`. When you execute a transducer, you supply the step function, the initial memo, and each item during iteration. This allows the input, output, and iteration to be abstracted outside the transformation (these are implementation details normally provided by the library).
You can define `map` as a form of `reduce`: `memo = step(memo, mappingFn(item))`, which allows you to create a transducer for `map`.
Remember that the step function and initial memo are supplied outside the transformation. But, as an example, if you are transducing over arrays, the initial memo is an empty array, the step function appends each item to the array and returns the modified array, and the return value is used as the memo (result) of the next iteration of `step`. This step function is executed for every item in a source array using some process of iteration (normally a reduce, but does not have to be).
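The separation described above can be sketched in plain JavaScript (helper names are illustrative, not the API of any particular transducer library): the same map transformation is driven by two different externally supplied step functions and initial memos.

```javascript
// The transformation only knows about the step function it wraps.
const map = f => step => (memo, item) => step(memo, f(item));
const double = map(x => x * 2);

// Array context: the initial memo is an empty array, the step appends.
const appendStep = (arr, x) => (arr.push(x), arr);
const asArray = [1, 2, 3].reduce(double(appendStep), []);
// asArray: [2, 4, 6]

// Sum context: the initial memo is 0, the step adds.
const sumStep = (total, x) => total + x;
const asSum = [1, 2, 3].reduce(double(sumStep), 0);
// asSum: 12
```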
Transducers allow the abstraction of algorithmic transformations independent from the input and output, and even the process of iteration.
Whereas lodash operates on arrays and objects, calculating intermediate results, transducers simply define the transformation in terms of functions similar to what you pass `reduce`: start with a memo, execute a function with the memo and an item, and return the (possibly transformed) memo for the next iteration. Once you abstract the transformation away from the data, you can apply the same transformations to different processes that start with an initial value and step through to a result. One benefit is that you can compute the result in one pass (without intermediate results). Another is that you can use the same transformation in different contexts (lazy lists, indefinite sequence generation, CSP, event streams, etc.).
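A hypothetical JavaScript sketch of that reuse (helper names are my own): one composed transformation drives both an eager single-pass array reduce and an indefinite generator source.

```javascript
const comp = (f, g) => x => f(g(x));
const map = f => step => (memo, item) => step(memo, f(item));
const filter = pred => step => (memo, item) =>
  pred(item) ? step(memo, item) : memo;

// Keep multiples of 3, then square them -- defined once, no data in sight.
const xform = comp(filter(x => x % 3 === 0), map(x => x * x));

// Context 1: eager array reduce, one pass, no intermediate arrays.
const append = (arr, x) => (arr.push(x), arr);
const fromArray = [1, 2, 3, 4, 5, 6].reduce(xform(append), []);
// fromArray: [9, 36]

// Context 2: an indefinite generator, stepped on demand.
function* naturals() { let n = 0; while (true) yield n++; }
const rf = xform(append);
let memo = [];
for (const n of naturals()) {
  memo = rf(memo, n);
  if (memo.length >= 3) break; // the consumer decides when to stop
}
// memo: [0, 9, 36]
```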
The source could be anything that produces a sequence of values: streams, iterators, callbacks, immutable-js, etc. You simply have to define (external to the transducer) how you append each item to the supplied result. The "step function" that knows how to append items to the result is passed to the transducer, and the transducer executes the step function when reducing over results.
It's interesting you mention lodash in the context of transducers, as I have been developing my own take on transducers closely following the underscore API [1].
[0]: https://github.com/kriskowal/gtor