For that matter (not having seen the source interactions or the prompts), I wonder to what extent business owners see business relationships as negotiated rather than “picked from the shelf.”
When I’m dealing with small businesses, I tend to explain my frustrations long before I cancel, and offer them a chance to fix them. Whereas with an off-the-shelf product, there’s no point: I say “just cancel my subscription please and thank you.”
I could see that being coded as “confrontational,” but more often than not, the vendor and I fix what’s bothering us and continue our mutually beneficial relationship.
Oftentimes, I’m not the only customer with that pain, and fixing it with me has the happy side effect of making their product or service more attractive to others too.
By the time I do leave for good, that process has failed, so it doesn’t surprise me that there will be residual reasons for leaving…
Unlikely. The majority of businesses are bootstrapped with a loan.
Those that survive are often the result of an owner who had their hand on the wheel through some very desperate times, times which would have killed the business had they stopped micromanaging.
You could get search results on Yahoo. The directory results would come first, and then search results from their current “partner.” At one point it was Inktomi, the Berkeley company behind HotBot; at another point it was Google. Before them, one of the more generic engines.
It's Friday and the conference is Tuesday. It sounds like at least half their people are on the ground in Zambia already.
You'd take a conference a year in the making and shift it online over a weekend from your hotel room in a developing country? No you would not. I don't blame them for not doing that.
People build on Chromium for the same reason they build on Linux. I’d personally prefer if they built on illumos or BSD, but at a certain point people would rather spend their innovation budget higher up the stack and benefit from the platform that has the most open source engineers working on it.
It’s not binary. Some customers were willing and some weren’t. Even if the company was able to keep selling the item profitably, it may have reduced its total profits at the higher price point (fewer sales) and would gladly revert once the tariff is gone.
Businesses that raised prices to cover tariffs also saw reduced demand (axiomatically, that is what happens when you raise prices) and almost certainly made less total profit, since the increase went toward higher costs rather than margin expansion. I know we’re all supposed to be at each other’s throats these days, but the tariffs were a shared burden.
“However, a number of major issues with the study were identified by the Panel which made interpretation of the findings difficult. Notably, a high background incidence of chronic inflammatory disease in the lung and other organs was observed in all the animal groups including controls which did not receive aspartame, as reported by the European Ramazzini Foundation. This was considered to be a major confounding factor.”
Not a medical professional, but inflammation is something different from the cancer they mentioned on their website.
And we also need to understand the trial setup: in the 5G one, they exposed the rats for more than 20 hours to radio power more than 10 times the legal limit.
I think you and I agree. This is about the Italian aspartame cancer study you referenced (Ctrl-F “cancer”). This is EFSA saying the study has major issues and reiterating that aspartame is safe.
It goes beyond a foldable; it can be applied to streams. Clojure had foldables, called reducers; this was generalized further when core.async came along, so transducers can be attached to core.async channels and also used in places where reducers were used. The terminology is used to document what various contexts accept (chan, into, sequence, eduction, etc.). They exist to make the language simpler and more general. They could actually allow a bunch of old constructs to be dispensed with, but they came along too late to build the whole language around.
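A minimal sketch of that channel point, assuming core.async is on the classpath; the channel, transducer, and values are made up for illustration:

    (require '[clojure.core.async :as a])

    ;; one transducer, reusable across contexts
    (def xf (comp (filter odd?) (map inc)))

    ;; attached to a core.async channel: values are transformed (or dropped) on the way through
    (def ch (a/chan 10 xf))
    (a/>!! ch 1)
    (a/>!! ch 2)   ; filtered out
    (a/>!! ch 3)
    (a/<!! ch)     ;=> 2
    (a/<!! ch)     ;=> 4

    ;; the same transducer in other transducing contexts
    (into [] xf [1 2 3])      ;=> [2 4]
    (sequence xf [1 2 3])     ;=> (2 4)
    (eduction xf [1 2 3])     ; reducible/iterable view, realized on demand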
> It goes beyond a foldable; it can be applied to streams.
> Clojure had foldables, called reducers; this was generalized further when core.async came along, so transducers can be attached to core.async channels and also used in places where reducers were used.
Ok, you mean there's a distinction between foldables and effectful and/or infinite streams, so there's a natural divide between them in terms of interfaces such as (for instance) `Foldable f` and `Stream f e`, where `e` is the effect context. It's a fair distinction; however, I guess my overall point is that they all have applicability within the same kind of folding algorithms, which don't need a separate notion of "a composing object that's called a transducer" if you hop your Clojure practice onto a Haskell runtime where transformations are lazy by default.
The key insight behind transducers is that a ton of performance is lost not to bad algorithms or slow interpreters but to copying things around needlessly in memory, specifically through intermediate collections.
While the mechanics of transducers are interesting, the bottom line is that they allow you to fuse functions and basic conditional logic together in such a way that you transform a collection exactly once instead of n times, meaning new allocation happens only once. Once you start using them you begin to see intermediate collections everywhere.
Of course, in any language you can theoretically do everything in one hyperoptimized loop; transducers get you this loop without much of a compromise on keeping your program broken into simple, composable parts where intent is very clear. In fact your code ends up looking nearly identical (especially once you learn about eductions… cough).
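A rough sketch of what that fusion looks like in practice (the steps and numbers are arbitrary):

    ;; three logical steps expressed separately...
    (def xf (comp (filter even?)
                  (map #(* % %))
                  (take 3)))

    ;; chained seq functions build an intermediate sequence at each step:
    (->> (range 100) (filter even?) (map #(* % %)) (take 3) (into []))  ;=> [0 4 16]

    ;; the transducer version walks the input once and allocates only the final vector:
    (into [] xf (range 100))                                            ;=> [0 4 16]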
A transducer is returned by comp, and each item within comp is itself a transducer. You can see how the flow is exactly like the double threading macro.
map, for example, is called with one arg here, which means it will return a transducer; unlike in the first example, where it has a second argument (the coll posts), so it immediately runs over that and returns a new coll.
The composed transducer returned by comp is passed to into as the second of three arguments. In the three-argument form, into applies the transducer to each item in coll, the third argument. In the two-argument form, as in the first example, it just puts coll into the first argument (also a coll).
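To make the arities concrete, here is a small sketch; the posts data below is invented, not the collection from the earlier example:

    (def posts [{:likes 3} {:likes 0} {:likes 7}])

    ;; called with one arg, filter and map return transducers; comp chains them
    (def xf (comp (filter #(pos? (:likes %)))
                  (map :likes)))

    ;; three-arg into: apply the transducer to each item of the third argument
    (into [] xf posts)              ;=> [3 7]

    ;; two-arg into: just pour one collection into another
    (into [] (map :likes posts))    ;=> [3 0 7]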
That does not sound like a good example. The two-argument form of `map` already returns a lazy sequence. Same for `filter`. I thought lazy sequences are already supposed to get rid of the performance problem of materializing the entire collection. So…
Lazy sequences reduce the size of intermediate collections, but they “chunk”: you get 32 items at a time; multiply that by however many transformations you have, and obviously by the size of the items.
There are some additional inefficiencies in terms of context capturing at each lazy transformation point. The problem gets worse outside of a tidy immediate set of transformations like you’ll see in any example.
This article gives a good overview of the inefficiencies; search on “thunk” for the tl;dr: https://clojure-goes-fast.com/blog/clojures-deadly-sin/ (I don’t agree with its near condemnation of the whole lazy pattern; laziness is quite useful, and we can complain about it because we have it. It would suck if we didn’t.)
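A quick REPL sketch of the chunking behavior (the counter is only there to make the realization visible):

    (def realized (atom 0))

    ;; ask for one item, but the chunked seq realizes a whole chunk behind the scenes
    (first (map (fn [x] (swap! realized inc) x) (range 100)))
    @realized   ;=> 32, not 1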
So what’s your coding style in Clojure? Do you eschew lazy sequences as much as possible and only use either non-lazy manipulation functions like mapv or transducers?
I liked using lazy sequences because they’re more amenable to breaking larger functions into smaller ones and they decrease coupling. One part of my program uses map, and a distant part of it uses filter on the result of the map. With transducers it seems like the way to do this is eductions, but I avoided them because each time an eduction is used it reevaluates each item, so it’s sacrificing time for less space, which is not usually what I want.
I should add that I almost always write my code with lazy sequences first because it’s intuitive. Then maybe one time out of five I re-read my code after it’s done and realize I could refactor it to use transduce. I don’t think I’ve ever used eduction at all.
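For what it's worth, the reevaluation is easy to see at the REPL; this sketch just counts how often the transformation actually runs:

    (def hits (atom 0))
    (def ed (eduction (map (fn [x] (swap! hits inc) x)) (range 3)))

    (into [] ed)   ;=> [0 1 2]
    (into [] ed)   ;=> [0 1 2]
    @hits          ;=> 6, the work was done once per consumption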
It's evolving, and I'm using transducers more over time, but I'm still regularly in situations where a simple map or mapv is all I need.
Lazy sequences can be a good fit for a lot of use cases. For example, I have some scenarios where I'm selecting from a web page DOM and most of the time I only want the first match, but sometimes I want them all; laziness is great there. Or walking directories in a certain order, where the number of items they contain varies, so I don't know how many I'll need to walk, but I know it's usually a small fraction of the total. Laziness is great there.
This can still work with transducers - you can either pass a lazy thing in as the coll to an eager transducing context (maybe with a "take n" along the way) or use the "sequence" transducing context which is lazy.
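Something like this, where lazy-files is a made-up stand-in for a lazy directory walk:

    (def lazy-files (map #(str "file-" %) (range)))     ; infinite lazy seq

    ;; eager transducing context, but take stops consuming the source early
    (into [] (take 3) lazy-files)                        ;=> ["file-0" "file-1" "file-2"]

    ;; or stay lazy with the sequence transducing context
    (take 2 (sequence (map #(.toUpperCase ^String %)) lazy-files))
    ;=> ("FILE-0" "FILE-1")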
I tend to reach for transducers in places in my code where I'm combining multiple collection transformations, usually with literal map/filter/take/whatever right there in the code. Easy wins.
Recently I've started building more functions that return either transducers or eductions (depending on whether I want to "set" / couple in the base collection, which is what eduction is good for) so I can compose disparate functions at different points in the code and combine them efficiently. I did this in the context of a web pipeline, where I was chaining a request through different functions to come up with a response. Passing an eduction along, I could just nest it inside other eductions when I wanted to add transducers, then realize the whole thing at the end with an into and render.
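A hypothetical sketch of that pattern; the function and key names here are invented:

    (defn only-active [users]
      (eduction (filter :active?) users))

    (defn display-names [users]
      (eduction (map :name) users))

    (defn respond [users]
      ;; each step wraps the previous eduction; nothing is realized until the into
      (into [] (display-names (only-active users))))

    (respond [{:name "a" :active? true}
              {:name "b" :active? false}])
    ;=> ["a"]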
Mentally it took me some time to wrap my head around transducers and when and how to use them, so I'm still figuring it out, but I could see myself ending up using them for most things. Rich Hickey, who created Clojure, has said that if he had thought of them near the beginning, he'd have built the whole language around them. But I don't worry about it too much; I mostly just want to get sh-t done, and I use them when I can see the opportunity to do so.
Performance is one of the niceties of transducers, but the real benefits are from better code abstractions.
For example, transducers decouple the collection type from data-processing functions. So you can write (into #{} ...) (a set), (into [] ...) (a vector) or (into {} ...) (a map) — and you don't have to modify the functions that process your data, or convert a collection at the end. The functions don't care about your target data structure, or the source data structure. They only care about what they process.
The fact that no intermediate structures have to be created is an additional nicety, not really an optimization.
It is true that for simple examples the (-> ...) is easier to read and understand. But you get used to the (into) syntax quickly, and you can do so much more this way (composable pipelines built on demand!).
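For instance (the data and keys here are made up):

    (def process (comp (filter :valid?)
                       (map (juxt :id :score))))

    (def rows [{:id 1 :score 9 :valid? true}
               {:id 2 :score 4 :valid? false}])

    (into []  process rows)   ;=> [[1 9]]    vector
    (into #{} process rows)   ;=> #{[1 9]}   set
    (into {}  process rows)   ;=> {1 9}      map: each [k v] pair becomes an entry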
I'd argue that for most people performance is the single best reason to use them. The exception is if you regularly use streams/channels and benefit from transforming inside of them.
To take your example, there isn't much abstraction difference between (into #{} (map inc ids)) vs (into #{} (map inc) ids), nor is there a flexibility difference. The non-transducer version has the exact same benefit of allowing specification of an arbitrary destination coll and accepting just as wide a range of things as the source (any seqable). Whether in a transducer or not, inc doesn't care about where its argument is coming from or going. The only difference between those two invocations is performance.
Functions already provide a ton of abstraction, and the programmer will rightly ask, "why should I bother with transducers instead of just using functions?" (i.e., other, arbitrary functions not of the particular transducer shape). The answer is usually going to be performance.
For a literal core.async pipeline, of course, there is no replacing transducers because they are built to be used there, and there is a big abstraction benefit to being able to just hand a transducer to the pipeline or chan vs. building a function that reads from one channel, transforms, and puts on another channel. I never had the impression these pipelines were widely used, but I'd love to be wrong!
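A minimal sketch of handing a transducer to a pipeline, assuming core.async is available; the channels, buffer sizes, and parallelism are arbitrary:

    (require '[clojure.core.async :as a])

    (def in  (a/chan 10))
    (def out (a/chan 10))

    ;; 4 = parallelism; the transducer does the transform, no hand-rolled go-loop needed
    (a/pipeline 4 out (comp (filter odd?) (map inc)) in)

    (a/>!! in 3)
    (a/<!! out)   ;=> 4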
They're not really that interesting. They're "reduce transformers". So, take a reduction operation, turn it into an object, define a way to convert one reduction operation into another and you're basically done. 99% of the time they're basically mapcat.
The real thing to learn is how to express things in terms of reduce. Once you've understood that, just take a look at e.g. the map and filter transducers and it should be pretty obvious. But it doesn't work until you've grasped the fundamentals.
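For anyone curious, here is roughly what that looks like; these are simplified sketches of map and filter as transducers, not the real core implementations:

    ;; a transducer is a function from one reducing fn (rf) to another
    (defn my-map [f]
      (fn [rf]
        (fn
          ([] (rf))                                  ; init
          ([result] (rf result))                     ; completion
          ([result input] (rf result (f input))))))  ; step: transform, then hand off

    (defn my-filter [pred]
      (fn [rf]
        (fn
          ([] (rf))
          ([result] (rf result))
          ([result input] (if (pred input)
                            (rf result input)
                            result)))))

    (transduce (comp (my-filter odd?) (my-map inc)) conj [] (range 5))
    ;=> [2 4]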
You seem to have equated “more likely to terminate with critical comments” to “worst.” Seems pretty reductive.