One of my own favorite explanations of meter in English is Timothy Steele's All the Fun's in How You Say a Thing: An Explanation of Meter and Versification [0].
I'd be interested to hear if anyone has other recommendations.
I've been programming for a long time and had to learn R a few years back for a series of projects. The documentation and community seemed to me, as a newcomer, to be very focused on how to solve particular statistical and modeling problems, and I was almost always able to do what needed to be done relatively quickly. (Python documentation and communities, in contrast, span all kinds of applications that can be difficult even for an experienced programmer learning Python to sort through and evaluate. I can also imagine the migration to 3.x has been a challenge for newcomers.)
Although I enjoyed learning and using R, as a CS person I was bothered that I understood how to do X in R, but I had no clue about what was happening when I did X. I found this paper to be particularly useful in describing R from a CS perspective: http://www.lirmm.fr/~ducour/Doc-objets/ECOOP2012/ECOOP/ecoop...
In case you are a casual observer or are new to Scheme and are wondering why this might be particularly important to the language, Wikipedia explains:
"The R6RS standard has caused controversy because it is seen to have departed from the minimalist philosophy [of scheme]. In August 2009, the Scheme Steering Committee ... announced its intention to recommend splitting Scheme into two languages: a large modern programming language for programmers, and a subset of the large version retaining the minimalism praised by educators and casual implementors;"[1]
Perhaps the history of Scheme standardization itself has something to teach us about the challenges and rewards of openly balancing a wide range of interests and agendas over the long term?
To some extent I think the R7RS-large effort is a bit misguided. I believe successful "large languages" (as in, ones that come with a big "standard library") need a large team of professional programmers to implement it all (professional in the sense of being directed by some organization, so that they work on what the organization prioritizes rather than on whatever they fancy), and preferably a single implementation, so that all the implementation effort can be concentrated in one place. Successful examples of this approach: Java, .NET.
Scheme is not anything like the above: Plenty of implementations, mostly by pretty small teams.
With ubiquitous internet access and the rise of open source, I think the better approach for R7RS-large would be to instead standardize a package manager, and then let a thousand flowers bloom. This approach has proved successful elsewhere, e.g. npm (Node.js), Cargo (Rust), ELPA (Emacs Lisp), Quicklisp (Common Lisp).
Then again, I'm not a Scheme "insider", so what do I know. YMMV.
I maintain a number of Chicken packages and am a contributor to Geiser.
With standardized libraries in R7RS-small, the opportunity has existed for some time for a community-created package manager to come into existence. Several exist, mostly targeting particular Scheme implementations, but Snow2 is the closest to what you desire.
Why it hasn't caught on is, IMHO, unquestionably because of the limitations of the R7RS-small language. Library writers often cannot write for standard R7RS and must ship implementation-specific code. For instance, the R7RS opted not to define a standard FFI of any sort.
Even if your library is internally R7RS-compliant, it often relies on packages which are not, so deploying to the likes of Snow2 is a matter of porting code you didn't write to every Scheme that Snow2 supports.
The best reason I have been given for the decision not to define an FFI is because it would force writers of toy schemes to conform their data structures to the standard. And so in an effort to remain a toy language it has restricted itself to being only a toy language.
Use Chicken, Chez, Gambit, Chibi or Racket. Forget about RnRS.
IMHO, Scheme has two somewhat conflicting goals. There's the desire to make it a practical language with rich libraries (a central package repository is OK, but implementing a portable package-manager client would require quite a bit of infrastructure).
On the other hand, Scheme has been a toolkit for testing out programming-language ideas since its origin. We still keep call/cc not because we use it for daily application development, but because it is invaluable for implementing new ideas of control abstraction portably. With this goal, we don't want to restrict implementations too much (hence the loose definition of "error"), nor do we want to raise the hurdle of standard conformance by requiring too many libraries.
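For readers unfamiliar with call/cc, here's a minimal sketch of the kind of control abstraction it enables, written in portable R7RS (the helper name find-first is made up for illustration):

```scheme
;; find-first: return the first element of lst satisfying pred, or #f.
;; call/cc captures "the rest of the computation" as the procedure
;; `return`; invoking it aborts the for-each loop immediately.
(define (find-first pred lst)
  (call-with-current-continuation
    (lambda (return)
      (for-each (lambda (x) (when (pred x) (return x))) lst)
      #f)))

(display (find-first even? '(1 3 4 5))) ; prints 4
```

The same primitive can be used to build generators, coroutines, and backtracking search, which is why implementors value keeping it in the standard.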
The R7RS small/large split reflects this view, and I think it's a good thing. I also think the SRFI process is working. If there ever is a common package manager, R7RS-large is a step toward it. I've been using Scheme at work for the last 20 years; let's see what we have in the next 20 years.
The problem is that there's a baseline minimum of features that need to be standardized to have enough interoperability between implementations for a package manager to actually work, and that minimum is still too big for many die-hard Schemers. Call them "50-page-purists", if you will, but they want the standard to be as small as it can possibly be. They argue that inessential features are better left to the programmer -- the problem is that in reality, those features get implemented by the implementor, often in a different way than another implementor did it. So we wind up with a mess where everybody is R5RS-conformant, but nobody has a module system compatible with anyone else's.
The R6RS editors tried to remedy this, but wound up causing a schism in the Scheme community which was pretty damaging. The "50-page-purists" have somehow gotten into their heads that the R5RS is the be-all, end-all of Scheme, and that any additions whatsoever are features piled on top of features. Now, with the R7RS, we've not re-united the Scheme community, we've fractured it yet again. There are now three factions (in no particular order): (1) those who think the R7RS sucks because of gratuitous incompatibilities with R6RS; (2) those who think the R7RS sucks because it's got more features than the R5RS; (3) those who like the R7RS.
Personally, I find that a module system and user-defined records are extremely welcome, and a long time coming, really, but I fear that the R7RS-small on its own is still too small for something like Snow2 to really take off in an implementation-independent way.
In short, I agree with you. But, it's complicated.
For what it's worth, they did introduce a standard module system. That is the prerequisite to a global standard package repository.
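For the curious, this is the shape of an R7RS define-library form (a hand-rolled sketch; the library name and procedure are invented for illustration):

```scheme
;; A minimal R7RS library. The name (example greet) is hypothetical.
(define-library (example greet)
  (import (scheme base) (scheme write))
  (export greet)
  (begin
    (define (greet name)
      (display "Hello, ")
      (display name)
      (newline))))

;; A client imports it like any built-in library:
;; (import (example greet))
;; (greet "world")
```

Because the import/export boundary is now standardized, a package manager can at least describe what a package provides and requires portably, even if the package bodies still contain implementation-specific code.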
I've been following the effort and voted on ratifying R7RS-small. One of the nice things about the large-small split is that it satisfies all parties, while clarifying the specification. It also means that new implementers can start with a R7RS-small implementation, and immediately gain the power of much of the R7RS-large library in a portable form. This benefit is not to be underestimated for the community we're talking about.
> One of the nice things about the large-small split is that it satisfies all parties, ...
Unfortunately not. R7RS-small introduced some incompatibilities with R6RS that caused some R6RS fans (yes, they exist) to dissent. One was samth, as I recall, but I can't remember the names of any of the other outspoken critics of R7RS. If the mailing lists hadn't disappeared, I'd probably be able to find the threads for you. They made some good points.
Personally, my gripes with the R7RS-small are pretty minor, and overall I think it's good and I welcome it whole-heartedly. The modules and the records were sorely needed for decades. They were slotted for R5RS, but back then the RnRS was ratified by full consensus rather than majority vote, and some people simply refused to compromise. This is why R5RS took almost 10 years when its predecessors only took a handful of years. (In retrospect, I wish they'd made them an appendix in R5RS, like what they did for macros in R4RS, but it's far too late for that now...).
Yup, and neither of those is a full consensus. IIRC, the only person holding out on the record proposal for R5RS was Kent Dybvig, and that was enough to block it. Prior to R6RS, the RnRS was ratified only on full consensus, not near-consensus.
Note that I'm not opposed to the new "80%+ vote in favor" system (I think it's a very good thing), I'm just pointing out the history.
Any popular language that is going through a standardization process can teach us a lot. As opposed to languages that have "benevolent dictators" (Ruby, Python, etc), languages like Common Lisp, Scheme, C++ have all gone through standardization processes that often leave a lot to be desired.
Unfortunately, the lesson usually learned is "don't do it", but I'm hoping that Scheme has righted the ship a bit and R7RS will be able to balance the goals of multiple diverse groups. The last thing we need is another Common Lisp.
Considering Common Lisp is still in use and doesn't need to be re-standardised despite several decades passing since, isn't its standardisation a great success?
It's lacking enough things like standardized networking and threading support that, no, many people don't consider it a success.
It was also a semi-atrocity in the making, in ways that needlessly made it more difficult to implement, but that tale is best told by people more knowledgeable than I; I'd mostly moved on to Scheme by then.
I think Common Lisp's success is due to the power of Lisp and that Common Lisp is the most modern full-featured version of Lisp. For all we know, if the standardization had been done better, Lisp might be more widely used than it is today.
It's trivial to run R5RS in DrRacket, for example, or in other Scheme implementations as well. So it shouldn't really be a problem if R7RS becomes bloated, as you can stay on R5RS.
Well, note that this is about R7RS-large; R7RS-small is "a 'small' language, suitable for educators, researchers, and users of embedded languages, focused on R5RS compatibility" (http://trac.sacrideo.us/wg/wiki/R7RSHomePage see also http://www.scheme-reports.org/), its spirit is R5RS cleaned up and with a library feature to allow R7RS-large to be cleanly added.
I have had the same experience after deleting my account several years ago (for different reasons). I do keep wondering if it will pop up as an issue, so I'm very curious about other people's experiences when not participating in LinkedIn.
I had a LinkedIn account ages ago but found it useless and chock full of recruiter spam. It was the first of all my social networks to go.
I am always astonished when I hear phrases like "it's required". Really? By whom? I'm glad to hear there are other people out there who don't have it, because all I hear about is the other side: People bemoaning LinkedIn's existence but parroting this idea that their professional career would be all but ruined without it. I just cannot understand this, but I always feel like I must be naive or living in a tech bubble, or just outside the mainstream.
Do people really hurt their job prospects without LinkedIn? (Or did they when this was written in 2013?)
I don't understand fellow coders who view recruiters as a nuisance when you can use them to get a much better deal on your next gig.
With my LinkedIn I am vague about where I currently work, and I provide them all with my spam contact info (a spam email address and a spam Google Voice number that is a voicemail box whose messages go to my email).
Maybe it's the volume of recruiters? I get ten a week and enjoy seeing what they can do for me.
I'm getting maybe one a month, but I specified clearly that I'm not looking for career opportunities. Either it's always from the same companies (Amazon and ARM are 80% of it) that won't let me work from home, or it's completely outside my domain (web stuff; good companies to work for, but I work on the kernel, not on web back-ends).
I did apply for my current job through LinkedIn; that's the only reason I haven't deleted the profile. That said, it was my first job; seven years later, I'd be able to find another job just through email if I needed one.
My solution to the problem was to simply "connect" with every recruiter and friend on linkedin until I hit the "500+" connections tier, then disabled all forms of notifications from the site. Now my profile is very much idle, but when any business-type person looks me up, I still have a presence of sorts.
Yes, I agree. The history of philosophy is an important part of philosophy, not all of philosophy. I studied the subject nearly 30 years ago and can only recall the history courses as having a decided Western bias (though not a monopoly). I believe all the courses were correctly labeled, and I can remember at least one embarrassed explanation that other traditions could not be included.
I am curious how the bias might affect topics like contemporary epistemology, logic, or the philosophy of science or of mathematics.
The very idea that something like logic is part of philosophy is Western, I think. The ancient Greek philosophers studied logic, and therefore modern Western philosophers still do. I don't think people with an African or Chinese or whatever background would even consider that a natural part of the subject. They probably would consider other topics that are not included in our philosophy programs.
I'd put it the other way around. Philosophy is a Greek word, and the collection of subjects it encompasses is part of a tradition inherited from the Greeks (a similar thing applies to Religion and the Romans).
However, other people do have the same concerns, though they might package it differently. So logic is hard to escape and was studied everywhere, but whether it was studied alongside Grammar, Metaphysics or Mathematics -- well that could vary a bit.
The connection is closer to Philosophy <-> Law <-> Logic <-> Philosophy than Philosophy containing Logic. In context much of this discourse relates to their system of government.
PS: There is a great comedy about this stuff from the Roman Republic with a great scene that roughly translates as: A: Yay communism! B: Who would work the fields? A: Why, the slaves of course! Anyway, stripped from context it's really easy to misread many of their arguments.
There are many huge biases at the core of all Western philosophy. 'Self' as a meaningful separate unit is one of them. To take a slightly different take on a famous quote: 'It is presumed that I exist, and I think, therefore I exist.' Even logic stems from a way of thinking; A and !A could both be true if you step outside the mindset of Truth with a capital T.
The problem with looking at world philosophy is you find 'Western' philosophy is an echo chamber, as far from math as literary criticism.
PS: Not to offend people, but a huge chunk of Western philosophy was so closely tied to religious thinking that it's actually painful to read. Though this stuff is often a footnote in most philosophy programs.
There is truth to this statement. Nothing wrong with saying that binary systems of logical proofs, based on things like 'Self', are important to the history of philosophy in the West.
Unlikely that there's a single cause. One possibility is that people's value systems differ: do you like smoking? Do you like eating bacon? Do you like sunbathing? For each, there are people that disapprove, or would like you not to do this, or would even ban you from doing so (I could pick more controversial examples, but these are sufficient). Conflict clearly exists - but a way of resolution is not clear at all.
As the number of conferences and speakers grows, are you concerned that the Law of Large Numbers will exact its toll and make the sample of speakers more typical of the population as a whole? Inviting the flashiest of SF and NYC to speak is quite a built-in selection bias, but there seems to be no apparent strategy for preserving the distinctiveness of speakers as TEDx scales.
Yes, the average quality will probably fall, but the total amount of "good" talks will vastly increase. As with any popular content platform. A problem yet to be solved is how to highlight the better talks, because the current approach of featuring a talk a day on TED.com doesn't scale.
By the way, TED never expected TEDx to grow this big, so there is little strategy in what's been happening so far. There are I think 6 or 7 TED staff responsible for the whole TEDx program, with its 3000+ events.
Agreed, andr, that the number of new good talks can continue to increase. This is an interesting story and relates to a variety of other organizations seeking to scale quickly, whether from necessity or ambition. Is there an ongoing discussion of it as it pertains to TED & TEDx anywhere?
I am a mathematician, and the Law of Large Numbers just says that if you roll a die many times, then the average of your rolls will converge to the expected value, in some strong way.
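In symbols (adding this only for precision), for i.i.d. rolls X_1, X_2, ... with expectation mu, the strong law says the sample mean converges almost surely:

```latex
\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i
  \xrightarrow{\ \text{a.s.}\ } \mu,
\qquad \text{e.g. for a fair die, } \mu = \tfrac{1+2+3+4+5+6}{6} = 3.5
```

Note it's a statement about averages of repeated draws, not about any single draw being "typical", which is why invoking it for speaker quality is a loose analogy.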
I think he is just applying Sturgeon's Law (90% of everything is crap) and then the law of large numbers to conclude that the average video is crappy.
[0]: https://www.ohioswallow.com/book/All+the+Fun%E2%80%99s+in+Ho...