Optics. That's the true name of the evil overlord.
Governments care about "looking" like they are doing the right thing, because they want public support.
Then there is corruption, but I would say that's a secondary issue. Anyway, I am pretty sure that once the government changes where I am from, former politicians will easily find jobs in pharma. Again, this is secondary and will always happen.
I remember back in the 70s in college the great debate was whether the HP-35 calculator with RPN was better or worse than the TI SR-50 with infix notation.
My conclusion was that RPN required less to remember than infix, and with a calculator you had to be very careful to not mess up where you were in punching buttons.
But for a tty, infix wins every time. Like no book publisher ever writes math equations with RPN, unless they are writing about Forth or Lithp or some intermediate code.
I should have known, however, that my remark about RPN not being human would flush out the 3 people for whom it is!
> unfamiliar gets conflated with unintuitive and hard all the time
In a deep sense unfamiliar and unintuitive/hard can be viewed as the same thing (see e.g. the invariance theorem for Kolmogorov complexity). Hence striving to be "familiar" is still a virtue that it makes sense for a programming language to aspire to (balanced of course against other concerns).
Objects are not intrinsically difficult or intuitive though. Different people will perceive the same thing to have different levels of difficulty or intuitiveness.
Or as Von Neumann said about mathematics (and I think the same is applicable to CS and programming): "You don't understand things. You just get used to them."
'unfamiliar' is 'unintuitive'. You only have an intuition for things that are similar to things you have encountered. Some unfamiliar or unintuitive things might be simpler than familiar things. But, at first, it is often easier to use familiar things, even if they are less simple.
Regarding RPN, I am not convinced that it is actually simpler for a human. To understand an RPN expression, I try to keep the stack of operands and intermediate results in my ultra-short-term memory. This can easily exceed its typical capacity, whereas when trying to understand an infix expression, I can replace subtrees with an abstraction, i.e. a short name, in my mind. But perhaps I have just not yet seen the light and spent enough time working with RPN expressions.
On the other hand, RPN expressions are certainly far simpler to implement.
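"Far simpler to implement" is easy to demonstrate. A minimal sketch of an RPN evaluator in Python (the operator set and names here are my choice, just for illustration): one stack and a dispatch table are the whole "parser".

```python
# Minimal RPN evaluator sketch: push numbers, pop-and-apply operators.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def eval_rpn(expr: str) -> float:
    stack = []
    for token in expr.split():
        if token in OPS:
            b = stack.pop()   # note the order: the second operand is on top
            a = stack.pop()
            stack.append(OPS[token](a, b))
        else:
            stack.append(float(token))
    if len(stack) != 1:
        raise ValueError("malformed RPN expression")
    return stack[0]

print(eval_rpn("3 4 + 2 *"))  # (3 + 4) * 2 → 14.0
```

Compare that with an infix evaluator, which needs a tokenizer plus precedence handling (shunting-yard or recursive descent) before it can do any arithmetic at all.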
I never said RPN is simpler.
It is just as hard as infix.
I do not like RPN any better or worse. It takes me about 5 minutes to switch from lisps to others and back. I just put the parens in the wrong place a couple of times and I am done.
Paredit + RPN makes editing slightly more comfy, but that's it. They are the same thing.
I think this is the takeaway as well. I prefer RPN (and I can read and like k, which is considered 'unreadable/read-only', and Lisp, which is also apparently 'hard to read') because I am used to writing a lot of Forth(-likes). I like discovery while programming, and it is quite trivial to stuff a Forth into anything, even when no one has done it before. For instance, 10 years ago, to speed up Xamarin (which was compile/run and quite slow at that), I sped up development by adding a Forth to C# so I could rapidly discover and prototype on my device without having to compile. And I have done that for the past 30 years, starting on my 80s home computer to avoid having to type hex codes and risk more crashes while discovering what I wanted to make.
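The "stuff a Forth into anything" trick really is tiny. A toy sketch in Python (not the commenter's actual C# code; the word set and names are mine, just assumptions for illustration): a stack, a word dictionary, and colon definitions already give you an interactive, extensible language.

```python
# Toy embeddable Forth-like interpreter: numbers push, words execute,
# and ": name body ;" defines new words at runtime.
def make_forth():
    stack = []
    words = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        ".":   lambda: print(stack.pop()),
    }

    def run(source: str) -> None:
        tokens = source.split()
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok == ":":                         # colon definition
                end = tokens.index(";", i)
                name, body = tokens[i + 1], tokens[i + 2:end]
                words[name] = lambda b=body: run(" ".join(b))
                i = end
            elif tok in words:
                words[tok]()
            else:
                stack.append(float(tok))
            i += 1

    return run, stack

run, stack = make_forth()
run(": square dup * ;  3 square 4 square +")   # 3^2 + 4^2
print(stack)  # [25.0]
```

A host application can expose its own functions as new words in `words`, which is what makes this style handy for poking at a live system without a compile cycle.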
I agree with your general principle but I do think infix notation is still better because so many equations are a tree of binary operators. Look at how binary trees are drawn - the nodes are in the middle of their children. It just makes sense for binary operators to have their operands on either side.
Otherwise you end up having to maintain some kind of mental stack which is just a bit mentally taxing.
Prefix and postfix notations permit a great distance between the operands and the operators. If done in a disciplined fashion so that they're merely swapped (at least in the majority of cases), then sure, it's not too different. But consider this: (¬ used for unary negation)
b ¬ b 2 ^ 4 a c * * - √ + 2 a * /
The two expressions being divided are quite large, so there's no easy way to just move the division one token to the right; it goes all the way to the end. Which two things are being divided? Just try to point them out at a glance. I've even selected a more familiar expression for this exercise that should make it easier. But what about an expression people are less familiar with? Which thing are we calculating the square root of? This is what RPN looks like when entered into a calculator, which is handy and quick for computations (I mean, I have an HP-32S II at my desk next to my laptop, I like it). But even the HP-48G with its graphical equation editor didn't force you to think like this; algebraic notation provides locality of information that makes it much more comprehensible without overburdening your working memory.
And once you start adding parentheses or indentation to convey that information, you're on your way back to a tweaked infix notation. The same for prefix notations. If you can see past the parentheses of Lisp/Scheme it's not too hard to see what's being expressed (using the same symbols as above):
(/ (+ (¬ b)
(√ (- (^ b 2)
(* 4 a c))))
(* 2 a))
This is more comprehensible, but it's not a strict prefix notation anymore, we've used parentheses and indentation in order to convey information about the grouping. I've even borrowed Lisp's use of variable numbers of arguments to simplify *. If asked, "What is being divided by what", you can, rather quickly, identify the divisor and dividend, at least point them out if not express totally what they do. But a straight prefix notation or postfix notation makes that task very difficult.
You could start naming subexpressions, but then you get into one of the two hard problems in computer science (naming things).
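Naming does pay off, though. A quick Python sketch of the same quadratic-formula expression, with subtrees pulled out into names (the names are mine, and hence already an instance of the hard problem):

```python
import math

# The quadratic formula with each subtree named: this is exactly the
# "replace a subtree with a short name" abstraction infix readers do mentally.
def quadratic_root(a, b, c):
    discriminant = b**2 - 4*a*c
    numerator = -b + math.sqrt(discriminant)
    denominator = 2*a
    return numerator / denominator

print(quadratic_root(1, -3, 2))  # a root of x^2 - 3x + 2: prints 2.0
```

The question "what is divided by what" is now answered by the last line alone, which is the locality-of-information point again.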
And then we get to the other things people want to do with mathematical expressions, which is largely symbolic manipulation. The present infix notation (or a highly demarcated prefix/postfix notation like the Lisp-y one above) is much better for this task than a strict prefix/postfix notation.
I ate two bananas before my motorcycle test for their placebo effect to calm my nerves. Cool as a cucumber, I passed without a jitter on the throttle. It's anecdata, I know.
A) I would say lisps are rather boring. Clojure, one of the most recent ones, hasn't changed in 15 years.
B) most systems are built in C, Java, Python. So no wonder most complex systems are written in those.
Efficiency is rewarded by consumer buying (voting with money) efficiently produced products.
It's not some abstract evil ideal that drives the market. It's people doing purchases.
Now, good markets need good (perfect, to be precise) information. If people had known this is where we would end up (say, most production moved to Asia), would they have made different choices (say, to preserve manufacturing in the US/EU with better worker conditions)?
I would argue our economic system is just fine, but we fail on political, educational and ethical issues. Especially ethical: people know about the horrible conditions in sweatshops, yet there are massive queues to shop at low-cost brands. I find clothing the most egregious case, because there are decent alternative choices.
> Efficiency is rewarded by consumer buying (voting with money) efficiently produced products.
Right, but the fact that our economic outcomes are decided by consumers is an artifact of our economic system, not some necessary truth. There are a lot of upsides to such a system, but as discussed in this thread there are also downsides.
> I would argue our economic system is just fine. But we fail in political, educational and ethical issues
I disagree here. Our economic system makes sweatshop clothes cheaper (we could, for example, regulate or raise tariffs against them). That means that making the ethical choice becomes a sacrifice of sorts, and not only that, but it puts people who don't make that choice at a comparative advantage (they have more money left over), which effectively makes their influence over the rest of the economy greater.
We should be doing better in terms of political and ethical education, but we should also be setting up economic incentives to do the ethical thing, not the opposite.
There is very little spec logic. It looks a lot like type declarations in typed languages.
It's usually outside the scope of functions, since you are likely going to want to reuse those declarations. For example, you can use spec to generate test cases for something like quick-check.
You can add pre- and post-conditions to a Clojure function's metadata that check whether the function's inputs and outputs conform to the spec.
As far as I understand, and I don't understand much. Also, I am not endorsing vanden Bossche, because I don't understand much. Furthermore, like everyone else, he is just trying to sell his vaccine over others'.
The issue is selective pressure. Yes, the virus will mutate. But under an mRNA vaccine, only the spike protein needs to mutate for the vaccine to be rendered useless. In other words: mutating the spike protein will give the virus access to a very broad unimmunized population.
Up to that point it makes perfect sense to me and my limited evolutionary knowledge. I can't tell whether it is right or wrong. But it makes sense.
He goes further, saying that antibodies from vaccines have a higher affinity for the virus even with a mutated spike protein, and that this compromises the natural immune system, since it will try to fight off the infection with useless vaccine-learnt antibodies rather than with natural ones. This will make the virus more deadly.
This seems off to me. I can't see the logic, but I will be happy to be corrected. How can something be evaded by the virus and still have a higher affinity for it?
Again, he says that all these issues are solved with his vaccine, once he finishes it.
> The issue is selective pressure. Yes, the virus will mutate. But under an mRNA vaccine, only the spike protein needs to mutate for the vaccine to be rendered useless. In other words: mutating the spike protein will give the virus access to a very broad unimmunized population.
> Up to that point it makes perfect sense to me and my limited evolutionary knowledge. I can't tell whether it is right or wrong. But it makes sense.
The issue with this is that the spike protein has to mutate enough for vaccine-induced immunity to fail to recognize it, but the spike protein is critical for the virus actually entering and infecting cells, and therefore there isn't a whole lot of mutating it can do while remaining functional.
On the other hand, there is some evidence that vaccines targeting more than just the spike protein (as well as natural immunity) may potentially be more at risk for the "original antigenic sin" trap, in which the immune system fails to respond to mutated versions of a virus as well as it does to the original version it encountered.