I run operations for a company that relies heavily on R, and I'd strongly advise against using the language. R's package management system makes reproducing work difficult. We've had to rely on renv, a snapshot of CRAN (the default source of R packages: some FTP servers), and a bunch of Docker to get vaguely reproducible installs. However, since installing an R package involves compiling code you just downloaded from a public FTP server, installs are extremely slow.
I'd recommend Python based on its slightly saner tooling. I've found that Python with conda/pipenv/poetry results in mostly reproducible installs of the tools needed to run a computation.
So, I actually get what you mean here, and have used both R and Python in anger for a number of years.
This is all about tradeoffs. Fundamentally, if your package doesn't compile on the latest version of R, it gets removed from CRAN. This means that each version of R has a consistent set of packages that (mostly) work together.
Contrast with Python, which does facilitate reproducible builds, because you can hack together ancient versions of Python and make them continue working. I could go into a massive rant here about pip, but it's trending in the right direction now and I don't want to discourage any of the people working on it.
R is better at ensuring that, for a given R version, any package you install will be compatible; Python is better at making sure that one application built three years ago keeps working in the same fashion.
Also, it sounds like you're running a *nix-based system; have you considered (I'm sure you have) using the system packages? For example, the Debian/Ubuntu ones are pretty comprehensive, at the cost of using older versions. I believe RStudio also has pre-built R packages for Linux (but I have not tested this), so that could also work.
To be fair, conda is pretty good as a package manager, because it handles the C/C++ dependencies. But to your Docker point, that's also how I handle the insanity that is Python packaging, especially in the data science space, so it may just be an issue with the field itself.
Conda supports R and gives you R binaries on any platform. We used this setup for years at my old workplace, and it gives you sane, reproducible builds.
Using modular licenses might mitigate the problem of managing a frankendocument, but I have to agree: making ethos licensing viable would require a ton of effort. The SPDX (Software Package Data Exchange) standard includes boolean expressions to combine licenses[0], like `MIT AND ISC`. It would be difficult but conceivable for a lawyer to write human-readable and enforceable licenses each forbidding one specific use. Bundling licenses with `AND` into cohesive super-licenses covering ethical standards would take more effort. Figuring out what ethical standards a large-enough ecosystem of engineers agrees upon is another herculean task.
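To make the combination idea concrete, a toy evaluator for that kind of boolean license expression might look like this. This is a Python sketch under my own assumptions: it handles only `AND`, `OR`, and parentheses, with SPDX's precedence (AND binds tighter than OR), and ignores the rest of the real SPDX grammar (`WITH` exceptions, the `+` operator). The function name and the "set of approved licenses" model are mine, not anything from the SPDX spec.

```python
def approved(expr: str, allowed: set[str]) -> bool:
    """Return True if an SPDX-style expression is satisfied by `allowed`.

    Toy recursive-descent parser: AND binds tighter than OR, and
    parentheses group subexpressions, as in `(MIT OR GPL-2.0-only) AND ISC`.
    """
    tokens = expr.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse_or() -> bool:
        nonlocal pos
        result = parse_and()
        while pos < len(tokens) and tokens[pos] == "OR":
            pos += 1
            # Evaluate the right side first so every token is consumed.
            result = parse_and() or result
        return result

    def parse_and() -> bool:
        nonlocal pos
        result = parse_atom()
        while pos < len(tokens) and tokens[pos] == "AND":
            pos += 1
            result = parse_atom() and result
        return result

    def parse_atom() -> bool:
        nonlocal pos
        if tokens[pos] == "(":
            pos += 1            # consume "("
            result = parse_or()
            pos += 1            # consume ")"
            return result
        license_id = tokens[pos]
        pos += 1
        return license_id in allowed

    return parse_or()
```

So `approved("MIT AND ISC", {"MIT", "ISC"})` is true, while dropping ISC from the allowed set makes it false. The hard part, as you say, isn't the parsing; it's writing the leaf licenses so each one is enforceable on its own.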
Still, I think the efforts are worth it: I'd like to opt out of subsidizing some fields of endeavor but not others. IANAL, just a programmer, but I'd be interested in contributing to ethos-licensing-related projects.
That would be a very interesting project indeed. The fun part would be to hand the judge the syntax document and ask them to please "interpret" the licence according to these rules.
Slightly related, I wonder whether the complexity/length of licences plays a role in adoption. If you have something that is very widely known, somewhat short, and readable by lay-persons, you don't need to check it every time: you figure out once that you're okay with working with the XYZ Licence or you're not. If you had to essentially parse complex expressions of licensing fragments, I'd expect less adoption because of the higher risk of catastrophic issues being overlooked (much like I'd probably not buy a candy bar if the store asked me to read & sign 12 pages of fine print to do so).
From the second item on their FAQ (https://projectvesta.org/frequently-asked-questions/):
> The Life Cycle Analysis (LCA) of the release of CO2 from mining, milling, and transport of olivine creates an approximately 4-6% loss on CO2 removed.
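For scale, a 4-6% life-cycle loss means the process still nets roughly 94-96 tonnes of CO2 removed for every 100 tonnes the olivine weathers away. A quick back-of-the-envelope sketch, assuming (my reading of the FAQ) that the "loss" is the fraction of removed CO2 that mining, milling, and transport re-emit; the 100 t figure is just an illustrative round number:

```python
def net_removed(gross_tonnes: float, loss_fraction: float) -> float:
    """Net CO2 removed after subtracting life-cycle emissions."""
    return gross_tonnes * (1.0 - loss_fraction)

# Per 100 t of CO2 captured by weathering:
low_loss = net_removed(100.0, 0.04)   # about 96 t net at a 4% loss
high_loss = net_removed(100.0, 0.06)  # about 94 t net at a 6% loss
```

In other words, the life-cycle emissions eat into the removal but don't come close to cancelling it, which is presumably why they lead the FAQ with this number.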