Couldn't something like this be a few hundred lines of Ansible? Not trying to be intentionally negative, but I don't really get the use case, as this feels like a pretty well-established process already covered by existing server automation and provisioning tools.
I've been using Ansible with Docker Compose since 2015. I can set up a full server with all of the bells and whistles in about 20 lines of inventory configuration. Behind the scenes there's a bunch of portable and generic roles that work for baseline "server level" components. I've done this now for dozens of companies, it's a tried and true model with an easy way to customize things if needed.
Ansible sets up the server, Docker Compose runs the apps (standalone `docker compose` commands, not the Ansible module) and git is used to deploy everything.
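To give a flavor of what I mean by a small inventory, here's a trimmed-down sketch (hostnames, variable names and values are all made up; the real variables depend on whatever roles you've written):

```yaml
# inventory/production.yml (hypothetical example)
all:
  hosts:
    web1.example.com:
  vars:
    # Consumed by generic "server level" roles (users, firewall, Docker, etc.)
    deploy_user: deploy
    docker_users: [deploy]
    firewall_allowed_ports: [22, 80, 443]
    unattended_upgrades_enabled: true
```

The roles behind it do the heavy lifting; the inventory is just the per-project knobs.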
It's my go-to solution as well. If I can't get a reasonable Docker-capable host out of the box (which you'd get in AWS and gcloud), I use Ansible to create one. I had to do this in the Telekom cloud last year, dusted off some Ansible stuff I had from previous projects and got it done. I sometimes combine it with Packer for building VM images. That avoids the repeated overhead of having to run Ansible, which can be a bit slow.
For application deployment, I use simple scripts that restart Docker Compose via SSH, which we can run from GitHub Actions. Ansible is overkill for that.
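A sketch of what such a script can look like (host, user, path and env var names are all made up; `DRY_RUN=1`, the default here, just prints the remote command so you can see what it would do without a server):

```shell
#!/bin/sh
# Restart Docker Compose on a remote host over SSH (sketch, not a drop-in).
HOST="${DEPLOY_HOST:-deploy@web1.example.com}"
APP_DIR="${APP_DIR:-/srv/myapp}"
CMD="cd $APP_DIR && git pull --ff-only && docker compose pull && docker compose up -d --remove-orphans"

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Dry run: show the command instead of connecting.
  msg="would run on $HOST: $CMD"
  echo "$msg"
else
  # Real run: assumes SSH keys are already set up (e.g. from CI secrets).
  ssh "$HOST" "$CMD"
fi
```

From GitHub Actions you'd call this in a step with `DRY_RUN=0` and an SSH key provided via secrets.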
Can you elaborate a bit more on why you prefer standalone commands instead of going all-Ansible?
I was considering switching to the Ansible module myself, but I'm hesitant. I feel like this could produce quite a complex, intricate setup, since Docker Compose itself has many specific features like .env files, .env.local files, environment-specific docker-compose.yml override files, environment variables and more, so I have mixed feelings when I think about intertwining it with Ansible.
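To illustrate the kind of layering I mean (file, service and variable names are made up), Compose already stacks configuration on its own:

```yaml
# docker-compose.prod.yml (hypothetical) -- merged over the base file with:
#   docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  web:
    restart: always
    env_file:
      - .env        # committed defaults
      - .env.local  # per-host secrets, not in git
```

Wrapping all of that in Ansible variables on top feels like two templating layers fighting each other.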
> Can you elaborate a bit more on why you prefer standalone commands instead of going all-Ansible?
I very strongly believe:
- Ansible is a great tool for setting up a server to get it to the point where you can run / deploy applications
- Docker Compose is a great tool for running your app
- Git is a pretty good tool for initiating deployments
In my opinion these are 3 distinct things. Trying to make Ansible do everything gets messy, and you also lose feedback. For example, if you git push, a post-receive hook will stream the response back to your terminal, but Ansible only shows output when a task is complete.
Using the raw Docker Compose commands also keeps things more consistent between dev, CI and prod. You run basically the same thing in all environments.
Do your examples support zero downtime deployment and zero downtime rollback? If so, could you please point to the relevant sections? I couldn't figure it out from the READMEs.
> Do your examples support zero downtime deployment and zero downtime rollback?
Those examples are for running your app, not deploying it.
Zero downtime is also a loaded term.
MRSK isn't zero downtime in the way the term is commonly used. MRSK uses Traefik's built-in functionality to queue requests, so if your back-end is down it won't respond with a 502. Instead the user will see a busy mouse cursor until your back-end is back up.
A user will be able to automatically "resume" their request after your app reloads, which is nice: it saves them from seeing a 502 and manually reloading the browser. But if you have a Rails app that takes 27 seconds to boot, the user will be waiting 27 seconds with no feedback. You could make a case that the UX there is worse than explicitly showing a custom 502 page that tells the user to reload in a bit.
That's much different from a true zero-downtime rolling update, which something like Kubernetes will give you, or which you could roll yourself without Kubernetes. I don't know if MRSK will handle that; given the direction it's going, it's quite possible DHH will implement that behavior, in which case it would be very useful.
Rollback is also tricky once you start introducing database migrations. I personally always roll forward.
While these are used to deploy a Scala backend and a Scala.js frontend, I have adapted them many times to work with other backends and JS frontends. The logic is the same; only the build scripts need to be updated.
A git post-receive hook. You can set up a bare git repo that lets you push code to it. When you push, the post-receive hook runs, and it can be any script you want. In this case you can run the commands to restart your containers or whatever else you need to do.
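Here's a tiny self-contained demo of the flow you can run locally (throwaway temp dirs only; in a real deploy the checkout and `docker compose` commands would go where the commented-out lines are):

```shell
tmp=$(mktemp -d)

# "Server" side: a bare repo you can push to.
git init --quiet --bare "$tmp/app.git"

# The post-receive hook runs after every push; whatever it prints is
# streamed back to the pusher's terminal.
cat > "$tmp/app.git/hooks/post-receive" <<'EOF'
#!/bin/sh
echo "deploy: checking out work tree..."
# GIT_WORK_TREE=/srv/app git checkout -f main
echo "deploy: restarting containers..."
# cd /srv/app && docker compose up -d --build
echo "deploy: done"
EOF
chmod +x "$tmp/app.git/hooks/post-receive"

# "Developer" side: clone, commit, push. The hook's output comes back
# inline, prefixed by git with "remote:".
git clone --quiet "$tmp/app.git" "$tmp/checkout" 2>/dev/null
cd "$tmp/checkout"
git config user.email dev@example.com && git config user.name dev
echo hello > file.txt
git add file.txt && git commit --quiet -m "first"
out=$(git push origin HEAD 2>&1)
echo "$out" | grep deploy
```

That streaming feedback is exactly what you lose when Ansible buffers output until a task finishes.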
Ansible is Python, and MRSK is Ruby. Each appeals to its own ecosystem; switching between the two is a noticeable cognitive load.
Also, it could be implemented using Ansible, but it has been implemented in MRSK. That's a serious difference if you plan to count on someone to keep it updated and supported.
Probably. But not the interesting bit (the tenth of Docker Swarm they've layered on top of plain Docker - for better or worse).
I actually had a quick look at the code - and at this point it doesn't strike me as a tempting building block.
There's no developer overview I could find (a bird's-eye guide to the MRSK code structure and concepts), and there doesn't appear to be much in the way of tests?
Specifically, I was wondering how hard it would be to replace Traefik with Caddy or HAProxy, do Let's Encrypt via MRSK-managed parts of the deployment, or drop in Varnish.
As best I can tell - there's no clean or sensible way to do any of those things.
Despite that, as a simple Docker glue stick it seems quite useful.
I must say I was disappointed with one thing in the demo: hosting the app on the internet in plaintext and covering it with a TLS fig leaf via Cloudflare. That's terrible guidance in a demo/tutorial.
I used https://github.com/ansistrano/deploy heavily a few jobs ago. We were doing CI/CD across 10 deployments in both GCP and AWS with 50 to 100 VMs per deployment. It worked great!
I have mixed feelings towards NIH people. One bunch is the RIIR crew (ripgrep, ruff, etc.) that produces high-quality rewrites with amazing performance and safety; the other bunch rewrites tools without adding any value.
I am not sure which category MRSK falls into; there hasn't been enough detail from them to justify this particular rewrite.
You might think of it as NIH, but I didn't at the time that I wrote it. Because I saw a gap in the ecosystem that wasn't being filled. grep didn't offer the UX of a tool like ag, and ag didn't degrade gracefully to behave like a grep. (And also wasn't as fast on single file search.) That gap is what motivated me to turn ripgrep from what it was (a test bed for Rust's regex crate) into what it is now.
There's a reason why you basically never see me criticizing people about "NIH." Because it's actually quite difficult to accurately cast that criticism. Saying things are NIH is like saying things are broken. It's terribly overused.
I agree with you. I am not sure what classifies as a rewrite. For my grep use cases, rg is a perfect replacement, only faster. From this point of view, some might say it is a rewrite.