I'm struggling to find a way to express my opinion about this video without seeming like a complete ass.
If the author's point was to make a low effort "ha ha AWS sucks" video, well sure: success, I guess.
Nobody outside of AWS sales is going to say AWS is cheaper.
But comparing the lowest-end instances and, apparently, using ECS without seeming to understand how they're configuring or using it just makes their points about it being slower kind of useless. Yes, you got some instances that were 5-10x slower than Hetzner. On its own that's not particularly useful.
I thought, going in, that this was going to be along the lines of others I have seen, previously: you can generally get a reasonably beefy machine with a bunch of memory and local SSDs that will come in half or less the cost of a similar spec EC2 instance. That would've been a reasonable path to go. Add on that you don't have issues with noisy neighbors when running a dedicated box, and yeah - something people can learn from.
But this... Yeah. Nah. Sorry
Maybe try again but get some help speccing out the comparison configuration from folks who do have experience in this.
Unfortunately it will cost more to do a proper comparison with mid-range hardware.
What is the point you are trying to make? Are you saying that we would need to have someone on payroll to have a usable machine? Then why not just have... a SysAdmin?
Shared instances are something even European "cloud" providers can do, so why is EC2 so much more expensive and slower?
Because people aren't going on AWS for EC2, they go on it to have access to RDS, S3, EKS, ELB, SNS, Cognito, etc. Large enough customers also don't pay list price for AWS.
Of the services you list, S3 is OK. I would rather admin an RDBMS than use RDS at small scale
> Large enough customers also don't pay list price for AWS.
At that scale the cost savings from not hiring sysadmins become much smaller relative to the overall bill, so what is the case for using AWS? The absolute savings from leaving would be huge.
In absolute numbers maybe it's a lot, but I doubt even 10% are EC2 only.
Even "only" ECS users often benefit from the load balancing there. Other clouds sometimes have their own (Hetzner does), but generally it's kind of a hard problem to do well if you don't have a cloud service like Elastic IPs that you can use to handle failover.
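To make the Elastic IP failover idea concrete, here's a minimal sketch of re-pointing an EIP at a standby instance when a health check fails. The instance IDs, allocation ID, and health-check URL are all placeholders, not real resources; a production setup would run this from a watchdog with retries and alerting.

```shell
#!/bin/sh
# Hypothetical IDs -- substitute your own.
STANDBY_ID="i-0bbbbbbbbbbbbbbbb"
EIP_ALLOC="eipalloc-0ccccccccccccc"

# If the primary stops answering its health check, move the Elastic IP
# to the standby instance. Clients keep using the same public IP.
if ! curl -fsS --max-time 5 "http://primary.internal/healthz" > /dev/null; then
  aws ec2 associate-address \
    --allocation-id "$EIP_ALLOC" \
    --instance-id "$STANDBY_ID" \
    --allow-reassociation
fi
```

The point of the EIP here is that failover happens at the AWS API layer rather than via DNS, so there's no TTL/propagation window to wait out.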
Generally, everywhere I've worked has been pretty excited to have a lot more than just ECS managed for them. There's still a strong perception that other people managing services is a wonderful freedom. I'd love it if some day the stance could shift, if the everyday operator felt a little more secure doing some of their own platform engineering, if folks had faith in that. A solid, secure day-2 stance starts with simple pieces but isn't simple; it's quite hard, with inherent complexity. I'm excited by the many folks out there saddling up for open-source platform engineering work (operators/controllers).
> Even "only" ECS users often benefit from the load balancing there. Other clouds sometimes have their own (Hetzner does), but generally it's kind of a hard problem to do well if you don't have a cloud service like Elastic IPs that you can use to handle failover.
Pretty much everyone offers load balancing and IPs that can be moved between servers and VPSes. Even if you have to switch to new IPs, DNS propagation will not take as long as waiting out an AWS outage.
10% of what? Users, instances/capacity...? If it's users, then it's a lot higher, because EC2-only usage gets more common the smaller the user is.
> There's still a strong perception that other people managing services is a wonderful freedom.
The argument is really about whether that is a perception or a reality. If you can fit everything on one box (which is a LOT these days), then it's easy to manage your own stuff. If you cannot, you are probably big enough to employ people to manage it (which is discussed in other comments), and you still have to deal with the complexity of AWS (also discussed elsewhere in the comments).
I'd be shocked if 10% of users only use EC2. And as you say, for the most part I expect these to be pretty small fry users.
I've used dozens of VPS providers in my life, and a sizable majority had not advertised any load balancing offerings. I can open tickets to move IP addresses around. But that takes time. And these environments almost always require static configuration of your IP address, which you need some way to do effectively during your outage.
Anyone who declares managing their own stuff to be easy is, to me, highly suspect. Day-0 setup isn't so bad, day-1 running will show you some new things, but as time goes on there are always new and surprising ways for things to break, or not scale, or not be resilient, or for backups to not be quite right. You talk about employing people to manage it for you, but one to three people's salaries will buy you a lot of ElastiCache and RDS. As a business, it's hard to trust your DBAs and other folks, to believe that a half dozen people really have done a great job. Whereas you know there have been many people-decades of work put into resiliency at AWS or others, you know what you are getting, and it's probably cheaper than having your own team for many, many shops.
I want to be clear that I am 100% for folks buying hardware and hosting themselves. I think it's awesome and wild how good hardware is. But what we run atop it is way, way too often more an afterthought than a well-planned, cohesive system that's going to work well over time. That's why I'm hats off to the open-source platform engineering work out there. I think we're getting closer to some very interesting spaces where doing it for ourselves starts to be viable, in a way that's legitimate and runnable, where the everyone-figuring-it-out-for-themselves approach of the past was always quite risky, and where the business as a whole or external systems reviewers rarely had a good ability to evaluate what was really going on or how trustworthy it was.
I aspire for us to outgrow the perception that other people managing things for us is a great freedom. I really long for that. But the kind of "meh, it's not that hard" attitude I see here, to me, detracts from that point: it undermines how hard and what a travail it is to run systems. It is a travail. But it's one that open-source platform engineering is advancing mightily to meet, in exciting and clear ways, where the just-throwing-some-shit-up-there approach of the past always made things murky.
I'm saying that if you do want to compare two different platforms on performance, it should probably be done in consultation with someone who has worked with it before.
To use an analogy it's like someone who's never driven a car, and really only read some basic articles about vehicles deciding to test the performance of two random vehicles.
Maybe one of them does suck, and is overpriced - but you're not getting the full picture if you never figured out that you've been driving it in first gear the whole time.
At this point, managing AWS, Azure, or other cloud providers is as complicated as managing your own infrastructure, or more so, but at an enormous cost multiplier; if you have steady-traffic workloads, I'm not sure it makes sense for most companies other than as a way of burning money. You still need to pay a sysadmin to manage the cloud, and the complexity of the ecosystem is pretty brutal. Combine that with random changes that cause problems, like when we got locked out of our Azure account because they changed how part of their roles system works. I've also seen people not understanding the complexity of permissions and giving way too much access to people who should not have it.
When I was learning cloud computing I ran an ASP.Net forum software on Azure Cloud Service with Azure SQL backend. It cost me ~110 USD per month and was a total dog - slow as hell intermittently.
Moved it to AWS on a small instance running Server 2012 / IIS / SqlExpress and it ran like a champ for 10 USD a month. Did that for years. Only main thing I had to do was install Fail2Ban, because being on cloud IP space seemed to invite more attackers.
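For anyone curious what the Fail2Ban piece looks like: a minimal SSH jail is just a few lines of config. This is a sketch assuming a Debian-style layout (the `%(sshd_log)s` interpolations come from Fail2Ban's stock `paths-*.conf` files); the ban/retry values are illustrative, not what the commenter used.

```ini
# /etc/fail2ban/jail.local -- minimal sketch, Debian-style layout assumed
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
port    = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
```

Drop-in overrides in `jail.local` survive package upgrades, which is why they go there rather than in `jail.conf`.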
10 dollars a month is probably less than I paid in electricity to run my home server.
For what it's worth - my day job does involve running a bunch of infrastructure on AWS. I know it's not good value, but that's the direction the organisation went long before I joined them.
Previous companies I worked for had their infrastructure hosted with the likes of Rackspace, Softlayer, and others. Every now and then someone from management would come back from an AWS conference saying how they'd been offered $megabucks in AWS Credit if only we'd sign an agreement to move over. We'd re-run the numbers on what our infrastructure would cost on AWS and send it back - and that would stop the questions dead every time.
So, I'm not exactly tied to doing it one way or another.
I do still think though that if you're going to do a comparison on price and performance between two things, you should at least be somewhat experienced with them first, OR involve someone who is.
The author spun up an ECS cluster and then is talking about being unsure of how it works. It's still not clear whether they spun up Fargate nodes or EC2 instances. There's talk of performance variations between runs. All of these things raise questions about their testing methodology.
So, yeah, AWS is over-priced and under-performing by comparison with just spinning up a machine on Hetzner.
But at least get some basics right. I don't think that's too much to ask.
On the "value" question, it's worth considering why so many tech savvy firms with infra-as-code chops remain with GCP or AWS. It's unlikely, given how such firms work, they find no value in this.
FWIW, I firmly believe non-"cloud native" platforms should be hosted using PXE-booted bare metal within the physical network constructs that cloud providers' software-defined-network abstractions are designed to emulate.
A better presentation would be to have someone make the best performance/price on AWS EC2, then someone else make the best performance/price on Hetzner and compare.
I myself used EC2 instances with locally attached NVMe drives in (mdadm) RAID-0 under Btrfs, which was quite fast. It was for a CI/CD pipeline, so only the config and the most recent build data needed to be kept. Either Btrfs or the CI/CD database (PostgreSQL, I think) would eventually get corrupted, and I'd run a rebuild script a few times a year.
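A setup along those lines can be sketched in a few commands. The device names and mount point below are assumptions (check `lsblk` for your instance type), and note that RAID-0 over instance-store NVMe has zero redundancy, so as in the CI/CD case above, everything on it should be treated as disposable.

```shell
#!/bin/sh
# Stripe two local NVMe drives and put Btrfs on top (run as root).
# /dev/nvme1n1 and /dev/nvme2n1 are assumed instance-store devices.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
  /dev/nvme1n1 /dev/nvme2n1

mkfs.btrfs -f /dev/md0
mkdir -p /mnt/scratch
mount -o noatime /dev/md0 /mnt/scratch

# Record the array so it reassembles on reboot; the data itself
# does not survive an instance stop/start on instance store.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

RAID-0 here buys throughput by striping writes across both drives, which suits a build scratch area where losing the contents just means a rebuild.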
I made a similar comment on the video a week ago. It is an AWFUL analysis, in almost every way. Which is shocking, because it's so easy to show that AWS is overpriced and underpowered.
If you're someone who's bought into the Apple ecosystem over multiple devices, or have a partner or children who are also using devices in the Apple ecosystem, then your Apple ID is something that is very definitely tied to you and probably difficult to change or give up when you replace your phone.
I don't think it would be at all surprising to find that the vast majority of people use their legal name or something closely associated with their identity, and that it persists over multiple devices.
IP locations work great for probably a reasonably large chunk of services.
But it's the edges that get you.
I moved home a few years back, connected a new service with the same ISP.
They have an IP pool that is labelled as for one state (Victoria, Australia) but is also used for their services in Tasmania.
So now I have to fight every major website (Google, Amazon, Maxmind, etc.) that does GeoIP lookups to insist that I'm not in Victoria; I'm 500-800 km away.
Google was very confused for about 12 months because when I moved I also brought my wifi gear and so it would give me a precise location of my old address because it used wifi geolocation.
I have one of these that I use when dealing with retailers for which the phrase "I don't have a smartphone" does not compute. Saves the hassle of having to explain it every time, and so far I'm the only one using my made up number, so they "remember me" or whatever. But they can't actually call, and that's the point.
It's really amazing to see how far HP has fallen. They used to make brilliant printers that were top-quality, easy to service, and overall met the whole "Just Works" thing.
Then some time in the late 90s/early 2000s they let the bean counters at it, and they've become the poster child of terrible product design optimised to extract the maximum revenue at all costs from customers.
I honestly don't care how revolutionary or awesome a product is - if it's from HP I'm staying away, and would recommend everyone else to do so. The company deserves to die.
It gets half-way through the job, but runs out of paint. You load the new paint but instead of continuing, it fills the entire site with a big test print that uses 10% of the paint.
Randomly, it'll run a head cleaning cycle that involves spraying large amounts of paint into the neighbour's driveway before cleaning itself on their lawn.
Seems like a missed opportunity. Why not fill a built in sponge with that paint instead, and when the sponge is fully saturated it's time to throw out the whole machine and buy another one?
Let's not leave the service people out; they want to get a bonus this year, too.
So make the sponge replaceable, but only with a field maintenance call-out. The sponge unit will have an NFC chip in it that if it's removed will lock the unit out until a Technician access code is entered to restore the unit.
We can sell it as a whole periodic maintenance thing - give the service guy a can of air-duster and a roll of paper towel to wipe the thing off.
(I feel dirty even writing this because I'm sure someone is taking notes and thinking these are great ideas)