This sounds suspiciously like rubbish to me, at least in the general case. There are too many videos that need distributing in a short time for this to be how the problem is generally solved. Think of a recently uploaded YouTube video, or the coverage of a sports event that's now available on your Catch Up TV provider (iPlayer, NFL Now, etc.).
As a way of seeding the contents of a new CDN node when it's built, I can believe this. As a way of continuously synchronising an existing node, I have more trouble believing it.
Yes, I agree with the scepticism. Although if you were trying to reduce access time for rarely accessed data and had sufficient volume, it could be worth doing.
“Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”
– Andrew Tanenbaum
My own CS professor used to use a similar analogy of a donkey carrying hard drives on its way to New Zealand. I think most of us probably had a CS professor who used similar analogies, since they quickly distinguish between latency and bandwidth.
Tanenbaum may well have not been using an analogy. He might have been talking from personal experience.
I have a co-worker whose father worked in the Bay Area during the '70s and '80s. Back then computer time was expensive and not many places could afford to install high-quality hardware (basically, the '70s mainframe is the 2010s data center - there are relatively few). So people who wanted work done would have to ship their tapes over to the computer and have their jobs run. My co-worker's father was one of these couriers, who would run the tapes around in the back of his station wagon.
My grandma tells a story about when her husband and her brother were running an experiment in grad school that needed computer analysis in the middle to tweak the final run. The problem was that their lab was on one side of Oakland, and the only computer they had access to was a mainframe on the other side of the city. And they only had the lab for one night.
So that night they had Grandma sit outside the lab in her car. As soon as their first run was done, they ran the punch cards with the results out to her. She drove across the city and started the computer run while they reset the experiment. As soon as the run finished, she drove the new cards back to them.
They ran their second experiment, got their results, and got the best grade in their class.
The way Grandma tells it is that she was driving 90mph across the city in her brother's Corvette at 3am. I'm not sure how true that part is...
I had to do this once. I went on a trip to Africa with an organization where I shot tons of photos and HD video. In the end, I had over 100GB total, and they needed my source footage to edit videos and such. I lived in Florida, and the editor lived in Georgia. We tried Dropbox, but that was just going to take weeks.
Our plan? He went to Best Buy, got a USB hard drive & mailed it to me. I loaded it up with footage and mailed it back to him. Once he got it, I think he was even able to return the hard drive and get his money back. The only expense was postage, and we saved a lot of time.
I work in the ediscovery industry; we very regularly need to send large collections, which can easily be a TB+, to another location on short notice. One of our favorite things to do with new folks is have them hop on a plane with a hard drive, give it to someone at the airport, and get right back on the same plane.
Imagine someone who travels around the world with a suitcase of SSDs to deliver data... possibly for criminals, big corporations, or governments. Johnny Mnemonic meets Transporter except plausible.
The American company I work for once decided that the shared folders of the German team should be located on an American server, so they flew the hard drives to the USA over a weekend. Then the Germans complained about the time it took them to open their Excel files, so the company had to fly the hard drives back to Germany. Anecdotal, but a good example IMHO.
I am not so sure it is physics so much as accidents of economics, business, and engineering. Companies have not been enthusiastic about running optic fibre to where people live. Also, a single TCP stream struggles to sustain really fast connections once you're up at a gigabit or so. There is a company with a patented UDP-based protocol that companies like Disney use to move big video files around.
I believe you're talking about Aspera, recently bought by IBM. It's pretty fast and reliable, and I've never had a problem with it even over residential Internet speeds (many master files for delivery are well over 100GB). Bitmax is the big CDN that uses it for media delivery.
There's a free alternative I have to talk up any time Aspera gets mentioned, and that's GridFTP by way of Globus Connect.[1] It's pitched as a way to move massive quantities of research data around, but it's free for personal/academic use and dead simple to set up.
There's a standalone command line client too [2], but it requires a bit more knowledge of how to tweak the settings, while Aspera and the Globus client do a lot of auto-ranging stuff.
In any case, both of them will easily get you line speed from your slowest node.
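The single-stream TCP limit mentioned upthread comes down to the bandwidth-delay product: one stream can't move more than one window of data per round trip. A minimal back-of-the-envelope sketch, using illustrative assumptions (a classic 64 KiB window without window scaling, a 100 ms RTT), not measurements:

```python
# Max single-stream TCP throughput is bounded by window_size / RTT
# (the bandwidth-delay product). Numbers below are illustrative assumptions.

def max_tcp_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on throughput for one TCP stream, in bits per second."""
    return window_bytes * 8 / rtt_seconds

# Classic TCP without window scaling: 64 KiB window, 100 ms RTT
classic = max_tcp_throughput_bps(64 * 1024, 0.100)
print(f"64 KiB window, 100 ms RTT: {classic / 1e6:.1f} Mbit/s")  # ~5.2 Mbit/s

# Window needed to fill a 1 Gbit/s pipe at 100 ms RTT
needed_bytes = 1e9 / 8 * 0.100
print(f"Window needed for 1 Gbit/s at 100 ms RTT: {needed_bytes / 2**20:.1f} MiB")
```

That ~12 MiB window requirement is why high-latency gigabit links need window scaling, many parallel streams, or a UDP-based protocol like the ones above.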
As fibre optics develop, so do storage device densities, though. You can now get a 3.5" drive with 8TB, which would take nearly 18 hours to transmit at 1Gbps, even assuming perfect bandwidth utilization.
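The arithmetic is easy to check, and it's also the classic "station wagon bandwidth" calculation. A sketch using assumed figures (an 8 TB drive, a 1 Gbit/s link, 24-hour shipping), not measurements:

```python
# Compare transmitting a drive's contents over a link vs. shipping the drive.
# All figures are assumptions for illustration.

TB = 10**12  # terabyte in bytes

def transfer_hours(size_bytes: float, link_bps: float) -> float:
    """Hours to push size_bytes over a link at link_bps, with perfect utilization."""
    return size_bytes * 8 / link_bps / 3600

def sneakernet_bps(size_bytes: float, transit_hours: float) -> float:
    """Effective bandwidth of physically shipping the drive."""
    return size_bytes * 8 / (transit_hours * 3600)

print(f"8 TB over 1 Gbit/s: {transfer_hours(8 * TB, 1e9):.1f} h")           # ~17.8 h
print(f"8 TB shipped in 24 h: {sneakernet_bps(8 * TB, 24) / 1e6:.0f} Mbit/s")
```

So a single shipped drive roughly keeps pace with a dedicated gigabit link, and a box of them beats it handily; the latency is just measured in hours.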
Another thing I find interesting about the speed of light is that the Internet of today wouldn't be possible in the same form on possible settlements on the Moon, and not at all on Mars.
Connectivity to the Moon should be fine, though; pings on the order of 3-4 seconds happen regularly on Earth, too. From personal experience I can attest that WoW is perfectly playable with pings of up to 5 seconds.
Network connectivity between Earth and Mars is going to be a slow affair, though, and will require relay satellites for when the Sun is between the planets.
A worst-case round-trip time of about 45 minutes definitely won't be fun with TCP.
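That figure is easy to sanity-check from light-travel time. A rough calculation, using approximate textbook values for Earth-Mars distance at closest approach and at superior conjunction:

```python
# Rough Earth-Mars round-trip light time at closest and farthest approach.
# Distances are approximate textbook values, not ephemeris data.

C_KM_S = 299_792.458    # speed of light, km/s
CLOSEST_KM = 54.6e6     # ~closest approach
FARTHEST_KM = 401e6     # ~superior conjunction (Sun between the planets)

def rtt_minutes(distance_km: float) -> float:
    """Round-trip light time in minutes, ignoring relay hops."""
    return 2 * distance_km / C_KM_S / 60

print(f"Best-case RTT:  {rtt_minutes(CLOSEST_KM):.1f} min")   # ~6.1 min
print(f"Worst-case RTT: {rtt_minutes(FARTHEST_KM):.1f} min")  # ~44.6 min
```

And the worst case is exactly when you'd need the relay satellites, which add their own path length on top.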
I think people shouldn't get their panties in a bunch. Letters used to take months to travel from one end of a continent to the other for some 3,600 years, and people coped just fine. We won't have near-real-time chat, but the rest of the internet ecosystem will work as before: mail, forums, cat videos, and so on will work fine.
The engineering problem is already solved for probe communications anyway.
UUCP was designed for networks where many nodes were only online for a couple of hours each night and then over slow analog lines. Perhaps it has lessons to teach us.
Bandwidth to/from Mars wouldn't be too bad with enough power output. Latency is the big problem.
So I think there will be massive caches on both sides. Accessing any popular content will be pretty fast because it will already be cached. Live data will be delayed by up to half an hour depending on orbital position, but that's not terrible. Conversations would be the worst, with up to an hour delay round-trip.
Past Mars it would get harder. You'd need much more power to keep the bandwidth up, and the latency would get worse.