I want the copy functionality to tell me about conflicts right away, rather than when it starts copying that file. I also want it to continue the non-conflicting files while I'm deciding.
I cannot tell from the video whether Windows 8 does this or not.
>I also want it to continue the non-conflicting files while I'm deciding.
That's the big point. The most serious usability flaw is when doing a large copy, and every single file with a permissions issue requires user intervention to make the copy continue. Keep a running list of problem files, and let me peruse them at my leisure - keep copying the rest.
This is the OS that is supposed to carry MS until at least 2015. If they are going to unveil the cool stuff during BUILD, why not just wait a couple of weeks?
- The examples seem dated, like people copying pictures over USB drives in the era of facebook and dropbox. He runs the example on a netbook, which I guess means after his sister leaves he needs to get another USB drive and copy those files again to his PC.
- Why did he copy files, edit them, copy them back and then need to figure out what version he wanted to use? Now he's got two copies of the same filename in different locations and they may get out of sync.
- Why in the case of conflict, does a number have to be added to the file if you want to keep both versions? Why not have some type of versioning and conflict resolution built into the filesystem? Look at a user's computer, directories are littered with filenames like "copy of Myfile (2) 2011_may.doc".
- Where is the touch interface? Those buttons are small and all the dialogs assume a mouse.
I do like the blog and I like that they introduce different members of the Windows 8 team, but they need to focus on more compelling features.
Yes, people copy pictures over USB drives in the era of facebook and dropbox. Get out of the Silicon Valley bubble and realize that most of the world still doesn't have good enough internet.
Lion does not let you skip files that you don't want copied, if they already exist. Which is a ridiculous choice, and one that is very hard to work around.
One copy tool that's not well known is RichCopy from Microsoft. It's a great copy util when dealing with huge numbers of files. It supports parallel copies, resuming failed transfers, and copying only changed files.
RichCopy is always one of the first tools I install on any of my machines. It's an amazingly powerful tool. About the only thing missing with RichCopy is integration with Explorer Shell.
But with all the problems of rsync and cygwin on Windows, Robocopy is the more logical choice for such tasks. I'm not going to load up explorer.exe in WINE on Linux so I can move some files around with the mouse.
Rsync in Cygwin also (severely) messes up your files' and folders' NTFS permissions when syncing against a Linux host, which can lead to things like the meltdown of a complete IIS website/webserver setup, since the IIS worker process can no longer access the files it needs.
I've learnt the hard way that you don't use rsync in Cygwin for anything remotely crucial ;)
Am I the only one who still uses Total Commander instead of Windows Explorer whenever possible?
I'd love to see this multiple-jobs-in-movecopy-queue functionality in it, but in all other situations it suits me so much better.
I think I would not be able to use Windows without the Total Commander. I used the text-based Norton Commander in the eighties. Now I use Total Commander, including command line, Ctrl-E to get the previous line, Ctrl-Enter to get the file I'm on in the command line etc.
I think the histogram solves the usability issue where a network transfer hangs but the "transfer speed" only drops off slowly and linearly (because it is calculated as a total average). Previously, the only way to verify the transfer was still alive was to check whether the total transferred percentage/value was changing at all.
It wouldn't solve it, it would just reduce it. It would still take 15 seconds to be able to identify that your connection dropped if you judged based off of that figure alone.
If your throughput suddenly dropped to 0 and you were watching a 15-second rolling average of that throughput speed, you'd within a very few (far less than 15) seconds know that your transfer speed had cratered.
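The difference between a lifetime average and a rolling window is easy to demonstrate in code. A toy Python sketch (class and names are mine, not anything from the article), assuming we sample the bytes transferred once per second:

```python
from collections import deque

class RollingThroughput:
    """Track transfer speed as a rolling average over a fixed window.

    Feed in (timestamp, bytes_since_last_sample) pairs; the average
    only considers samples within `window` seconds, so a stall shows
    up within a few seconds instead of being diluted by the whole
    transfer history.
    """
    def __init__(self, window=15.0):
        self.window = window
        self.samples = deque()  # (timestamp, nbytes) pairs

    def add(self, t, nbytes):
        self.samples.append((t, nbytes))
        # Drop samples that have fallen out of the window.
        while self.samples and self.samples[0][0] < t - self.window:
            self.samples.popleft()

    def rate(self):
        if len(self.samples) < 2:
            return 0.0
        span = self.samples[-1][0] - self.samples[0][0]
        total = sum(n for _, n in self.samples)
        return total / span if span > 0 else 0.0
```

With a 15-second window, a stall starts dragging the displayed rate down with the very next sample, rather than decaying slowly the way a whole-transfer average does.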
It's interesting to note the differences between the Windows 8 dialog and the GNOME one.
You can minimise the Windows 8 dialog.
On Windows 8, the dialog shows both where the files are coming from and where they're going to.
By default, Windows hides most information in favour of a percentage. Click "More details" and Windows shows you much more information than GNOME does; Windows puts the emphasis on power-user information.
The Windows dialog's layout is much better. The GNOME dialog's buttons stick out. GNOME's dialogs are really ugly and this is no exception.
It really shows the difference in polish between the two products.
My criticisms of Windows' dialog are that 'flat buttons' are aesthetically appealing but not particularly user-friendly, and that perhaps Windows should show file size and progress by default.
IIRC, if you "closed" the gnome one, it went to the systray, which is a UI problem all its own.
Also, you can pause the win8 operations, not just cancel them.
As for ugliness, this seems to be a shot of Hardy, give or take, so 2008-ish. I would say that the Ambiance theme that debuted to much controversy in 2010 is improved over the screenshot. (And neither of these are Gnome's default.)
Well, in Windows 7 (on my PC), about 1 time in 3 the window freezes when clicking Cancel during a copy or move operation; my reasons to doubt come from there.
And I'm talking about a Core 2 Duo machine with 4 GB of RAM.
Reading this article makes me notice the difference between Windows and OS X/*nix. Windows is allowing the user to pause and schedule their copy operations, whereas OS X and *nix just provide a copy window with the ability to cancel the operation.
Even when I want to delete something in Win7, I get the message "Discovering items...". In that time, my *nix box has already deleted it and moved onto the next task.
I wish they wouldn't spend so many expensive developer hours to make an efficient pausing mechanism on your copy and move operations when instead they could focus on why the copy and move operations are taking so long in the first place.
That's something that has been annoying me since the first NT... Today it annoys less because my contact with Windows machines is much smaller, but it still bites me every once and then.
Would it tell me beforehand that not enough space is available instead of failing midway? Also, I can see a use case for this pause item here: not enough space? we paused the transfer so that you can free up space and resume, or cancel at will.
The "use pause to prioritise the thing I'm not interacting with" part is egregious at best. Let me drag/drop the activities to prioritise them: "I want THAT one to complete faster, so I move it to the top" seems much more pertinent UX.
Besides, having ten parallel copies is a sure performance killer on a single disk. Certainly having a queue with a bunch of worker slots per (read+write) device could be a nice design. A new copy task from disk A to disk B could thus be delayed because there are already a network transfer to disk A and a copy from USB key to disk A in progress. Number of per disk workers could be dynamically adjusted according to IO wait times on a given disk, so that a non-copy task (like starting an application or opening a file) would get priority. Raising/lowering priorities would automatically start/pause transfers according to available workers.
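A toy sketch of that per-device worker idea, in Python (the class, method names, and device labels are all invented for illustration; dynamic adjustment of worker counts based on IO wait is left out):

```python
import queue
import threading

class PerDeviceCopyScheduler:
    """Queue copy jobs per device, with a fixed worker pool each.

    Two copies targeting the same disk run one after another instead
    of thrashing the heads; copies to different devices proceed in
    parallel. Real code would derive device labels from mount points
    or volume IDs rather than taking arbitrary strings.
    """
    def __init__(self, workers_per_device=1):
        self.workers_per_device = workers_per_device
        self.queues = {}
        self.threads = []

    def _worker(self, q):
        while True:
            job = q.get()
            if job is None:      # sentinel: shut this worker down
                return
            job()                # run the copy task
            q.task_done()

    def submit(self, device, job):
        # Lazily create the queue and workers for a new device.
        if device not in self.queues:
            q = queue.Queue()
            self.queues[device] = q
            for _ in range(self.workers_per_device):
                t = threading.Thread(target=self._worker, args=(q,), daemon=True)
                t.start()
                self.threads.append(t)
        self.queues[device].put(job)

    def wait(self):
        for q in self.queues.values():
            q.join()
```

With one worker per device, jobs submitted against the same device run strictly in FIFO order, which is the serialization behaviour the comment argues for.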
It can't know in advance if it has free space or not.
1) Other files might get deleted in the process freeing more space.
2) Other files might get copied to the same volume as well claiming space.
3) There may be transparent compression involved, meaning that you could copy more data than it seems, or that the original data would claim more space on the target volume.
I'm on a PC. I'm copying 30 GB total (a split-disk VMware VM image, a bunch of selected folders containing photos, whatever) to an external disk with only 10 GB left. The thing should tell me beforehand if I have enough space, not midway through the process.
Yes I could look it up beforehand, but why make me take three actions (look for free space on device, look for total space required, order to copy) when one would be sufficient? The computer should do the tedious tasks and let me decide.
The graphs of transfer rates are cool and all, but consider the average "mom" user - what are they going to do with that? Is a complete graph of transfer speed really necessary? It seems a bit of geekiness thrown in because it's "cool" rather than adding significant extra value over a simple text label with the current transfer speed.
Can't help but think the file transfer rate charts are development effort that could have been better spent elsewhere. I don't really need a graphical representation of the speed of my copy/paste from start to end.
The current copying paradigm is utter 70's. Here's a hint for you from the 2000's, you Microsoft or Linux or Gnome developers:
What I would like is that copies seem instant and transparent. Actually, I wouldn't like to think of copying at all. I "copy" (or merely just order) a file to (appear in) some place and it immediately becomes accessible from there. Or I decide to move it back a few seconds later and I can do that.
In reality the computer should manage the low-level copying internally and consider it more like syncing.
I shouldn't have to care about how long something takes or how the kernel will queue the actual copy operations, or whether it copies anything at all but just creates a copy-on-write mapping to the same data. I also shouldn't have to wait for copying: I should be able to just copy something and immediately turn off the computer. The operating system would continue copying the blocks, if necessary, when it's restarted later. All copy operations ought to be logically atomic as well: either a copy is immediately made or it isn't, the physical details of copying and remapping the corresponding blocks shouldn't be any of my concern.
So, an immediate, transaction-based and logically coherent file operations with automatic kernel-level delayed-I/O backend is what I would call the 2000's.
I hope that there already is at least one research O/S somewhere that would do this.
So basically, you'd "copy" a file to an external hard drive, unplug it, and then have a surprise when you notice that the file isn't actually there when you get to another computer?
Cross-device copying is the only place where this operation is slow; with modern HDDs, all other operations are instantaneous for all intents and purposes.
Good idea, but you really should have put more thought into it.
Why let the exception case (removing an external drive quickly) ruin the main use case?
Going forward it seems reasonable to assume ubiquitous network connectivity, so you can simply treat external drives as a cache and show a non-modal alert ("please reconnect for sync, expected to finish within X minutes") iff the amount of data to sync is large enough to matter over a 3G connection.
Because it's not really the exception. I rarely copy or move files except when I want to transfer them to a USB key. Otherwise, they mostly stay where I created or initially downloaded them.
Describing an end-user experience doesn't need to account for all the details in the first sketch. But pending sync is one of the easiest ones to handle.
You obviously unmount or decouple your external drive in software first and that's a good place for the operating system to inform you that:
Ongoing I/O for your USB mass storage device.
Wait for completion before unplugging.
And if there are many parallel I/O streams going on and this dialog is invoked, automatically prioritize anything that's being copied to the external hard drive first so that the user can unplug it as soon as possible.
I used to just pull it, but a half-dozen missing and corrupted files (one of which was a final project for a class in college) trained me out of that habit.
Also, besides copying files to removable drives, etc, the primary reason I copy a file is because I want two separate copies. Otherwise I would move it.
Why stop there? Why not just stop worrying about files and folders, and just think about documents, pictures and videos instead? Why should I have to care about the low-level filesystem at all?
Which was a nice experiment which proved that it didn't work. You know what you can't do on iOS which most users want to be able to do?
Yeah. Work with the same document from multiple viewpoints, or applications. You can't work with documents across applications on iOS because you supposedly don't have files.
Like I said: A nice experiment, but it proves that "no files" only works for the simplest of use-cases and limits the ability to work with data beyond the limits that people reasonably expect to find.
Because that paradigm is currently restricted to rich people in first-world countries where always-on fast internet is ubiquitous and you can sync everything to the cloud without much trouble. The rest of us will continue using not-very-high-capacity USB sticks to move photos, videos and other data around for the foreseeable future.
I was really impressed with Office 2007 when it first came out. After watching the video that demonstrated how it was developed, incorporating extensive user feedback etc., I was even more impressed. I expected the same methodologies would be applied to Vista and they weren't, so I was somewhat disappointed, but Windows 7 turned out pretty good.
That said, Office 2010 was over-engineered in many ways. I just hope they don't over-engineer Windows 8.
Every day I have to work on both Vista and Windows 7 machines. The W7 user experience is much better. One of the reasons it's much better is that Explorer is much better. The user experience with Vista Explorer truly sucks and that suckiness always seems to be in my face. My point being that Explorer matters and any improvements to it with W8 would be a large positive.
I notice that file extensions are visible in some of the screenshots. Does that mean they're getting rid of that dumb (IMO) policy of hiding the extensions from users by default? The result was lots of naive users clicking on nekkid-pitcher-of-hot-celeb.jpg.exe because they couldn't see the .exe part, in exchange for...what, exactly?
And they punished those of us who make them visible again, by having the "edit filename" function mix the extension with the filename, even though wanting to change the extension is about 100 times more rare than wanting to change the characters right before the extension.
I've seen a nice version of this in linux somewhere - when you push F2 two boxes come up and the cursor is positioned in the one to the left of the dot but you can move across to it easily.
EDIT: Ah, this is available in windows 7 too. Nice, finally.
In Windows 7 (maybe Vista too), the "edit filename" function lets you edit the extension, but has only the filename selected. Thus Edit -> start typing -> Enter changes the filename, leaving the extension intact.
Are multiple large copy jobs common? I can't recall ever doing large copies of two distinct sets of files simultaneously, which makes me wonder when I'd ever use this dialog box. Would be curious to know what their telemetry data shows.
Another thing I think is interesting: the graph will never be intuitive because they've compounded the progress bar with the actual plotting. Fast operations will take up a larger share of the graph, because at high speed more progress is completed per unit of time; conversely, slow periods will look disproportionately small compared to the actual share of time the slow stretch took.
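The distortion is easy to quantify with made-up numbers: say a transfer runs 5 seconds at 10 MB/s, then 20 seconds at 1 MB/s.

```python
# Toy numbers for one transfer with a fast phase and a slow phase.
fast_seconds, fast_speed = 5, 10.0    # 5 s at 10 MB/s -> 50 MB copied
slow_seconds, slow_speed = 20, 1.0    # 20 s at 1 MB/s -> 20 MB copied

fast_mb = fast_seconds * fast_speed
slow_mb = slow_seconds * slow_speed

# Share of the x-axis each phase occupies when plotting against time
# versus against progress (bytes copied), as the Windows 8 graph does.
fast_share_of_time = fast_seconds / (fast_seconds + slow_seconds)   # 0.20
fast_share_of_progress = fast_mb / (fast_mb + slow_mb)              # ~0.71
```

The fast phase took 20% of the wall-clock time but fills roughly 71% of a progress-based x-axis, which is exactly the disproportion described above.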
It really bugged me when he hit 'more details', then had to drag the window back up to the center of the screen, because the details didn't fit in view.
I use Total Commander for these things on Windows; it already has queuing, and the replace dialog has more features (http://flint-inc.ru/Temp/over_dlg_3.png), but this looks cool too and I like the histograms :)
While I like the improvements, I am a little disappointed that they are not implementing parallel/multi-threaded copying. It might not be something that a majority of users would need, but it feels like it should be something that should be natively supported by the OS.
What do you mean? It showed two batches being copied in parallel. If anything, they should do less in parallel and rather queue up jobs between same disks.
Yes, you can have multiple copy operations running in parallel, but the way Windows works, the files themselves are copied in serial order (within an individual copy operation). Whereas with tools like RichCopy, you are actually copying multiple files in parallel.
I'd be interested in seeing benchmarks that demonstrate an improvement when doing this. I'd expect that the average case is slower due to increased disk contention.
I'm irritated because it seems the only attention given to Win7 is critical/security fixes, while any non-critical bugs will be ignored and the answer will be to simply buy the next version of Windows if you want the problem to go away. IMO a lack of attention to detail and caring about your users.
Ugh. So, instead of tackling something like auto-queuing of copy operations to prevent disk thrashing, they did the 'hard' work of adding a pause button and some silly bling. Incidentally, if that conflict-resolution dialog doesn't provide mouse-over image enlargement/preview, I'm going to instantly hate it.
Yes, from their example ("pausing" one task speeds up the rest, and graphs show speed progress "hills and valleys") it can be concluded that they let copy tasks run in parallel instead of queuing the files. That's practically a guaranteed way to produce more fragmented disk layouts. Well done, current MSFT programmers.
You understand what's going on at MSFT when the blog post ends with a line like "All of this adds up to building a significantly improved copy experience, one that is unified, concise, and clear, and which puts you in control of your experience." A clear example of managerspeak, adjusted to sound good among other managers, not to mean something. You can see where the battles for "bling" come from.
Disk fragmentation has not mattered for the last few years. Most modern hard drives are not storing data in the exact way that they are reporting to the software layer. Hardware manufacturers use all sorts of tricks to make their drives faster, many of these 'break' the original specifications for how these devices are meant to work.
Still, if you write a piece of A, a piece of B, a piece of C, then another piece of A and so on (A B C A B C), it's practically certain you won't end up with AAA BBB CCC on the disk, whereas you can expect most files to land as AAA BBB CCC if you write from a queue.
In the A B C A B C case you need a disk seek when reading back the pieces of each file; in the AAA case you don't. As long as there are mechanical hard disks (and they still have big advantages for a lot of uses) such things matter a lot. You can seek only about 100 times per second! Just to compare: during the time spent waiting on one seek, a 2 GHz machine can do at least 20 million calculations...
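A back-of-envelope model makes the cost concrete (all figures are rough, invented numbers for a typical 7200 rpm disk):

```python
SEEK_TIME = 0.010          # ~10 ms per seek, i.e. ~100 seeks/second
CHUNK = 1024 * 1024        # 1 MiB written per scheduling slice
THROUGHPUT = 100e6         # ~100 MB/s sequential transfer rate

def copy_time(total_bytes, interleaved):
    """Seconds to write total_bytes, with or without a seek per chunk."""
    transfer = total_bytes / THROUGHPUT
    seeks = (total_bytes / CHUNK) * SEEK_TIME if interleaved else 0.0
    return transfer + seeks

# Two 1 GB files, copied one after the other vs. interleaved chunk by chunk:
queued = 2 * copy_time(1024**3, interleaved=False)
thrashing = 2 * copy_time(1024**3, interleaved=True)
```

Under these assumptions the interleaved copy takes nearly twice as long as the queued one, purely from seek overhead.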
Out of curiosity: what operating system does automatically queue separately initiated copy operations? Wouldn't it be quite confusing if some copy operation just wouldn't start at all before some other - possibly very time consuming - operation was concluded?
As another poster mentioned, Mac OS does this. It solves the problem you mentioned by combining the file copy progress bars under a single window. Visually, it looks like a queue, with the top progressing fastest, and the ones below progressing very slowly or not at all.
Linux. It doesn't queue individual files, but it does schedule disk blocks to minimize seeks and maximize throughput. Look up "elevator algorithm" and "Linux IO scheduler" for more detail.
I'd be surprised if it didn't. Then again, I was rather surprised that Windows XP's throughput dropped massively when I started copying more than one file at once, so it seems that it didn't do a good job of it.
This is on such a different level as to be irrelevant. The I/O scheduler will still try to accommodate requests within timeframes of seconds, at most. When copying large files to/from two different areas on a spinning disk simultaneously, this means significant time will be spent flying the heads back and forth between them, regardless of the elevator. So queueing two 8GB movie files to go one after another will be significantly faster than copying them simultaneously.
I'm 90% sure that one of the changes made to the way Lion copies files was this exactly - it now queues operations for maximum throughput. I do wish I could find a link that said this - I apologize for that.
Everyone is assuming that they are doing no improvements in the back-end. This was a discussion about the front-end of copy operations. I like the new interface, and while the change does not seem whoppingly big, I think it will make copy-pasta more tasty.
Just when I thought I could hate on microsoft, they come out with a nice improvement.
HOWEVER, what I'd really like to see is a tool which shows you at boot time which boot ops are not behaving well, and gives a very easy and responsive interface for murdering those ops. If it's a video driver, fall back to a default one that can at least show me a web browser to troubleshoot with, with giant warnings that your video driver is dead. If they can fix this whole startup-takes-god-knows-how-long-because-of-one-bad-application mess, Windows will be quite awesome.
And then they need to help developers by getting the POSIX tools ported to Windows in the most meaningful way, out of the box, with no special install. Including replacing CMD with Bash and changing their FS to support a good structure like Linux's (C:, D: can still exist, but make a Sys: which contains things like proc and friends).
> That's practically a guaranteed way to produce more fragmented disk layouts
Not necessarily. You can preallocate space at the destination when the copy starts. File copies are one of those cases when you know up front what the file size will be.
IIRC, there are ways to do this in most file systems since the early 90's. I remember HPFS in OS/2 could do that.
It's more complicated than that. Specifically, on NTFS there's that thing called the MFT, which stores file metadata and even whole files if they're small. You can even get the MFT fragmented if you write files from folders A, B, C, D as A1 B1 C1 D1 instead of A1 A2 A3... If you copy, for example, 3 folders with a lot of files (regardless of file size) without a queue, you've made the disk read roughly three times more data, skipping around, the next time you do a simple dir of one folder (before it's in the in-memory file cache, of course). Queues are an important thing; ignoring them is still bad. Even on an SSD you've reduced the throughput by not caring.
Programmers should try to solve problems and not try to achieve lofty goals set by management like "improve the copy experience".
When you try to achieve goals set by management you compile all the problems together into one package, nicely designed or not, without actually solving any of the root concerns.
How you really bring results as a programmer is when you divide the issues into the smallest reasonable parts and then find the best way forward for each one of them.
And then later bringing all of the separate lines of work together into one whole symphony of code and experience, so to speak.
That's practically a guaranteed way to produce more fragmented disk layouts. Well done, current MSFT programmers.
Is disk fragmentation a serious problem for you? I use SSDs for most of my drives, and let the Microsoft defragger do its thing once in a while, and leave it at that. Doesn't seem to bother me none.
Nope, sorry-- I wrote too quickly. What I meant to say was that I use an SSD on most drives, and run the Microsoft defragger periodically on the remainder.
In the case of Linux (more specifically ext4), you get some benefit from having the blocks of a given file contiguous, as they will end up being one single extent.
"The key benefit to SSDs is that they have virtually no seek time. Reading adjacent blocks of data is no faster than reading blocks that are spread out over the drive. Fragmentation does not affect SSD drive speed.
(...) SSD drives physically wear out as you write to them. Defragmentation software moves around all the files on your drive. Thus, defragmenting an SSD reduces its life span without giving you any benefits."
I had an old netbook with an SSD. A lot of folk noticed that the performance of the disk got slower over time. Surprisingly, defragging the drive actually improved the performance -- but IIRC it was a side effect of the process, not directly anything to do with fragmentation.
Ahh, do you know why? TRIM support isn't being used.
When you're defragmenting, all you're doing is forcing the drive to write a lot, which itself essentially is a crazy way to fix it without TRIM command, but it works. On the other hand, you're doing a lot more write cycles than what TRIM or garbage collection would normally do.
SSDs don't benefit from data being sequentially aligned. Random access is just as fast. Also, you should minimize the write access to the disk whenever possible.
I used to use TeraCopy, but on my Windows 7 64 bit machine, it's half the speed of the built-in copy for network transfers. It's fairly clear that Windows is doing a significantly better job with buffering.
I still keep TC around for the odd job, but it's no longer my default.
Auto-queueing is not a no-brainer. While it is easy to assess which operations will contend with each other on local drives, as soon as you start interacting with the network it gets far more fuzzy, as the storage arrangement is abstracted away from where the local OS can detect it.
Even for local operations, automatically queueing could be sub-optimal. If you're copying small files from two spinning-disk-and-moving-heads drives to an SSD, basic queuing algorithms would perform one copy after the other, but the SSD is probably more than capable of keeping up with both at the same time. So you are going to need a UI control to override the default queuing.
BTW: If you need auto-queuing and other tweaks in this area for current Windows variants, I've been using the free (not Free) version of http://en.wikipedia.org/wiki/Teracopy for some time and have found it to be useful and reliable.
One thing I would love to see in a conflict resolution dialog is the option to view a diff of text files and similar, like Debian offers to help resolve file conflicts when applying package updates, not just previews of graphics files.
I'm very excited for Windows 8 and these posts are a big help in that (as well as what I perceive to be a great opportunity for the Windows platform across phones, tablets and desktop).
Maybe Microsoft will actually do this correctly instead of implementing it like Nautilus, where the copies aren't queued and thus happily trash the disk (hell, even the pause feature is quite welcome).
Unfortunately TeraCopy appears to be abandonware, as it hasn't seen a new version for nearly a year, and no real attempt has been made to deal with its terrible network copy performance under Windows 7.