Fastest Way to Copy Files

tar pipe

When you don't want to copy a whole file system, the general consensus seems to be that a 'tar pipe' is the most efficient method.

From one disk to another on the same system:

(cd /src; tar cpf - .) | pv -trab -B 500M | (cd /dst; tar xpf -)
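Note that pv here is only a progress meter and throughput readout; it is not required for the copy itself. A minimal sketch without it (/src and /dst are example paths):

```shell
# Create the destination if it doesn't exist, then pipe a tar archive
# of /src straight into an extracting tar in /dst. The p flag preserves
# permissions on both ends.
mkdir -p /dst
(cd /src && tar cpf - .) | (cd /dst && tar xpf -)
```

Using && instead of ; means tar never runs in the wrong directory if the cd fails.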

tar pipe netcat

Across the network, you can add netcat to the mix (as long as you don't need encryption, since nc sends everything in plain text) and it's still fairly fast.

On the receiver: 
(change to the directory you want to receive the files or directories in)

nc -l 8989 | tar -xpf -

On the sender:
(change to the directory that has the file or directory - like 'pics' - in it)

tar -cf - pics | nc some.server 8989
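If you do need encryption, ssh can stand in for netcat in the same pipeline. A sketch, assuming 'user', 'some.server', and the destination path /dst are placeholders for your own values; expect somewhat lower throughput than raw nc because of the cipher overhead:

```shell
# Archive 'pics' locally and extract it on the remote side over an
# encrypted ssh channel; no listener needs to be started on the receiver.
tar -cf - pics | ssh user@some.server 'cd /dst && tar -xpf -'
```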

NFS

If you already have an NFS server on one of the systems, it's basically just as fast. In informal testing it behaves more steadily, without the tar pipe's higher peaks and deeper troughs. A simple cp -a will suffice, though for lots of little files a tar pipe may still be faster.
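A sketch of the NFS route, assuming a hypothetical server 'fileserver' exporting /export/data, with /mnt/data as the local mount point (mounting requires root):

```shell
# Mount the remote export, then copy with cp -a, which recurses and
# preserves permissions, ownership, and timestamps.
mount -t nfs fileserver:/export/data /mnt/data
cp -a /mnt/data/. /local/copy/
```

The trailing /. copies the directory's contents rather than nesting the directory itself inside the target.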

rsync

rsync is generally the best choice if you expect (or can't rule out) the transfer being interrupted, since it can resume where it left off. In my testing, rsync achieved about 15% less throughput than a tar pipe, with about 10% more processor overhead.



http://serverfault.com/questions/43014/copying-a-large-directory-tree-locally-cp-or-rsync
http://unix.stackexchange.com/questions/66647/faster-alternative-to-cp-a
http://serverfault.com/questions/18125/how-to-copy-a-large-number-of-files-quickly-between-two-servers
