Tar Pipe

AKA - The Fastest Way to Copy Files.

When you don't want to copy a whole file system, many admins suggest the fastest way is a 'tar pipe'.

Locally

From one disk to another on the same system. This uses pv to buffer the stream and report progress.

(cd /src; tar cpf - .) | pv -trab -B 500M | (cd /dst; tar xpf -)
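If pv isn't installed, the same pattern works with tar alone. Here is a self-contained sketch of the pipe using throwaway temporary directories (the paths and file names are illustrative, not from the command above):

```shell
# Throwaway source and destination directories for the sketch.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
mkdir -p "$src/sub"; echo "nested" > "$src/sub/deep.txt"

# c = create, p = preserve permissions, f - = archive to stdout;
# the receiving tar reads the archive from stdin and extracts in place.
(cd "$src"; tar cpf - .) | (cd "$dst"; tar xpf -)

cat "$dst/sub/deep.txt"   # nested
```

The two subshells matter: each cd only affects its side of the pipe, so relative paths inside the archive resolve against the right directory on each end.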

Across the network

NetCat

You can add netcat to the mix (as long as you don’t need encryption) to get it across the network.

On the receiver:

(change to the directory you want to receive the files or directories in)

nc -l -p 8989 | tar -xpzf -

On the sender:

(change to the directory that has the file or directory - like ‘pics’ - in it)

tar -czf - pics | nc some.server 8989
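The -z in these commands gzips the stream, trading CPU for bandwidth; whether that wins depends on the data and the link. A quick local sketch of the size difference on compressible data (the paths here are throwaway examples):

```shell
d=$(mktemp -d)
# Make a highly compressible file: 100 KB of the letter 'a'.
head -c 100000 /dev/zero | tr '\0' 'a' > "$d/data.txt"

# Byte counts of the plain vs gzipped archive of the same file.
plain=$(tar -cf - -C "$d" data.txt | wc -c)
gzipped=$(tar -czf - -C "$d" data.txt | wc -c)
echo "plain=$plain gzipped=$gzipped"
```

For already-compressed data (photos, video), -z mostly just burns CPU, so drop it on both ends in that case.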

mbuffer

This takes the place of pv and nc and is somewhat faster.

On the receiving side

    mbuffer -4 -I 9090 | tar -xf -

On the sending side

    sudo tar -cf - plexmediaserver | mbuffer -m 1G -O SOME.IP:9090
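If mbuffer isn't installed, dd can stand in as a crude reblocking stage in the same spot in the pipeline. This is a local sketch with throwaway paths (it buffers far less than mbuffer's -m 1G and does no networking, but needs no extra package):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "media" > "$src/library.db"

# dd with a large block size reblocks the stream between the two tars,
# smoothing out small reads/writes; status=none silences its summary.
(cd "$src"; tar -cf - .) | dd bs=1M status=none | (cd "$dst"; tar -xf -)
```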

SSH

You can use ssh when netcat isn't appropriate, or when you want to automate with an SSH key and limited interaction with the other side. This example 'pulls' from a remote server.

 (ssh [email protected] tar -czf - /srv/http/someSite) | (tar -xzf -)
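The pull pattern is just a remote tar writing to stdout and a local tar reading stdin; substituting sh -c for the ssh invocation makes the structure testable locally (a sketch with throwaway paths standing in for the remote host and site directory):

```shell
remote_dir=$(mktemp -d); local_dir=$(mktemp -d)
echo "site" > "$remote_dir/index.html"

# sh -c stands in for 'ssh user@host' here; over a real connection the
# quoted command runs remotely and its stdout crosses the SSH channel.
(sh -c "tar -czf - -C '$remote_dir' .") | (cd "$local_dir"; tar -xzf -)
```

Note that tarring an absolute path like /srv/http/someSite recreates the srv/http/someSite hierarchy under the destination, since GNU tar strips only the leading slash from member names.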

NFS

If you already have an NFS server on one of the systems, though, it's basically just as fast. In informal testing it behaves more steadily, without the tar pipe's higher peaks and lower troughs. A simple cp -a will suffice, though for lots of little files a tar pipe may still be faster.
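For completeness, cp's -a flag preserves mode, timestamps, and symlinks much like tar's -p, which is what makes it a fair substitute over an NFS mount. A minimal local sketch with throwaway paths:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/photos"; echo "img" > "$src/photos/a.jpg"
chmod 600 "$src/photos/a.jpg"

# -a = archive mode: recursive copy preserving permissions, timestamps,
# symlinks, and (where possible) ownership. The trailing /. copies the
# directory's contents rather than the directory itself.
cp -a "$src/." "$dst/"
```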

rsync

rsync is generally best if you expect the transfer to be interrupted or need to resume it. In my testing, rsync achieved about 15% less throughput than a tar pipe, with about 10% more processor overhead.

http://serverfault.com/questions/43014/copying-a-large-directory-tree-locally-cp-or-rsync
http://unix.stackexchange.com/questions/66647/faster-alternative-to-cp-a
http://serverfault.com/questions/18125/how-to-copy-a-large-number-of-files-quickly-between-two-servers


Last modified February 18, 2025: Site restructure (2b4b418)