The Apple support page for copying a Time Machine backup disk doesn't cover the scenario where your new backup target disk is on the network. If you try to do it by hand using ditto or other tools, you will likely fail with inscrutable errors.
asr may work, but it failed for me after 1½ days and 500GB, possibly because of some kind of network disconnection. Relying on a network staying up for 3 days is to ignore the 8 fallacies of distributed computing, but if your TM backup is small enough this could work.
1. Use Disk Utility -> File -> New Image -> Blank Image… to create a new sparse bundle disk image on your network drive. The arrowed options must be set correctly (well, you don't have to use a sparse bundle, but it is allegedly designed specifically for efficient use across a network):
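If you would rather script this step, hdiutil can create the same kind of image from the command line. This is a sketch: the size, filesystem, volume name, and destination path below are assumptions to adjust for your own setup.

```shell
# Create a sparse bundle disk image on the mounted network share.
# The -size value is a ceiling, not an allocation: a sparse bundle
# only consumes space on the share as data is written into it.
hdiutil create -size 1t \
    -type SPARSEBUNDLE \
    -fs "HFS+J" \
    -volname "Time Machine Backups" \
    "/Volumes/NAS/TimeMachine.sparsebundle"
```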
2. Mount the new disk image by double-clicking it, and also attach your existing Time Machine backup drive. Then use the Apple menu -> About This Mac -> System Report… -> Hardware/Storage and look in the BSD Name column to find the device names on which your Source and Target volumes are mounted:
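The same information is available from the terminal via diskutil, which may be quicker than clicking through System Report. The volume name below is the one from my setup; yours will differ.

```shell
# List every attached disk with its BSD device name (/dev/diskN).
diskutil list

# Or inspect a single mounted volume to find the device it lives on.
diskutil info /Volumes/LaCie2019 | grep "Device Node"
```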
3. Turn off Time Machine backups. If there is no On/Off switch, untick “Back Up Automatically” in the Time Machine preferences.
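This can also be done from the terminal with tmutil, which is handy if you are already working in a shell (a sketch; both commands need root):

```shell
# Disable automatic Time Machine backups for the duration of the copy...
sudo tmutil disable

# ...and re-enable them once the migration is finished.
sudo tmutil enable
```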
4. Then use asr on the command line to copy the device that hosted the old backup volume to the device hosting the new one, using caffeinate at the same time to stop the computer sleeping instead of copying. In my case that was:
sudo caffeinate asr restore --source /dev/disk3 --target /dev/disk4s2 --erase --puppetstrings --allowfragmentedcatalog
I got this output, and after a few seconds had to type y to confirm:
XSTA start 512 client
Erase contents of /dev/disk4s2 (/Volumes/LaCie2019)? [ny]:
The --puppetstrings option means what most of us might call --progress, although the output is quite limited.
Expect a speed of about 4 days per terabyte. I don't know why it is so slow. Watching the Network tab in Activity Monitor, I could see that data was rarely moving faster than 5MB/s; even writing to a spinning disk across a 20-year-old 100Mbps network should go faster than that. I tried adding --buffers 10 --buffersize 100MB, but that still only got me to about 3 days per terabyte.
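As a sanity check on those numbers, here is the arithmetic behind the complaint, done in awk. The 5MB/s figure is the observed rate from Activity Monitor; 100Mbps of line rate is 12.5MB/s, so the copy is using well under half the wire.

```shell
# At 5 MB/s, one terabyte takes 1e12 / 5e6 = 200,000 seconds.
awk 'BEGIN {
    secs = 1e12 / (5 * 1e6)          # seconds per terabyte at 5 MB/s
    printf "1 TB at 5 MB/s takes %.1f days\n", secs / 86400
}'
```

A steady 5MB/s would finish a terabyte in about 2.3 days, so the observed 3 to 4 days per terabyte implies an average rate nearer 3MB/s.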