It has already been established in this question that tar cannot archive data piped to it on stdin.
How else can dd output be archived directly, ideally without using any compression? The point of doing everything in a single task is to avoid writing the dd output to the target disk twice (once as a raw file and once as an archive), and to avoid performing two separate tasks, which wastes time (the input must be read and written, and the output read, processed and written again) and can be impossible if the target drive is almost full.
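For illustration, a single-pass pipeline of the kind described might look like the sketch below. A plain file stands in for the hypothetical `/dev/sdX` so the example is safe to run, and `gzip -1` is one possible low-overhead stream processor (the question would prefer none at all):

```shell
# Sketch: archive a drive's contents in one pass, with no intermediate
# raw image on the target disk. A regular file stands in for /dev/sdX.
src=./disk.img
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null   # fake "drive"

# One read, one write: dd streams the device, gzip -1 compresses lightly.
dd if="$src" bs=1M 2>/dev/null | gzip -1 > backup.img.gz

# Restore is the mirror image: decompress straight onto the target.
gunzip -c backup.img.gz | dd of=restored.img bs=1M 2>/dev/null

cmp "$src" restored.img && echo "round-trip OK"
```

On a real drive the same shape applies with `if=/dev/sdX` and `of=/dev/sdX`, run as root.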
I'm planning to make multiple backups of drives, partitions and folders, and I'd like to benefit both from the convenience of having everything stored in a single file and from the speed of each backup / potential restore task.
… `dd if=/dev/sdXY | tar ...`, but why not `tar` directly: `tar cf foo.tar /dev/sdXY`? – muru Feb 18 '15 at 20:29
… stdout and has no proper entry in the filesystem, it cannot be archived directly by simply piping it to `tar`. Anyway my bad, I wrote partitions but I actually want this to work for whole drives, and that's the reason why I need to use `dd` first – kos Feb 18 '15 at 21:07
… `tar cf foo.tar /dev/sdX` instead. – muru Feb 18 '15 at 21:07
… `tar` just packs all the files together, while I need to keep the partition scheme as well – kos Feb 18 '15 at 21:27
`tar cf foo.tar /dev/sdX` makes an archive of the disk image. It does not pack files. It retains whatever structure the disk had, because that is all `tar` sees. – muru Feb 18 '15 at 21:30
… `tar` does not back up the MBR/GPT, since those sectors are outside of the filesystem's "scope"; therefore no partition table is backed up, and you'll need to recreate the partition scheme manually once you've made a backup with `tar`, while with `dd` there's no such need – kos Feb 19 '15 at 08:32
… `tar` does back up the MBR/GPT, right? So how would you restore from a backup? Let's say I `tar cf sda.tar /dev/sda` and move the tarball to `sdb`, then I swap the original `sda` disk for a newer one of the same capacity. This disk comes of course with just unallocated space. How would you restore starting from this point? – kos Feb 19 '15 at 08:46
`tar xf sda.tar -O >/dev/sdb`. – muru Feb 19 '15 at 08:52
… `tar cf sdb.tar /dev/sdb` just generates a 10.2 KB archive with a `dev` folder inside and a 0-length `sdb` binary file inside `dev`; this happens both with the drive mounted and unmounted, and both with `sudo` and without – kos Feb 19 '15 at 09:08
… `tar cf sdb1.tar /dev/sdb1` – kos Feb 19 '15 at 09:10
… `tar` could read from block devices. In that case, you'll probably have to forgo `tar` and use `gzip` or some other compression utility which doesn't care about file structure, like so: `dd if=/dev/sda | gzip -c > sda.gz` – muru Feb 19 '15 at 09:22
… `tar`, if for example I wanted to back up multiple drives in the same tarball, or if I wanted to add some file afterward before the compression – kos Feb 19 '15 at 09:41
… `tar` is that that way I can avoid long on-the-fly compression time, differently from using `gzip` and, even more, from using `7z` – kos Feb 19 '15 at 12:25
… `tar` – kos Feb 19 '15 at 12:27
… `--fast` for `gzip`). Maybe the penalty won't be too high. – muru Feb 19 '15 at 12:31
… `7z` or with some other high-compression utility – kos Feb 19 '15 at 12:41
… `7z`, and I really don't want to deal with this every time. Anyway I'm open to other ways of performing the same thing without having to write the file twice, which is a waste of time, or compress while archiving – kos Feb 19 '15 at 13:43
… `lzop`. It's much faster than `gzip` and should operate near the speed of I/O throughput on modern CPUs. – David Foerster Mar 02 '15 at 12:41
… `lzop`: I didn't test it in the appropriate environment, but so far it's the fastest method I've tried, using compression of course. I'd rather find a solution which doesn't involve compression, but it seems like there's no other way to accomplish this, so most likely I'll be forced to use something involving compression. I feel like I should award the bounty to you, since it's the fastest method proposed so far, so do you want to write an answer for that? – kos Mar 08 '15 at 20:15
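The lzop suggestion from the comments can be sketched the same way. This is an illustration, not a tested recipe: `lzop` may need to be installed separately, and a plain file again stands in for the hypothetical `/dev/sdX`:

```shell
# Sketch of the lzop variant suggested above: lzop trades compression
# ratio for speed, so the pipeline can run near raw I/O throughput.
# Skip gracefully if lzop is not installed on this system.
command -v lzop >/dev/null 2>&1 || { echo "lzop not installed"; exit 0; }

dd if=/dev/urandom of=drive.img bs=1M count=4 2>/dev/null   # fake "drive"

# One-pass backup: stream the device image through lzop to stdout.
dd if=drive.img bs=1M 2>/dev/null | lzop -c > drive.img.lzo

# Restore: decompress straight back onto the target.
lzop -dc drive.img.lzo | dd of=drive.restored bs=1M 2>/dev/null

cmp drive.img drive.restored && echo "lzop round-trip OK"
```

For several drives in one file, each `dd | lzop` stream would still have to go into its own output file (or be concatenated by hand), since this approach gives up tar's multi-member container.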