
What I have

I have two servers; let's call them sen.der and recei.ver.

Sender generates files; these can range in size from 20 KB to 30 GB.

I've written a script that checks how big the file is once it has been generated. If it's smaller than 10 MB, it sends the file to recei.ver via SFTP; otherwise, it splits the file into 10 MB chunks and sends those to recei.ver via SFTP.
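For concreteness, the sender-side logic could be sketched roughly like this (the hostname, the batch-mode SFTP invocation, and the chunk-naming scheme are my assumptions, not the asker's actual script):

```shell
#!/bin/sh
# Sketch of the sender side: send small files whole, split larger
# ones into 10 MB chunks first. FILE and REMOTE are placeholders.
FILE="$1"
REMOTE="recei.ver"
LIMIT=$((10 * 1024 * 1024))        # 10 MB threshold

size=$(stat -c %s "$FILE")         # GNU stat; use `stat -f %z` on BSD

if [ "$size" -le "$LIMIT" ]; then
    echo "put $FILE" | sftp -b - "$REMOTE"
else
    # -b 10M: 10 MiB chunks; -d: numeric suffixes (file.00, file.01, ...)
    split -b 10M -d "$FILE" "$FILE."
    for part in "$FILE".*; do
        echo "put $part" | sftp -b - "$REMOTE"
    done
fi
```

Splitting this way guarantees every chunk except the last is exactly 10,485,760 bytes, which is what the receiver-side idea below relies on.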

The sending time is obviously determined by the file size and the line speed. That line speed may be as low as 100 Kbps, meaning it could theoretically take as much as 11 hours to transfer the biggest file.

What I Need

What I'm trying to do is get recei.ver to automatically cat the chunks back into one big file (and run a few extra errands: untar, send a notification email, etc.).

What I could do

I could try using inotifywait with the -m option, check the size of each file as it is written, and cat the lot once the most recently created file is less than 10,485,760 bytes.

I could then cat the chunks and test the resulting tarball with tar tf <filename>; echo $?
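The approach above could be sketched as follows (the watch directory and chunk-naming convention are assumptions; note the built-in brittleness that the answer below points out, since a file whose size is an exact multiple of 10 MB never produces a short final chunk):

```shell
#!/bin/sh
# Sketch of the inotifywait idea: watch for finished chunks and
# reassemble when a short (final) chunk appears. WATCH_DIR is assumed.
WATCH_DIR=/incoming
CHUNK=10485760                      # 10 MB = full chunk size

inotifywait -m -e close_write --format '%f' "$WATCH_DIR" |
while read -r name; do
    size=$(stat -c %s "$WATCH_DIR/$name")
    # A chunk smaller than 10,485,760 bytes signals the last part.
    # Brittle: fails if the original file size is an exact multiple
    # of the chunk size, or if chunks arrive out of order.
    if [ "$size" -lt "$CHUNK" ]; then
        base=${name%.*}             # strip the numeric chunk suffix
        cat "$WATCH_DIR/$base".* > "$WATCH_DIR/$base"
        tar tf "$WATCH_DIR/$base" > /dev/null && echo "OK: $base"
    fi
done
```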

Is there a better way?

It would probably work, but it doesn't seem elegant. Is there a better way to do this?

1 Answer

Using the size of a partial file as an implicit end marker may be brittle. Much better would be to send the parts first and then send a control file which lists the parts (maybe with sha256 checksums to detect transfer problems) so that the receiving program can check whether all parts have been transmitted and start reassembling only then.
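A minimal sketch of the control-file idea, assuming the chunks are named bigfile.part.NN and the manifest is named bigfile.manifest (both names are illustrative):

```shell
#!/bin/sh
# Sender: after `split -b 10M -d bigfile bigfile.part.`, write a
# manifest listing every part with its sha256 checksum, and send
# it LAST, after all the parts.
sha256sum bigfile.part.* > bigfile.manifest
# ... send bigfile.part.* via SFTP, then bigfile.manifest ...

# Receiver: triggered when bigfile.manifest arrives.
# `sha256sum -c` fails if any listed part is missing or corrupt,
# so reassembly only starts once the transfer is provably complete.
if sha256sum -c --quiet bigfile.manifest 2>/dev/null; then
    # Reassemble in manifest order (field 2 is the filename).
    awk '{print $2}' bigfile.manifest | xargs cat > bigfile
else
    echo "parts missing or corrupt; waiting" >&2
fi
```

Sending the manifest last doubles as the "all parts are on the wire" signal, so the receiver no longer has to infer completion from chunk sizes.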

  • Worth a shot (the control file, that is). I've been including the checksum in the tarball, but I see your point about using it instead of trying to check the integrity of the file with a tar tf. Commented Aug 31, 2018 at 12:08
