Periodically transfer files from two remote servers to single host using shell script
  • I have two remote data acquisition machines that are continuously writing data to their own local data.txt files.
  • The remote machines begin acquiring data and appending new data to their data.txt upon power-up/reboot, via a systemd service.
  • The host machine enables/starts the service on each remote machine.

I would like to periodically and automatically transfer their respective data files to my host machine using a shell script.

Note: I currently only have access to one remote machine and will have access to the second in the future. I'm trying to prepare for when that machine comes online, which is why I can't test out my "thoughts" on how to accomplish this.

Here is what I have cooked up to accomplish this for a single remote machine:

#!/usr/bin/env bash

# Start the data acquisition service on each remote machine
REMOTE1="user@remote1"
REMOTE2="user@remote2"

# Retry until the service starts, i.e. until ssh/systemctl return 0
until ssh "$REMOTE1" "systemctl start my.service"; do sleep 5; done

until ssh "$REMOTE2" "systemctl start my.service"; do sleep 5; done

# Transfer the data file every 60 seconds, until SIGINT from the host
while sleep 60; do scp "$REMOTE1":/data1.txt /host/; done

exit 0

I have run this successfully using one remote machine, and I have not noticed any missing or corrupted data. To get this working with two remote machines, my first thought is to add another instance of the while loop:

while sleep 60; do scp "$REMOTE1":/data1.txt /host/; done
while sleep 60; do scp "$REMOTE2":/data2.txt /host/; done

However, I can see this may not work, given that the first loop will never break to let the second one begin. My second thought is to use a single while loop and add another scp command for the second machine:

while sleep 60; do scp "$REMOTE1":/data1.txt /host/; scp "$REMOTE2":/data2.txt /host/; done
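
A related thought (untested, since I only have the one remote machine right now) is that the two separate loops could instead be run as background jobs, so that neither blocks the other. A rough sketch of what I have in mind:

# Sketch: one transfer loop per remote, run as background jobs
trap 'kill 0' INT   # on Ctrl+C, take down the whole process group (both loops)
while sleep 60; do scp "$REMOTE1":/data1.txt /host/; done &
while sleep 60; do scp "$REMOTE2":/data2.txt /host/; done &
wait                # block here until the background loops are terminated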

I am aware that the single-loop method could end up "drifting" if the command execution times vary. I am not worried about the transfer occurring exactly every X seconds, though; it just needs to happen (pseudo-)periodically. I would love to hear comments/suggestions on how to accomplish this.
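
For reference, here is a rough sketch of what I mean by compensating for drift; I don't actually need this, and it is untested (it assumes date +%s is available, as with GNU coreutils):

# Sketch: keep a roughly fixed 60-second cadence regardless of transfer time
interval=60
next=$(( $(date +%s) + interval ))
while :; do
    scp "$REMOTE1":/data1.txt /host/
    scp "$REMOTE2":/data2.txt /host/
    now=$(date +%s)
    (( next > now )) && sleep $(( next - now ))   # sleep only for the remainder of the interval
    next=$(( next + interval ))
done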
