It turns out that one cannot open a Unix named pipe directly over SSH; Python raises an IOError when you try. After some wrangling, I ended up with the following approach. Using Python/Paramiko, I run these commands on the Solaris server (via SSH):

mkfifo input
sleep 10800 > input & 

This creates the input pipe, and 'sleep' keeps a writer attached so the pipe stays open (10800 seconds is 3 hours). If you don't use 'sleep', the pipe delivers end-of-file to the reader after the first batch of commands you pass into it.
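The need for the long-lived 'sleep' writer can be reproduced locally. This is a minimal sketch, not the script from the answer; the temp-directory paths and file names are just for the demo:

```shell
#!/bin/sh
# Without a long-lived writer, the reader gets EOF after the first batch.
dir=$(mktemp -d)
mkfifo "$dir/input"
cat "$dir/input" > "$dir/out.txt" &      # reader, standing in for the real process
echo "batch 1" > "$dir/input"            # writer opens, writes, then closes the pipe
wait $!                                  # the reader has already exited on EOF
echo "reader exited after one batch"

# With a keeper writer holding the pipe open, the reader survives many batches.
mkfifo "$dir/input2"
sleep 30 > "$dir/input2" &               # keeper: holds the write end open
keeper=$!
cat "$dir/input2" > "$dir/out2.txt" &    # reader
reader=$!
echo "batch 1" > "$dir/input2"
echo "batch 2" > "$dir/input2"
sleep 1                                  # give the reader time to drain the pipe
kill "$keeper"
wait "$reader" 2>/dev/null
cat "$dir/out2.txt"                      # both batches arrived
rm -rf "$dir"
```

A FIFO's reader sees EOF once every writer has closed it; the background 'sleep' simply keeps one writer open the whole time.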

Then I run my process by sending this over SSH:

nohup process_name < input > output.txt &

This starts the process, attaches the input pipe to its stdin, and sends its output to a plain text file, output.txt. nohup makes sure the process stays alive if I disconnect the SSH session.
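nohup's job is to ignore the SIGHUP that a terminal sends on disconnect. A rough local check of the same idea (not an actual SSH disconnect) is to launch the job from a throwaway child shell and confirm it outlives that shell; 'sleep 5' stands in for the real process here:

```shell
#!/bin/sh
pidfile=$(mktemp)
# Launch the background job from a child shell and record its PID.
sh -c "nohup sleep 5 >/dev/null 2>&1 & echo \$! > '$pidfile'"
pid=$(cat "$pidfile")
# The child shell has exited by now; the nohup'd job is still there.
kill -0 "$pid" && echo "still running after the launching shell exited"
rm -f "$pidfile"
```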

Note: I initially intended to use a pipe for the output as well, but since I cannot open pipes via SSH, there is no advantage to a pipe there. I still use one for the input, since it ensures my process keeps accepting commands as I send them.

The output is easy to read from output.txt. For input, I send each batch of commands that I want the process to execute: the Python script first writes the batch to a text file called redir.txt, then sends the contents of that file into the pipe:

cat redir.txt > input

This redirects the commands into the input pipe, from which the process reads them.
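The whole pattern can be sketched end to end on a local machine, with 'cat -n' as a hypothetical stand-in for process_name (it numbers whatever lines it reads, so you can see each command arrive). The redir.txt written here mimics the batch file the Python script produces:

```shell
#!/bin/sh
dir=$(mktemp -d); cd "$dir"
mkfifo input
sleep 30 > input &                                # keeper holds the pipe open
keeper=$!
nohup cat -n < input > output.txt 2>/dev/null &   # stand-in for process_name
worker=$!
printf 'cmd_a\ncmd_b\n' > redir.txt               # batch the Python script would write
cat redir.txt > input                             # push the batch into the pipe
sleep 1                                           # let the process drain it
kill "$keeper"
wait "$worker" 2>/dev/null
cat output.txt                                    # numbered cmd_a and cmd_b
cd /; rm -rf "$dir"
```

Each 'cat redir.txt > input' opens the FIFO, writes one batch, and closes it again, while the keeper writer stops that close from becoming EOF for the process.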

Hope this helps someone out there.
