This is hard to explain in the title, so please see below.
Here is a bash script I use to call Caffe; this specific example trains a model using a solver prototxt:
#!/bin/bash
TOOLS=../../build/tools
export HDF5_DISABLE_VERSION_CHECK=1
export PYTHONPATH=.
#for debugging python layer
GLOG_logtostderr=1  $TOOLS/caffe train -solver    lstm_solver_flow.prototxt -weights single_frame_all_layers_hyb_flow_iter_50000.caffemodel  
echo "Done."
I have worked with this many times with no problems. It uses the Caffe framework's built-in commands, such as "train", and passes arguments to them. The train code is mainly built in C++, but it calls a Python script for a custom data layer. When run from the shell, everything works smoothly.
Now I am calling these exact commands from a Python script using subprocess.call() with shell=True:
import subprocess
subprocess.call("export HDF5_DISABLE_VERSION_CHECK=1", shell=True)
subprocess.call("export PYTHONPATH=.", shell=True)
#for debugging python layer
subprocess.call("GLOG_logtostderr=1 sampleexact/samplepath/build/tools/caffe train -solver lstm_solver_flow.prototxt -weights single_frame_all_layers_hyb_flow_iter_50000.caffemodel", shell=True)
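One detail worth noting: each subprocess.call(..., shell=True) spawns its own short-lived shell, so the two export calls above do not affect the environment of the third call. A sketch of an equivalent that sets the variables in the parent Python process instead (the paths and filenames are the ones from the question; whether this alone fixes the module lookup is an open question):

```python
import os
import subprocess

# Set the variables in the parent process; every child process inherits them.
os.environ["HDF5_DISABLE_VERSION_CHECK"] = "1"
os.environ["PYTHONPATH"] = "."

# Single call: GLOG_logtostderr=1 only needs to apply to this one command.
ret = subprocess.call(
    "GLOG_logtostderr=1 sampleexact/samplepath/build/tools/caffe train "
    "-solver lstm_solver_flow.prototxt "
    "-weights single_frame_all_layers_hyb_flow_iter_50000.caffemodel",
    shell=True,
)
```

Note that PYTHONPATH=. is relative to the current working directory, which for a script may differ from the directory the script lives in.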
When running the bash commands from within a Python script (call it init), it is able to start the train process. However, the train process then calls another Python module for the custom layer and cannot find it, even though both init and the custom layer module are in the same folder.
How can I fix this problem? I really need to run it from Python so that I can debug. Is there a way to make *any* Python module in the project reachable from calls made by any other?