Due to memory size restrictions, the given file must not be stored in a variable and then traversed:
Example:
var=$(cat FILE)
for i in $var
do
    echo "$i"
done
How do you traverse all the strings in a file, in the same way as the example above, but extract each whitespace-separated string directly from the file?
Example:
fileindex=1
totalfilecount=$(wc -w < FILE)
while (( fileindex <= totalfilecount ))
do
    onefilename=???    # missing command using fileindex
    ((fileindex+=1))
done
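One candidate for the missing command (a sketch, untested, and not necessarily the only option) is awk, which streams the file instead of holding it in a shell variable and can print the word at a given position:

onefilename=$(awk -v n="$fileindex" '
    # Count whitespace-separated words across all lines;
    # print the n-th word and stop reading.
    { for (i = 1; i <= NF; i++) if (++count == n) { print $i; exit } }
' FILE)

Note that this rescans the file once per index, so it gets slow for long files.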
Is there a command that can view a file as an array and allow you to extract words using their index positions?
The idea is to process every word in the file as though the file were an array.
Input file example:
one two
three four
five six
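With that input, the loop should visit one, two, three, four, five and six, in that order. A streaming way to get this behavior (a sketch; the pipeline is untested) is:

# Squeeze every run of whitespace into a newline so each word arrives
# on its own line, then read one word per loop iteration from the pipe.
# Nothing beyond the current word is ever stored in a shell variable.
tr -s '[:space:]' '\n' < FILE |
while read -r word
do
    echo "$word"    # process one word here
done

One caveat: because of the pipe, the while loop runs in a subshell in most shells, so variables set inside it are lost when the loop ends.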
Here is the scenario that requires the above functionality:
- we have server_A and server_B
- server_A needs to connect to server_B via sftp (sftp only) and 'get' some files
- both the 'ls' and 'ls -l' commands in sftp can use wildcards to filter specific files
- each file needs to be processed individually (for various reasons) on the fly
- the files cannot be copied as a group to server_A and then processed individually
- a list of files must first be created on server_A, and then each file in that list is copied from server_B and processed one file at a time (see the sketch after this list)
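A sketch of that flow (user@server_B and the *.dat filter are placeholders, and sftp's non-interactive output is assumed to need some cleanup):

# Step 1: build the file list on server_A from server_B's listing.
# sftp may echo the command ('sftp> ls *.dat') into the output, so
# such lines would have to be filtered out of FILE before use.
echo "ls *.dat" | sftp user@server_B > FILE

# Step 2: copy and process each file individually, streaming the list
# word by word so FILE is never loaded into a shell variable.
tr -s '[:space:]' '\n' < FILE |
while read -r onefilename
do
    echo "get $onefilename" | sftp user@server_B
    # ... process "$onefilename" on the fly here ...
done

Note that this opens one sftp connection per file; batching the gets would be cheaper, but that is beside the point here.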
Where is the problem?
The problem is that the 'ls' command creates a dual-column list of words when the list is long, which rules out the simple line-by-line processing that is possible with 'ls -l', which always creates a single-column list.
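For what it's worth, the usual (fragile) way to pull just the name out of 'ls -l' style lines is to take the last field, which breaks on file names containing whitespace:

awk '{ print $NF }' FILE    # last field of each 'ls -l' line is the file name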
This leads back to my initial question: does such a solution exist?
strings won't do what is being asked here. Does strings allow extraction of individual strings as stated in the question?