We could transform the list of numbers into a sequence of sed commands and run them as a sed editing script in a single sed invocation:
sed 's/$/p/' lines.list | sed -n -f /dev/stdin file.txt
Here, the first sed creates a sed script consisting of commands such as 1p and 4p by simply appending p to the end of each line. This script is then sent to the second sed after the pipe, which reads it with -f /dev/stdin and applies it to the text file given as input.
This would require reading each file only once.
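For illustration, with a lines.list containing the numbers 4 and 1 (the same file used in the test further down), the intermediate script produced by the first sed would be:
$ sed 's/$/p/' lines.list
4p
1p
The second sed, running with -n, then prints only the lines addressed by these commands, in the order they occur in file.txt.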
Using awk, read the line numbers into an associative array as keys, then, while reading the other file, see if the current line number is one of the ones that was previously made a key in the array:
awk 'FNR == NR { lines[$0]; next } (FNR in lines)' lines.list file.txt
In awk, the special variables NR and FNR hold the total number of records (lines) read so far across all input files, and the number of records read so far from the current file, respectively. If NR is equal to FNR, we're reading from the first input file, and we create an array entry using the current line, $0, as the key (no value is assigned), and immediately skip to the next line of input.
If we're not reading from the first file (that is, if FNR is no longer equal to NR), we test with FNR in lines to see whether FNR, the line number in the current file, is a key in the array called lines. If it is, the current line will be printed.
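As a quick illustration, assume the same test files used with ed further down, i.e. a lines.list containing 4 and 1, and a file.txt whose first line is He is a boy. and whose fourth line is She went to school.; the selected lines then come out in file order:
$ awk 'FNR == NR { lines[$0]; next } (FNR in lines)' lines.list file.txt
He is a boy.
She went to school.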
Without heavy support from other tools, the grep utility is not really made for performing this type of task. It extracts lines from text files whose contents match (or do not match) a given pattern. The pattern is therefore supposed to match the line, not the line number.
The following is just for fun and should not be considered a suggestion for how to actually solve this issue.
You can insert line numbers with grep using
grep -n '.*' file.txt
This inserts line numbers at the start of all lines in the file, directly followed by : and the original contents of the line.
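For example, with the file.txt assumed above, the first line of the numbered output would look like this:
$ grep -n '.*' file.txt | head -n 1
1:He is a boy.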
We may then, as with the sed solution, modify the pattern file to make it match a selection of those specific numbers:
sed 's/.*/^&:/' lines.list
This would output regular expressions such as ^1: and ^4:, each matching a particular line number at the start of a line.
We may then get grep to use these expressions (here with the help of a process substitution). Finally, we remove the temporary line numbers using cut:
grep -n '.*' file.txt | grep -f <(sed 's/.*/^&:/' lines.list) | cut -d : -f 2-
... but this is too contrived to even be considered a reasonable solution.
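For completeness, running the whole pipeline on the same assumed test files gives the selected lines, again in file order:
$ grep -n '.*' file.txt | grep -f <(sed 's/.*/^&:/' lines.list) | cut -d : -f 2-
He is a boy.
She went to school.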
Each of the above solutions will always display the selected lines in the order in which they occur in the text file. If you want the lines output in the order in which their numbers occur in the line number file, you may instead use ed (or awk, see further down):
sed 's/$/p/' lines.list | ed -s file.txt
Again, we create an editing script from our line number file by simply adding p at the end of each line.
This script is then passed as the command input to the ed editor, which applies the commands, in order, to the text file.
Testing:
$ cat lines.list
4
1
$ sed 's/$/p/' lines.list | ed -s file.txt
She went to school.
He is a boy.
Note that ed reads the whole file into memory, just like the following equivalent awk program does:
awk 'NR == FNR { lines[FNR] = $0; next } { print lines[$0] }' file.txt lines.list
Note that the input files are given in the opposite order compared to the previous awk solution. This allows us to first read the text file into the lines array, line by line, and then pick lines out of that array in whatever order the line number file specifies.
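With the same test data as above, this awk variant produces the lines in the order given by lines.list, just like the ed solution:
$ awk 'NR == FNR { lines[FNR] = $0; next } { print lines[$0] }' file.txt lines.list
She went to school.
He is a boy.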