This should do it in two lines:
sed -n 's/\s*URL\s*=\s*\(.*\)/\1/p' /tmp/curl.conf|xargs -I {} curl -O "{}"
sed -n 's/\s*URL\s*=\s*\(.*\)/\1/p' /tmp/curl.conf|xargs -I {} basename "{}"|xargs -I {} sed '/mortgage/q' "{}"
The sed command at the start of each line extracts the URLs from your URL file (/tmp/curl.conf in this example). The first line uses curl's -O option to save each page to a file named after the page. The second line re-examines each of those files and prints only the text up to and including the first line matching 'mortgage'. Of course, if the word 'mortgage' doesn't occur in a file, the whole file will be output.
This approach leaves a downloaded file for each URL in the current directory.
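For illustration, suppose /tmp/curl.conf contains entries like these (the URLs and the exact spacing are just assumptions about your file's format):

URL = http://example.com/page1.html
URL = http://example.com/page2.html

The sed command would then print just the bare URLs:

http://example.com/page1.html
http://example.com/page2.html

The sed '/mortgage/q' trick prints every line up to and including the first one matching 'mortgage', then quits. You can see the effect with:

printf 'intro\nmortgage rates\ntrailing text\n' | sed '/mortgage/q'

which prints the first two lines and stops.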
EDIT:
Here's a short script that avoids any leftover files. It writes the result to standard output, from where you can redirect it as you wish:
#!/bin/bash
TMPF=$(mktemp)
# sed command extracts URLs line by line
sed -n 's/\s*URL\s*=\s*\(.*\)/\1/p' /tmp/curl.conf > "$TMPF"
while read -r URL; do
    # retrieve each web page and delete any text after 'mortgage' (substitute whatever test you like)
    curl "$URL" 2>/dev/null | sed '/mortgage/q'
done < "$TMPF"
rm "$TMPF"