If all of the URLs you're requesting follow a simple pattern (such as all of the numbered pages from page1.html through page2000.html), then curl itself can easily download them all in one command line:
# Downloads all of page1.html through page2000.html. Note the quotes to
# protect the URL pattern from shell expansion.
curl --remote-name-all 'http://www.example.com/page[1-2000].html'
See the section labeled "URL" in the manual page for more information on URL patterns.
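For instance, the same globbing syntax also handles step counters, zero-padded ranges, and comma-separated lists (the example.com URLs below are just placeholders):
# Every 10th page: page10.html, page20.html, ..., page2000.html
curl --remote-name-all 'http://www.example.com/page[10-2000:10].html'
# A zero-padded numeric range and a comma-separated list, in one invocation
curl --remote-name-all 'http://www.example.com/img[001-100].jpg' \
     'http://www.example.com/{about,contact,faq}.html'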
If you have a lot of URLs that don't follow a numeric pattern, you can put them all into a file and use curl's -K option to download them in one go. So, using your example, what you'd want to do is convert each username in your file into a URL prefixed with url =. One way to do that is with the sed(1) utility:
# Convert list of usernames into a curl options file
sed 's|^\(.*\)$|url = http://www.***.com/getaccount.php?username=\1|' users > curl.config
# Download all of the URLs from the config file
curl --remote-name-all -K curl.config
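For a hypothetical users file containing the names alice, bob, and carol, the generated curl.config would look something like this (lines starting with # are comments in a curl config file):
# Generated by the sed command above; each url = line is one request
url = http://www.***.com/getaccount.php?username=alice
url = http://www.***.com/getaccount.php?username=bob
url = http://www.***.com/getaccount.php?username=carol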
This will be much faster than downloading each file with a separate command, because a single curl process can reuse the same TCP connection for multiple requests (HTTP keep-alive). That way, it sets up one connection and keeps sending requests over it, instead of setting up a new TCP connection for each request just to tear it down again, which is what happens when every request runs in its own process.
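For comparison, the per-request alternative would look roughly like the loop below, which starts a fresh curl process (and therefore a fresh connection) for every username:
# Slower alternative: one curl process, and one new connection, per request
while read -r user; do
    curl -O "http://www.***.com/getaccount.php?username=$user"
done < users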
Do note, though, that such a large automated download may violate the site's terms of use. You should check the site's robots.txt file before doing such a task, and make sure you're not exceeding its rate limits.
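A quick way to check is to fetch that file first (again using the masked hostname from the question):
# Review the site's crawling rules before starting the bulk download
curl 'http://www.***.com/robots.txt'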