I have a public website, and I use a VM copy of it to test changes to nginx.conf, scripts, etc.

I currently have a Python script that parses the access logs from the public website and replays those URLs against the VM version. Right now it takes quite a while (~8 minutes) to run through all the URLs.
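The parsing side is simple; roughly this kind of sketch (the regex and base URL here are illustrative, not the exact ones in my script):

```python
import re

# Pull the request path out of each combined-format access-log line.
# Illustrative pattern; the real script matches my actual log_format.
REQUEST_RE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/')

def urls_from_log(log_path, base="http://test-vm"):
    with open(log_path) as f:
        for line in f:
            m = REQUEST_RE.search(line)
            if m:
                yield base + m.group(1)
```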

I tried adding multiple threads (10) to run these requests in parallel. The queue for each thread was filled on an as-needed basis (each holds at most 100 URLs).
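Roughly, the threaded version follows this pattern (a sketch assuming the `requests` library; I actually used one bounded queue per thread, but a single shared bounded queue as shown behaves the same for this purpose):

```python
import queue
import threading

import requests  # assumed HTTP client

NUM_WORKERS = 10
url_queue = queue.Queue(maxsize=100)  # bounded; filled as needed

def worker(session):
    while True:
        url = url_queue.get()
        if url is None:          # sentinel: no more work for this thread
            url_queue.task_done()
            break
        try:
            session.get(url, timeout=10)
        except requests.RequestException:
            pass                 # failures are logged elsewhere
        url_queue.task_done()

# One Session per thread reuses TCP connections (keep-alive) instead of
# opening a fresh connection for every request.
threads = [threading.Thread(target=worker, args=(requests.Session(),), daemon=True)
           for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

urls = ["http://test-vm/"]       # placeholder for the parsed log URLs
for url in urls:
    url_queue.put(url)           # blocks while the queue is full

for _ in threads:
    url_queue.put(None)          # one sentinel per worker
url_queue.join()
```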

The odd thing was it still took about 8 minutes to go through all the URLs.

Are there any clues/help you can give me on where to look next? For example:

  • I'm assuming nginx can handle multiple URL requests at once.
  • nginx does a proxy_pass to a site handler called Pelican. I'm assuming Pelican can handle multiple URL requests at once as well.
  • the VM (and public site) is running on Ubuntu Server. I'm assuming it has no significant constraints on running all this at once.
  • this is not a normal scenario, since all threads in the test script run from the same source IP address (i.e. my PC to the VM). I'm assuming that's not a problem for nginx, Pelican, etc.
  • Python is effectively single-threaded under the covers (the GIL), but since each thread blocks on its URL request until a response arrives, the GIL is released and other threads can run in the meantime (see the sketch after this list).
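One way to test that last assumption: timestamp each request and check whether the windows from different threads actually overlap (a sketch, again assuming `requests`; `test_urls` is a placeholder):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # assumed HTTP client

print_lock = threading.Lock()

def timed_get(session, url):
    start = time.monotonic()
    resp = session.get(url, timeout=10)
    end = time.monotonic()
    with print_lock:  # keep output lines from interleaving
        # If the threads really run in parallel, the start/end windows of
        # different threads should overlap. If each window begins only after
        # the previous one ends, something is serializing the requests
        # (client side or server side).
        print(f"{url} start={start:.3f} end={end:.3f} status={resp.status_code}")
    return resp

test_urls = ["http://test-vm/"] * 10  # placeholder URLs
with ThreadPoolExecutor(max_workers=10) as pool:
    # A fresh Session per call avoids sharing one Session across threads.
    list(pool.map(lambda u: timed_get(requests.Session(), u), test_urls))
```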

Any other ideas on where to look, or what I can do to fix this?
