I am a C++ backend developer working on the server side of a realtime game. The application architecture looks like this:
1) I have a class Client, which processes requests from the game client. Examples of requests: login, buying something in the store (the game's internal store), or performing some other action. The Client also handles user input events from the game client (these are very frequent events, sent about ten times per second from the game client to the server while the player is in gameplay).
2) I have a thread pool. When a game client connects to the server, I create a Client instance and bind it to one of the threads from the pool. So we have a one-to-many relationship: one thread serves many Clients. Round-robin is used to choose the thread for binding.
3) I use libev to manage all events inside the server. This means that when a Client instance receives data from the game client over the network, handles a request, or tries to send data over the network to the game client, it occupies its thread. While it is doing that work, the other Clients that share the same thread have to wait. A rough sketch of this layout follows the list.
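To make the setup concrete, here is a rough sketch of that layout. The names (Worker, WorkerPool, pick) are illustrative placeholders I made up for this question, not my real classes:

```cpp
#include <ev.h>
#include <atomic>
#include <thread>
#include <vector>

class Client;  // processes login, store purchases and gameplay input events

// One libev loop per worker thread; every Client bound to the thread is
// served by that single loop, one request/event at a time.
struct Worker {
    struct ev_loop* loop = ev_loop_new(EVFLAG_AUTO);
    ev_async wake{};   // keeps ev_run() alive while the loop has no other watchers
    std::thread th;

    static void on_wake(struct ev_loop*, ev_async*, int) {}

    void start() {
        ev_async_init(&wake, on_wake);
        ev_async_start(loop, &wake);
        th = std::thread([this] { ev_run(loop, 0); });
    }
};

class WorkerPool {
public:
    explicit WorkerPool(std::size_t n) : workers_(n) {
        for (auto& w : workers_) w.start();
    }
    // Round-robin: each new connection is bound to the next thread in turn.
    Worker& pick() { return workers_[counter_++ % workers_.size()]; }

private:
    std::vector<Worker> workers_;
    std::atomic<std::size_t> counter_{0};
};
```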
So the thread pool is the bottleneck of the application. To increase the number of concurrent players who can play on the server without lag, I need to increase the number of threads in the thread pool.
Right now the application runs on a server with 24 logical CPUs (according to cat /proc/cpuinfo), and I set the thread pool size to 24 (1 processor - 1 thread). This means that with the current 2000 players online, every thread serves about 84 Client instances. top shows the processors are used at less than 10 percent.
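For reference, the pool is currently sized like this (a simplified sketch; the 2000 is just today's player count):

```cpp
#include <cstdio>
#include <thread>

int main() {
    // Logical CPUs as seen by the OS (the same count cat /proc/cpuinfo reports).
    unsigned cpus = std::thread::hardware_concurrency();
    if (cpus == 0) cpus = 1;    // the call may return 0 if the count is unknown
    unsigned pool_size = cpus;  // current policy: 1 thread per processor
    unsigned online = 2000;     // current player count
    std::printf("pool size: %u, ~%u Clients per thread\n",
                pool_size, (online + pool_size - 1) / pool_size);
    return 0;
}
```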
Now the question: if I increase the number of threads in the thread pool, will server performance increase or decrease (context switching overhead vs. the number of Clients blocked per thread)?
UPD: 1) The server has async IO (libev + epoll), so when I say that a Client is blocked while sending and receiving data, I mean copying to/from buffers. 2) The server also has background threads for slow tasks: database operations, heavy calculations, ... (a sketch of that offloading is below).
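Regarding point 2): this is roughly the pattern the background threads follow (a simplified sketch with made-up names, not the real database code). The idea is that a client thread only enqueues a job and returns immediately, so slow work never blocks an event loop:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class BackgroundWorker {
public:
    BackgroundWorker() : th_([this] { run(); }) {}
    ~BackgroundWorker() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        th_.join();
    }
    // Called from a client thread; returns immediately.
    void post(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return stop_ || !jobs_.empty(); });
            if (stop_ && jobs_.empty()) return;
            auto job = std::move(jobs_.front());
            jobs_.pop();
            lk.unlock();
            job();  // e.g. a database query or a heavy calculation
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    bool stop_ = false;
    std::thread th_;
};
```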