I have quickly put together two versions of the same test: the first using blocking mode and the second using non-blocking mode. The sockets are Unix domain sockets.
My problem is that the kernel consumes a huge amount of CPU (approximately 85%).
My goal is to minimize the kernel CPU usage and to increase the throughput.
The blocking-mode Unix socket reaches a throughput of approximately 1.3 GB/s. The non-blocking-mode Unix socket reaches approximately 170 MB/s.
The blocking version is faster than the non-blocking (epoll) version by approximately 8×.
The code of the blocking version is as follows:
client.c
server.c
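In essence, the blocking version boils down to something like the minimal single-file sketch below (the socket path /tmp/uds_blocking.sock, the 64 KiB buffer, the iteration count and the fork-based layout are simplifications for illustration, and error handling is omitted):

```c
/* Illustrative sketch only -- not the actual client.c/server.c.
 * Blocking Unix domain socket throughput test: the child writes a
 * 64 KiB buffer in a loop, the parent drains it with blocking read(). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH  "/tmp/uds_blocking.sock"
#define BUF_SIZE   (64 * 1024)
#define ITERATIONS 100000L

int main(void)
{
    unlink(SOCK_PATH);

    /* Server socket: bind + listen on the Unix domain path. */
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    if (fork() == 0) {
        /* Child = client: connect and write as fast as possible. */
        int cli = socket(AF_UNIX, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof(addr));
        char buf[BUF_SIZE];
        memset(buf, 'x', sizeof(buf));
        for (long i = 0; i < ITERATIONS; i++)
            write(cli, buf, sizeof(buf));   /* blocks when the socket buffer is full */
        close(cli);
        exit(0);
    }

    /* Parent = server: accept one connection and drain it. */
    int conn = accept(srv, NULL, NULL);
    char buf[BUF_SIZE];
    long long total = 0;
    ssize_t n;
    while ((n = read(conn, buf, sizeof(buf))) > 0)   /* blocks until data arrives */
        total += n;

    printf("received %lld bytes\n", total);
    close(conn);
    close(srv);
    return 0;
}
```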
The code of the non-blocking version is:
client.c
server.c
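The non-blocking version differs mainly on the receive side: the connection is put into O_NONBLOCK mode and driven by epoll, reading until EAGAIN. A condensed sketch along those lines is below (same simplifications as above; the client is kept as a plain blocking writer here just to keep the sketch short):

```c
/* Illustrative sketch only -- not the actual client.c/server.c.
 * Non-blocking receive path with epoll on a Unix domain socket. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH  "/tmp/uds_nonblocking.sock"
#define BUF_SIZE   (64 * 1024)
#define ITERATIONS 100000L

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    unlink(SOCK_PATH);

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    if (fork() == 0) {
        /* Child = client: simple blocking writer, kept short on purpose. */
        int cli = socket(AF_UNIX, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof(addr));
        char buf[BUF_SIZE];
        memset(buf, 'x', sizeof(buf));
        for (long i = 0; i < ITERATIONS; i++)
            write(cli, buf, sizeof(buf));
        close(cli);
        exit(0);
    }

    /* Parent = server: non-blocking connection driven by epoll. */
    int conn = accept(srv, NULL, NULL);
    set_nonblocking(conn);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = conn };
    epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);

    char buf[BUF_SIZE];
    long long total = 0;
    int done = 0;
    while (!done) {
        struct epoll_event events[1];
        int n = epoll_wait(epfd, events, 1, -1);   /* wait for readability */
        if (n <= 0)
            continue;
        for (;;) {                                 /* drain until EAGAIN */
            ssize_t r = read(conn, buf, sizeof(buf));
            if (r > 0) { total += r; continue; }
            if (r == 0) { done = 1; break; }       /* peer closed the connection */
            if (errno == EAGAIN || errno == EWOULDBLOCK) break;
            done = 1; break;                       /* real error */
        }
    }

    printf("received %lld bytes\n", total);
    close(conn);
    close(epfd);
    close(srv);
    return 0;
}
```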
According to what I have read on the Internet, non-blocking mode should be faster than blocking mode. Why am I observing the reverse?
- Is there a way to increase the throughput beyond 1.3 GB/s?
- Is there a way to minimize the kernel CPU usage?
NB:
Programs were compiled using:
gcc -std=gnu99 -O3 {file}.c -o {bin-name}