I have an application running on Windows XP (i7, 2.1 GHz). It implements master/slave communication between nodes over UDP.
The master sends a request and the slave node sends its response in burst mode: a packet of data every 5 ms, each packet 1300 bytes long including the header.
Back on the master node, the main thread receives the data and writes it to a queue, triggering a parallel thread to read from the queue, roughly as sketched below.
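A minimal sketch of that receive path (the queue, critical section, and event names here are illustrative, not my actual code):

#include <winsock2.h>
#include <queue>
#include <vector>

std::queue<std::vector<char> > g_queue;  // packets handed to the reader thread
CRITICAL_SECTION g_queueCs;              // protects g_queue
HANDLE g_dataReady;                      // event that wakes the reader thread

void receiveLoop(SOCKET sock)
{
    char buf[1300];                      // one packet, header included
    sockaddr_in s_addr;
    int cln_alen = sizeof(s_addr);
    for (;;) {
        int rc = recvfrom(sock, buf, sizeof(buf), 0,
                          (sockaddr*)&s_addr, &cln_alen);
        if (rc == SOCKET_ERROR)
            break;                       // check WSAGetLastError()
        EnterCriticalSection(&g_queueCs);
        g_queue.push(std::vector<char>(buf, buf + rc));
        LeaveCriticalSection(&g_queueCs);
        SetEvent(g_dataReady);           // trigger the parallel reader thread
    }
}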
The problem is that the Winsock call takes very long to return the next packet, and as a result data is being lost from the socket buffer.
Execution time: recvfrom() takes 200-400 microseconds per call.
rc = recvfrom(socket, buf, len, 0, &s_addr, &cln_alen);
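(The figure above comes from bracketing the call with the high-resolution performance counter, roughly like this; the timing variables are of course just illustrative:)

LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);   // counter ticks per second

QueryPerformanceCounter(&t0);
rc = recvfrom(socket, buf, len, 0, &s_addr, &cln_alen);
QueryPerformanceCounter(&t1);

// elapsed time in microseconds: consistently 200-400 us
double us = (t1.QuadPart - t0.QuadPart) * 1000000.0 / freq.QuadPart;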
I am sure the Winsock API should not need that long just to fetch the next packet, but I cannot find any information on what realistic execution times are. Any help in this direction is really appreciated.
You have probably hit a combination of send/receive buffer sizing and OS scheduler issues. On Windows, context switches between threads are not very frequent, so there are two things you can try:
Increase the priority of the server process
This reduces the time your server application spends waiting in the scheduler's ready queue.
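For example, with the standard Win32 calls (raising the receiving thread on top of the process class is optional):

#include <windows.h>

// Raise the whole process above normal priority...
SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
// ...and, if needed, the thread that calls recvfrom() as well.
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);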
Increase the receiving buffer size
You need to do this on both ends (SO_RCVBUF on the receiving side and, correspondingly, SO_SNDBUF on the sending side). For the receiver, use setsockopt with the SO_RCVBUF option:
int size = 1 * 1024 * 1024;  // 1 MB instead of the small default
rc = setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (const char*)&size, sizeof(size));  // returns SOCKET_ERROR on failure
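The OS may adjust the value you ask for, so it is worth reading it back to see what you actually got:

int actual = 0;
int optlen = sizeof(actual);
if (getsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char*)&actual, &optlen) == 0)
    printf("SO_RCVBUF is now %d bytes\n", actual);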
If losing packets is an issue, use TCP. With TCP I achieved a response time of less than one millisecond on a less modern machine for a simple loopback connection. Some important points there: