Comment
I may be missing something, but don't websockets have a pretty substantial benefit if you don't wait for the response before sending the next request? Because we don't have HTTP pipelining, the AJAX version would wait for a response before sending the next request even if they're being submitted asynchronously (once we hit the max-connections limit).
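A rough way to see the effect being described: with a per-host connection limit and no pipelining, requests queue behind at most a handful of in-flight round trips, while a single websocket can keep sending without waiting. A back-of-envelope sketch (all numbers are assumptions for illustration, not measurements):

import math

N = 1000          # requests to send
RTT = 0.05        # assumed 50 ms round trip per request
MAX_CONNS = 6     # typical browser per-host connection limit

# Without pipelining, each connection carries one request per round trip,
# so requests queue up behind the connection limit.
ajax_total = math.ceil(N / MAX_CONNS) * RTT

# Over a single websocket you can keep sending without waiting for replies,
# so the total is roughly one round trip (frame time assumed negligible).
ws_total = RTT

print(f"AJAX (queued): ~{ajax_total:.1f} s")
print(f"WebSocket:     ~{ws_total:.2f} s")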
Replies
You mean we'd bombard the server with X number of messages and count how long it took to send them. Could do that.
Sending stuff back and forth like I do now is, I guess, a bit more realistic.
However, a more common use case would be to bombard the client with messages. E.g.
for i in range(1000):
    self.send({'msg': i})
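For what it's worth, the receiving end of that could be timed with something like the sketch below. The third-party websockets package, the URI, and the message count of 1000 are all assumptions chosen to match the loop above:

import asyncio
import time
import websockets

async def drain(uri="ws://localhost:8888/ws", expected=1000):
    # Connect and time how long it takes to receive every pushed message.
    async with websockets.connect(uri) as ws:
        start = time.perf_counter()
        for _ in range(expected):
            await ws.recv()  # block until the next pushed message arrives
        return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = asyncio.run(drain())
    print(f"received 1000 messages in {elapsed:.3f} s")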
I mean send the requests as fast as possible and time how long it takes to receive all the responses.
I wouldn't expect to see much of a latency improvement otherwise, since the network latency dominates the transfer time for the HTTP headers.
Ooo, I like this idea. Find out how long it takes to send and receive all the messages as a whole, for both HTTP and websockets.
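One way that combined measurement could look in Python, as a sketch only: the third-party aiohttp and websockets packages, the URLs, and an echoing server are all assumptions, not part of the setup discussed above. Each benchmark fires N requests as fast as possible and times how long it takes until every response has come back:

import asyncio
import time

import aiohttp
import websockets

N = 1000

async def bench_http(url="http://localhost:8888/echo"):
    # Fire N concurrent GETs and wait for all responses.
    async with aiohttp.ClientSession() as session:
        start = time.perf_counter()

        async def one(i):
            async with session.get(url, params={"msg": str(i)}) as resp:
                await resp.read()

        await asyncio.gather(*(one(i) for i in range(N)))
        return time.perf_counter() - start

async def bench_ws(uri="ws://localhost:8888/ws"):
    # Send everything without waiting for replies, then collect all responses.
    async with websockets.connect(uri) as ws:
        start = time.perf_counter()
        for i in range(N):
            await ws.send(str(i))
        for _ in range(N):
            await ws.recv()
        return time.perf_counter() - start

async def main():
    print(f"HTTP:      {await bench_http():.3f} s for {N} round trips")
    print(f"WebSocket: {await bench_ws():.3f} s for {N} round trips")

if __name__ == "__main__":
    asyncio.run(main())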