
TCP Performance Analysis and optimization strategy

2020-11-10 11:03:15 Tan Yingzhi

Network transmission

  • Propagation delay: the time it takes a message to travel from sender to receiver; a function of distance and signal propagation speed
  • Transmission delay: the time required to push all of the message's bits onto the link; a function of message length and link rate
  • Processing delay: the time required to process the packet header, check for bit errors, and determine the packet's destination
  • Queuing delay: the time an incoming packet waits in a queue before it is processed

A CDN shortens the distance to the user, speeding up access.

Last-mile latency

A large part of the total latency is often spent in the last few kilometers, because the client's access link to the public network is usually the weakest part of the path.

TCP

[figure: web-sync]

Latency

Every HTTP connection requires a three-way handshake. From New York to London, starting a TCP connection costs at least 56 ms for the handshake alone; sending a packet to London takes 28 ms, and the response takes another 28 ms. The handshake is clearly a large share of the total delay.
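The arithmetic above can be sketched as a quick calculation; the 28 ms one-way delay is the figure assumed in the text:

```python
# Rough latency arithmetic for one HTTP request over a brand-new TCP
# connection between New York and London (assumed one-way delay: 28 ms).
ONE_WAY_MS = 28
RTT_MS = 2 * ONE_WAY_MS       # 56 ms round trip

handshake_ms = RTT_MS         # SYN -> SYN-ACK -> ACK: one full RTT before any data
request_ms = ONE_WAY_MS       # request travels to London
response_ms = ONE_WAY_MS      # response travels back

total_ms = handshake_ms + request_ms + response_ms
print(total_ms)               # 112 - half of it spent on the handshake
```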

Congestion control

Congestion collapse

When the round-trip time exceeds the retransmission timeout of the hosts, each host injects more and more duplicate copies of its packets into the network. Buffers at the intermediate switching nodes fill up, the excess packets have to be dropped, and the whole network grinds to a halt.

Flow control

The receiver controls the amount of traffic the sender may transmit by growing or shrinking the receive window (rwnd), matching the sender's rate to its own capacity.

[figure: web-rwnd]

Slow start

The sender controls the traffic it puts on the network through a dynamically sized congestion window (cwnd); the maximum amount of data in flight is the minimum of rwnd and cwnd.
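A minimal sketch of that rule: the data a sender may have in flight is capped by whichever window is smaller (the byte values below are illustrative):

```python
def effective_window(rwnd: int, cwnd: int) -> int:
    """Unacknowledged data in flight is capped by both flow control
    (rwnd, advertised by the receiver) and congestion control (cwnd)."""
    return min(rwnd, cwnd)

# Early in a connection, cwnd is usually the limiting factor.
print(effective_window(rwnd=65_535, cwnd=14_600))  # 14600
```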

[figure: web-slowstart]
As the figure shows, the request takes about 220 ms to reach the maximum transfer rate. Because slow start limits the available throughput, it is especially harmful for transfers of small files.

Slow-start restart (SSR): after a connection has been idle for a while, the congestion window is reset to a safe default value. Unsurprisingly, SSR significantly hurts long-lived TCP connections with long idle periods and bursty requests, so disabling SSR on the server is recommended.

[figure: web-cwnd]

Congestion avoidance

Slow start doubles the amount of data in flight on every round trip, until it exceeds the receiver's flow-control window or a packet is lost. At that point the congestion avoidance algorithm takes over.
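A toy model of this growth, assuming an initial cwnd of 10 segments and an illustrative ssthresh of 64 segments; it ignores loss, delayed ACKs, and the receiver's window:

```python
# cwnd growth per round trip: exponential during slow start,
# linear (+1 segment per RTT) once cwnd reaches ssthresh.
def cwnd_after(rtts: int, init_cwnd: int = 10, ssthresh: int = 64) -> int:
    cwnd = init_cwnd
    for _ in range(rtts):
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return cwnd

print([cwnd_after(n) for n in range(6)])  # [10, 20, 40, 80, 81, 82]
```

The abrupt switch from doubling to +1 per RTT is why the first few round trips dominate how quickly a new connection ramps up.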

[figure: web-defint]

Bandwidth delay product

The BDP is the product of a data link's capacity and its end-to-end delay; it gives the maximum amount of unacknowledged data that can be in transit at any time.

If the sender or receiver exceeds the maximum amount of unacknowledged data, it must stop and wait for the other side's ACKs, creating gaps in the data flow. To avoid this, the window must be set large enough; a window that is too small limits the connection's throughput. The window should be at least as large as the BDP.
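The minimum window follows directly from the definition; a sketch with an illustrative 10 Mbit/s, 100 ms path:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> int:
    """Bandwidth-delay product: the most unacknowledged data that can be
    in flight; the window should be at least this large."""
    return int(bandwidth_bps / 8 * rtt_s)

# Example: a 10 Mbit/s link with a 100 ms round-trip time.
print(bdp_bytes(10_000_000, 0.100))  # 125000 bytes, about 122 KB
```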

Head-of-line blocking

TCP provides in-order, reliable delivery. If a packet is lost, the packets that follow it must wait until the lost data has been retransmitted and received before any of them can be delivered to the application, which the application experiences as delivery delay when reading the data.

For applications that do not need in-order or reliable delivery, TCP is not the best choice. With audio, for example, a lost packet can simply leave a small gap in the playback while later packets continue to be processed; as long as the gap is short enough, users will not notice it, whereas waiting for the lost packet to be retransmitted can produce unexpected stalls in the audio output. Comparatively, the latter gives a worse user experience.
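A minimal sketch of the in-order delivery rule that causes head-of-line blocking (sequence numbers count whole segments here, not bytes):

```python
# Segments arriving out of order sit in a buffer until the gap
# before them is filled by a retransmission.
class InOrderBuffer:
    def __init__(self) -> None:
        self.next_seq = 0
        self.pending: dict[int, str] = {}

    def receive(self, seq: int, data: str) -> list[str]:
        """Returns whatever can be delivered to the application now."""
        self.pending[seq] = data
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = InOrderBuffer()
print(buf.receive(1, "B"))  # [] - segment 0 was lost, B is stuck behind it
print(buf.receive(2, "C"))  # [] - still blocked
print(buf.receive(0, "A"))  # ['A', 'B', 'C'] - retransmit of 0 unblocks everything
```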

Tuning

Causes of latency:
  • The TCP three-way handshake adds a full round-trip time;
  • TCP slow start applies to every new connection;
  • TCP flow and congestion control limit the throughput of every connection;
  • TCP throughput is governed by the current congestion window size.
Recommendations:
  • Upgrade the server kernel to a recent version (Linux 3.2+);
  • Ensure the initial cwnd is 10 segments;
  • Disable slow start after idle;
  • Ensure window scaling is enabled;
  • Reduce transmission of redundant data;
  • Compress the data to be transferred;
  • Place servers close to users to shorten the round-trip time;
  • Reuse established TCP connections whenever possible.

UDP

Network address translation

[figure: web-nat]
[figure: web-nataddr]
These three address blocks are reserved for private networks; they may not appear as addresses on the public Internet.

Connection-state timeouts

For routers forwarding UDP, there is no notion of connection setup or teardown, so an intermediate router cannot know when to delete its translation state. To cope, routers clean up entries periodically; once the state is purged, the UDP "connection" must be re-established. The workaround is for the endpoints to exchange keep-alive packets at regular intervals. In principle TCP has explicit connection state, so a router should be able to track a TCP connection's lifecycle, but routers do not do this: they apply the same timeout-based cleanup to TCP, which means a long-idle TCP connection can be broken without ever being closed.
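A hedged sketch of such a keep-alive sender; the payload, target address, and 15-second interval are illustrative choices, not part of any standard:

```python
import socket
import time

def udp_keepalive(host: str, port: int, interval_s: float = 15.0, count: int = 3) -> None:
    """Periodically send small UDP datagrams so intermediate NAT routers
    do not expire their translation state for this flow."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(count):
            sock.sendto(b"keep-alive", (host, port))  # refreshes the NAT mapping
            time.sleep(interval_s)
    finally:
        sock.close()
```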

P2P

STUN (Session Traversal Utilities for NAT) is a protocol that lets an application on a private network obtain a public IP address and port; the STUN server is deployed on the public network.

[figure: web-stun]
Once an intranet application has used STUN, it obtains a public IP address; the STUN server's keep-alives prevent the NAT mapping from timing out, so the applications can then communicate with each other directly over UDP.

TURN (Traversal Using Relays around NAT): when NAT traversal fails, a TURN server can be used instead. The application connects to the TURN server over TCP, and the server relays messages between the peers.

libjingle is an open-source library from Google that implements STUN/TURN/ICE.

92% of the time, peers can connect directly (STUN);

8% of the time, a relay is required (TURN).

ICE (Interactive Connectivity Establishment) is a protocol that connects peers directly when possible, uses STUN when a direct connection fails, and falls back to TURN when STUN also fails.
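The fallback order can be sketched as a simple decision chain; the boolean flags stand in for real ICE connectivity checks and are hypothetical:

```python
# Sketch of ICE's candidate preference: direct (host) path first,
# then a STUN-derived (server-reflexive) path, then a TURN relay.
def choose_path(direct_ok: bool, stun_ok: bool, turn_ok: bool) -> str:
    if direct_ok:
        return "direct"
    if stun_ok:
        return "stun"
    if turn_ok:
        return "turn-relay"
    raise ConnectionError("no usable candidate pair")

print(choose_path(False, True, True))  # 'stun' - NAT traversal succeeded
```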

Design principles

  • Applications must tolerate a wide range of Internet path conditions;
  • Applications should control their own transmission rate;
  • Applications should apply congestion control to all traffic;
  • Applications should use bandwidth comparably to TCP;
  • Applications should drive retransmission timers and counters from observed packet loss;
  • Applications should not send datagrams larger than the path MTU;
  • Applications should handle datagram loss, duplication, and reordering;
  • Applications should be robust to delivery delays of up to 2 minutes;
  • Applications should enable the IPv4 UDP checksum, and must use the IPv6 checksum;
  • Applications may use keep-alives when needed (minimum interval: 15 seconds).

It is recommended to use WebRTC

The relationship between bandwidth, latency, and page load time

[figure: web-pageload]

Causes of latency

TCP handshakes, flow and congestion control, packet loss, and head-of-line blocking

Measuring the user experience of site resources

Measure with the Navigation Timing, User Timing, and Resource Timing APIs.

Browser optimization

  • Resource prefetching and prioritization
  • DNS pre-resolution
  • TCP pre-connect
  • Page pre-rendering
How the server can take advantage of these optimizations:
  • Critical resources such as CSS and JavaScript should appear in the document as early as possible;
  • CSS should be delivered as early as possible to unblock rendering and allow JavaScript to execute;
  • Non-critical JavaScript should be deferred so it does not block DOM and CSSOM construction;
  • The HTML document is parsed incrementally, so the server can send it in chunks for best performance.

[figure: web-preload]

HTTP optimization

  • Reduce DNS lookups
  • Reduce HTTP requests
  • Use a CDN
  • Add an Expires header and configure ETags
  • Gzip resources
  • Avoid HTTP redirects
  • Use persistent connections

Keep-alive and the limits of connection pooling

For each server IP, the client maintains a pool of persistent connections. When the number of outstanding requests to that server exceeds the pool size, the client is forced to wait for a connection in the pool to become free; meanwhile, keeping many sockets open consumes substantial system resources.

Limitations of the HTTP protocol

Every HTTP request must carry a full set of headers, and those headers are not compressed, so the headers can easily end up longer than the body.

Copyright notice
This article was written by [Tan Yingzhi]. Please include a link to the original when reposting. Thank you.