
40 pictures to help you understand TCP and UDP

2020-12-07 16:05:23 Senior brother Wu, programmer

The organization of this article is as follows

The transport layer sits between the application layer and the network layer. It is the fourth layer in the OSI model and an important part of the network architecture. The transport layer is mainly responsible for end-to-end communication across the network.

The transport layer plays an important role in communication between applications running on different hosts. Let's talk about the transport layer protocols.

Transport layer overview

The transport layer of a computer network is much like a highway: a highway carries people or goods from one end to the other, and the transport layer carries messages from one end to the other, where the "end" is an end system. In a computer network, any device that can exchange information can be an end system, for example a mobile phone, a networked media device, a computer, or a carrier's equipment.

While transporting messages, the transport layer follows certain protocol rules, such as the limit on how much data can be sent at once and which transport protocol to choose. The transport layer gives two otherwise unrelated hosts the ability to carry out logical communication, as if the two hosts were directly connected.

Transport layer protocols are implemented in end systems, not in routers. Routers only recognize addresses and forward packets. It is like a courier delivering a parcel: it is the addressee, the person in building xxx, unit xxx, room xxx, who decides what to do with it!

How does TCP determine the port?

Remember the packet structure? Here is a quick review.

As a packet passes down through each layer, that layer's protocol attaches its own header to the packet; a complete set of headers is shown above.

When data reaches the transport layer, a TCP header is attached to it. The header contains the source port number and the destination port number.

At the sending end, the message handed down from the application layer is turned into transport-layer packets, which in computer networking are called segments (segment). The transport layer usually splits the message into smaller chunks, adds a transport-layer header to each chunk, and sends them toward the destination.

During sending, the transport-layer protocols to choose from (the "means of transport") are mainly TCP and UDP. The choice between these two transport protocols and their characteristics are the focus of the rest of this article.

Background knowledge for TCP and UDP

In the TCP/IP suite, the protocols that implement the transport layer are, most representatively, TCP and UDP. Since we have mentioned TCP and UDP, let's start from the definitions of these two protocols.

TCP stands for Transmission Control Protocol (TCP, Transmission Control Protocol). From the name you can already guess that TCP controls the transmission, and that controllability implies reliability. Indeed, TCP provides the application layer with a reliable, connection-oriented service and can deliver packets to the other side reliably.

UDP stands for User Datagram Protocol (UDP, User Datagram Protocol). The name tells you that UDP is focused on datagrams: it gives the application layer a way to send datagrams directly, without establishing a connection first.

Why do computer networking terms have so many names for data?

In a computer network, different layers use different names for their data units. We said above that a transport-layer packet is called a segment; more precisely, a TCP packet is called a segment, a UDP packet is called a datagram, and a network-layer packet is also called a datagram.

For consistency, however, we usually call both TCP and UDP messages segments. This is just a convention; there is no need to worry too much about the naming.

Socket

Before TCP or UDP can send a concrete message, it has to pass through a "door" first. That door is the socket (socket): a socket connects upward to the application layer and downward to the network layer. In an operating system, the OS provides applications with interfaces to the hardware, called APIs (Application Programming Interface). In a computer network, the socket is likewise an interface, and it comes with its own API.

When communicating over TCP or UDP, the socket API is used heavily: with this API you set the IP address and port number and send and receive data.

So now we know that sockets have no necessary tie to TCP/IP; a socket simply makes TCP/IP convenient to use. How convenient? You can directly call the socket API methods shown below.

Socket types

There are three main types of sockets; let's look at each (a short sketch of creating them follows the list).

  • Datagram sockets (Datagram sockets): a datagram socket provides a connectionless service and does not guarantee reliable delivery. Data may be lost or duplicated in transit, and there is no guarantee it arrives in order. Datagram sockets use the UDP (User Datagram Protocol) protocol for data transmission. Because a datagram socket cannot guarantee reliability, the program itself must handle possible data loss.

  • Stream sockets (Stream sockets): a stream socket provides a connection-oriented, reliable data transfer service that guarantees both delivery and ordering of the data. The reason a stream socket can offer a reliable service is that it uses the Transmission Control Protocol, i.e. TCP (The Transmission Control Protocol).

  • Raw sockets (Raw sockets): a raw socket allows IP packets to be sent and received directly, without any protocol-specific transport-layer formatting; a raw socket can read and write IP packets that the kernel has not processed.
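As a quick illustration, here is a minimal sketch (in Python; the constants come from the standard socket module) of creating the three socket types:

```python
import socket

# Datagram socket: connectionless, unreliable delivery, backed by UDP
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Stream socket: connection-oriented, reliable delivery, backed by TCP
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Raw socket: read/write raw IP packets directly (normally requires root privileges)
# raw_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)

udp_sock.close()
tcp_sock.close()
```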

Socket processing

In a computer network, communication requires at least two end systems, and therefore at least one pair of sockets. The socket communication process is as follows; a minimal code sketch appears after the list.

  1. The socket API is used to create an endpoint of a communication link; once created, it returns a socket descriptor that describes the socket.

It's just like using a file descriptor to access a file: a socket descriptor is used to access a socket.

  2. Once the application has a socket descriptor, it can bind a unique name to the socket. A server must bind to a name so that it can be reached on the network.

  3. After the server has been allocated a socket and bound a name to it with bind, it calls the listen API. listen indicates a willingness to wait for client connections; listen must be called before the accept API.

  4. A client application calls connect on a stream socket (TCP-based) to initiate a connection request to the server.

  5. The server application uses the accept API to accept the client's connection request; the server must have called bind and listen successfully before calling accept.

  6. Once a connection is established between the stream sockets, the client and the server can issue read/write API calls.

  7. When the server or the client wants to stop, it calls the close API to release all system resources acquired by the socket.
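Here is a minimal sketch of that flow in Python; the address 127.0.0.1 and port 12345 are arbitrary choices for illustration:

```python
import socket

def run_server():
    # socket -> bind -> listen -> accept -> read/write -> close
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # returns a socket descriptor
    srv.bind(("127.0.0.1", 12345))   # bind a name (IP + port) so clients can reach us
    srv.listen(5)                    # willing to wait for connections; must precede accept()
    conn, addr = srv.accept()        # blocks until a client connects
    data = conn.recv(1024)           # read from the connection
    conn.sendall(data.upper())       # write a reply
    conn.close()
    srv.close()                      # release the socket's system resources

def run_client():
    # socket -> connect -> write/read -> close
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 12345))   # initiate the connection to the server
    cli.sendall(b"hello")
    print(cli.recv(1024))               # b'HELLO'
    cli.close()
```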

Although the socket API sits between the application layer and the transport layer in the communication model, the socket API itself is not a communication model. The socket API simply lets applications interact with the transport and network layers.

Before we move on, here is a small interlude: a quick word about IP.

A quick word about IP

IP is short for Internet Protocol and is the network-layer protocol in the TCP/IP suite. IP was designed to solve two kinds of problems:

  • Improve network scalability: interconnect networks on a large scale.

  • Decouple the application layer from the link layer, so that the two can evolve independently.

IP is the core of the whole TCP/IP protocol family and the foundation of the Internet. To interconnect networks at large scale, IP emphasizes adaptability, simplicity, and operability, and sacrifices some reliability. IP guarantees neither the delivery time nor the reliability of packets; transmitted packets may be lost, duplicated, delayed, or arrive out of order.

We know that the layer below TCP is the IP layer. Since IP is unreliable, how can we make sure the data arrives intact?

That involves TCP's transmission mechanisms, which we will cover when we get to TCP.

Port number

Before talking about port numbers, let's relate file descriptors, sockets, and port numbers.

To make resources easier to use and to improve the machine's performance, utilization, and stability, our computers run a layer of software called the operating system, which manages the resources the computer can use. When a program wants to use a resource, it asks the operating system, and the OS allocates and manages the resource on the program's behalf. Usually, when we want to access a kernel device or a file, the program calls a system function; the system opens the device or file for us and returns a file descriptor fd (also thought of as an ID; it is just an integer). From then on we can only access that device or file through the file descriptor; you can think of the number as standing for the opened file or device.

Likewise, when a program wants to use the network, it needs the corresponding kernel facilities and the network card, so it asks the operating system; the OS creates a socket (Socket) for it and returns the socket's ID. From then on, whenever the program wants to use network resources, it just operates on that socket ID. Every process that takes part in network communication has at least one socket. Writing data to the socket ID amounts to sending data onto the network; reading data from the socket amounts to receiving data. And every one of these sockets has a unique identifier: a file descriptor fd.

A port number is a 16-bit non-negative integer, ranging from 0 to 65535. This range is divided into three different segments, allocated by the Internet numbering authority IANA:

  • Well-known / standard port numbers, in the range 0 - 1023

  • Registered port numbers, in the range 1024 - 49151

  • Private port numbers, in the range 49152 - 65535

A computer can run multiple applications. When a segment arrives at the host, which application should it be delivered to? How do we know that a segment is meant for the HTTP server rather than the SSH server?

By port number? When a message arrives at the server, it is the port number that tells applications apart, so it should be distinguishable that way.

Let's try to refute cxuan with an example: if two pieces of data both arrive at the server on port 80, how do you tell them apart? Or if two pieces of data arrive on the same port but over different protocols, how do you distinguish them?

So clearly a port number alone is not enough to identify a message.

The network generally uses the source IP address, destination IP address, source port number, and destination port number together. If any one of them differs, the segments are considered different. These four values are also the basis of multiplexing and demultiplexing.

Determining the port number

Before actual communication can happen, the port number must be determined. There are two ways to determine a port number:

  • Standard, pre-assigned port numbers

Standard port numbers are assigned statically: each well-known program has its own port number, and each port number has a distinct purpose. A port number is a 16-bit value between 0 and 65535. Port numbers in the range 0 - 1023 are statically assigned, fixed port numbers; for example HTTP is identified by port 80, FTP by port 21, and SSH by port 22. Port numbers of this kind have a special name: well-known port numbers (Well-Known Port Number).

  • Dynamically assigned port numbers

The second way is dynamic assignment. In this approach the client application does not set its own port number at all; the operating system assigns one, and the OS can give each application a non-conflicting port number. With this dynamic assignment mechanism, even multiple TCP connections initiated by the same client can be told apart (see the sketch below).
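For example (a sketch using Python's standard socket module), binding to port 0 asks the operating system to pick a free dynamic port:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))    # port 0 means: let the OS assign an ephemeral port
print(s.getsockname())      # e.g. ('127.0.0.1', 52814) -- the dynamically assigned port
s.close()
```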

Multiplexing and demultiplexing

We mentioned above that every socket on a host is assigned a port number. When a segment arrives at the host, the transport layer examines the destination port number in the segment and directs the segment to the corresponding socket; the data in the segment then passes through the socket into the attached process. Let's look at the concepts of multiplexing and demultiplexing.

There are two flavors of multiplexing and demultiplexing: connectionless multiplexing (and demultiplexing) and connection-oriented multiplexing (and demultiplexing).

Connectionless multiplexing and demultiplexing

Developers write code to decide whether the port number is a well-known port or a dynamically assigned one. Suppose host A sends data from its port 10637 to port 45438 on host B, and the transport layer uses UDP. After the data is produced at the application layer, it is processed at the transport layer and then encapsulated at the network layer into an IP datagram. The IP datagram is delivered to host B on a best-effort basis via the link layer, and host B then examines the destination port number in the segment to decide which socket it belongs to. The process looks like this:

A UDP socket is identified by a two-tuple, consisting of the destination IP address and the destination port number.

Therefore, if two UDP segments have different source IP addresses and/or source port numbers, but the same destination IP address and destination port number, the two segments will be directed through the same socket to the same destination process.

Here is a question: when host A sends a message to host B, why does B need to know the source port number? Say I pass a note to a girl telling her I'm interested in her; does she really need to know exactly where the note came from? Isn't knowing I'm interested enough? Actually, she does need to know: if she wants to express interest back, she has to know where to send her reply.

That is exactly it: in the segment from A to B, the source port number serves as part of the return address. When B needs to send a segment back to A, B takes the value from the source port field of the segment A sent, as shown in the figure below (a small UDP sketch follows).
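A small UDP sketch of the "return address" idea; port 45438 follows the earlier example, everything else is illustrative:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 45438))       # host B's UDP socket, identified by (dest IP, dest port)

data, src = sock.recvfrom(2048)     # src = (source IP, source port) taken from the datagram
print("received", data, "from", src)
sock.sendto(b"reply", src)          # the source port acts as the return address
```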

Connection oriented multiplexing and demultiplexing

If connectionless multiplexing and demultiplexing refer to UDP, then connection-oriented multiplexing and demultiplexing refer to TCP. The difference between how their sockets are identified is that a UDP socket is a two-tuple while a TCP socket is a four-tuple: source IP address, destination IP address, source port number, destination port number, as we mentioned above. When a TCP segment arrives at a host from the network, the host uses these four values to demultiplex it to the corresponding socket.

The figure above shows connection-oriented multiplexing and demultiplexing. Host C has initiated two HTTP requests to host B, and host A has initiated one HTTP request to host B. Hosts A, B, and C each have their own unique IP address. When host C issues its HTTP requests, host B can separate the two connections, because the two requests from host C use different source port numbers; to host B they are two distinct requests, so B can demultiplex them. As for host A and host C, they have different IP addresses, so host B can tell their requests apart as well (a small sketch follows).
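A sketch of the TCP side, assuming Python sockets and an arbitrary port 8080: accept() yields one socket per connection, and each connection is identified by its own four-tuple:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 8080))
srv.listen(5)

while True:
    conn, peer = srv.accept()      # a new socket for every incoming connection
    local = conn.getsockname()     # (destination IP, destination port) -- shared by all clients
    # peer = (source IP, source port); together with local it forms the four-tuple
    print("four-tuple:", peer, "->", local)
    conn.close()
```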

UDP

Finally, we get to the discussion of the UDP protocol. Let's go!

UDP's full name is the User Datagram Protocol (UDP, User Datagram Protocol). UDP provides a way to send encapsulated IP packets without establishing a connection. If an application developer chooses UDP instead of TCP, the application is essentially talking to IP directly.

Data coming from the application has the source and destination port number fields (used for multiplexing/demultiplexing) and a few other fields attached to it; the resulting segment is then passed to the network layer. The network layer encapsulates the transport-layer segment into an IP datagram and delivers it to the target host on a best-effort basis. The key point is that when UDP delivers a datagram to the target host, there is no handshake between the sending and receiving transport-layer entities. That is why UDP is called a connectionless protocol.

UDP characteristics

UDP is commonly used as the transport-layer protocol for streaming media, voice calls, and video conferencing, and as we all know, DNS also runs on top of UDP. These applications and protocols choose UDP mainly for the following reasons.

  • Speed. With UDP, as soon as the application process passes data to UDP, UDP packs it into a UDP segment and immediately hands it to the network layer. TCP, by contrast, has congestion control: it judges how congested the network is before sending, and if the network is heavily congested it throttles the TCP sender. UDP is used when real-time behavior is the goal.

  • No connection setup. TCP needs a three-way handshake before it can transfer data, whereas UDP can transmit without any preparation, so UDP has no connection-establishment delay. To compare them as developers: TCP is the engineer who designs everything first and cannot start without a design, considering every factor before writing a line of code -- very dependable; UDP is the developer who just starts the project immediately, never mind the design or the technology choices, just do it -- not very dependable, but well suited to rapid iteration, because work starts right away!

  • No connection state. TCP has to maintain connection state in the end systems; that state includes receive and send buffers, congestion-control parameters, and sequence and acknowledgement number parameters. UDP maintains none of these parameters and keeps no send or receive buffers for connection state. For this reason, a server dedicated to a particular application can usually support more active users when that application runs over UDP.

  • Small header overhead. Every TCP segment carries 20 bytes of header overhead, whereas UDP needs only 8 bytes.

One small point worth noting: not every application-layer protocol that uses UDP is unreliable. An application can provide reliable delivery itself by adding its own acknowledgement and retransmission mechanisms. So the biggest selling point of UDP really is its speed.

UDP Message structure

Now let's look at the structure of a UDP message. Every UDP message consists of a UDP header and a UDP data area. The header is made up of four 16-bit (2-byte) fields, which describe the message's source port, destination port, length, and checksum respectively.

  • Source port (Source Port): this field occupies the first 16 bits of the UDP header and usually contains the UDP port used by the application that sent the datagram. The receiving application uses this value as the destination for its reply. The field is optional and is sometimes left unset; if no source port is set it defaults to 0, which is typically used for communication that needs no reply.

  • Destination port (Destination Port): identifies the receiving port; the field is 16 bits long.

  • Length (Length): this 16-bit field gives the length of the UDP datagram, including both the UDP header and the UDP data. Since the UDP header is 8 bytes long, the minimum value is 8; the maximum length is 65535 bytes.

  • Checksum (Checksum): UDP uses a checksum to protect the data. The UDP checksum provides error detection: it verifies whether the data was altered on the way from the source to the destination host. The sender takes the one's complement of the sum of all the 16-bit words in the segment, with any overflow encountered during the summation wrapped around. Here is an example of adding three 16-bit numbers.

The sum of the first two 16-bit numbers is:

Then the third 16-bit number is added to that result:

The final addition overflows; the overflowed bit is wrapped around (added back into the sum), and then the one's complement is taken. Taking the one's complement means turning every 1 into 0 and every 0 into 1. So the complement of 1000 0100 1001 0101 is 0111 1011 0110 1010, and that is the checksum. At the receiver, if no errors occurred, the sum of all four 16-bit numbers, including the checksum, will be 1111 1111 1111 1111; if the final result is anything else, an error occurred during transmission (a small sketch follows).
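Here is a small sketch of the same one's-complement computation in Python; the three sample words are made-up values (the article's own numbers are in the figures), but the procedure -- add with wraparound, then invert -- is the same:

```python
def ones_complement_sum16(words):
    """Add 16-bit words, folding any carry back into the low 16 bits."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)    # wrap the overflow around
    return total

def checksum(words):
    return (~ones_complement_sum16(words)) & 0xFFFF  # one's complement of the sum

words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]  # sample 16-bit words
csum = checksum(words)

# Receiver check: summing all words plus the checksum must give 0xFFFF if nothing changed
assert ones_complement_sum16(words + [csum]) == 0xFFFF
```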

Here is a question: why does UDP provide error detection at all?

This reflects the end-to-end design principle: checking is done at the endpoints, so that the probability of the various errors introduced during transmission is reduced to an acceptable level.

Take transferring a file from host A to host B, i.e. A and B want to communicate. It takes three steps: first host A reads the file from disk and splits the data into packets (packet); then the packets travel through the network connecting host A and host B; finally host B receives the packets and writes them to disk. In this seemingly simple but actually complicated process, normal communication can be disturbed for many reasons, for example read/write errors on the disks, buffer overflows, memory errors, or packets being lost due to corruption or congestion. Clearly the network used for communication is unreliable.

Since communication only involves the three steps above, we would like to add an error detection and correction mechanism at one of those steps -- but which one?

The network layer certainly cannot do it, because its main job is to move data quickly; the network layer does not need to care about data integrity. Integrity and correctness are left to the end systems to check, so during data transmission the most we can ask of the network layer is best-effort delivery; we cannot expect it to provide data-integrity guarantees.

The reason UDP is unreliable is that, although it provides error detection, it has no ability to recover from errors and no retransmission mechanism.

TCP

UDP is a protocol without elaborate control mechanisms that provides a connectionless communication service. In other words, it leaves part of the control to the application and offers only the bare minimum a transport-layer protocol needs.

Unlike UDP, TCP -- also a transport-layer protocol -- does a great deal more.

TCP's full name is Transmission Control Protocol. It is called a connection-oriented (connection-oriented) protocol because, before one application can start sending data to another, the two processes must first shake hands. The handshake establishes a logical connection; it is not a literal handshake between the two hosts.

This connection is a proprietary, virtual communication link -- also called a virtual circuit -- between two communicating applications, established across whatever devices, lines, or networks lie between them, for the purpose of passing messages to each other.

Once host A and host B have established the connection, the communicating applications use only this virtual link to send and receive data, which guarantees delivery of the data. TCP takes care of establishing the connection, tearing it down, and keeping it alive.

A TCP connection provides a full-duplex service (full-duplex service). What does full duplex mean? It means that if host A has a TCP connection with host B, application data can flow from host B to host A at the same time as it flows from host A to host B.

TCP can only make point-to-point (point-to-point) connections, so multicast -- one host sending a message to many receivers in a single send -- does not exist in TCP; a TCP connection joins exactly one pair of hosts.

Establishing a TCP connection requires a three-way handshake, which we will discuss later. Once the TCP connection is established, the hosts can send data to each other. The client process pushes its data stream through the socket; once the data has passed through the socket, it is in the hands of the TCP implementation running on the client.

TCP stores the data temporarily in the connection's send buffer (send buffer), one of the buffers set up during the three-way handshake. TCP then, at a time of its choosing, sends data from the send buffer to the receive buffer on the target host. In fact, each end has both a send buffer and a receive buffer, as shown below.

Data between hosts is sent in units of segments (segment). So what exactly is a segment?

TCP splits the data stream to be transmitted into chunks (chunk) and adds a TCP header to each chunk; that is what forms a TCP segment. The length of each segment is limited: it cannot exceed the maximum segment size (Maximum Segment Size), commonly known as MSS. On its way down, a segment passes through the link layer, and the link layer has a Maximum Transmission Unit, the MTU -- the size of the largest packet that can pass through the data link layer. The MTU is usually tied to the communication interface.

So what is the relationship between MSS and MTU?

Because computer networks are layered -- and this matters -- different layers have different names for their data units: at the transport layer we speak of segments, and at the network layer of IP datagrams. So the MTU can be thought of as the largest IP datagram the network layer can carry, while the MSS (Maximum Segment Size) is a transport-layer concept: the maximum amount of data TCP can send in one segment (a small calculation sketch follows).
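A tiny sketch of the usual relationship, assuming the common Ethernet MTU of 1500 bytes and option-free IPv4 and TCP headers:

```python
MTU = 1500          # largest IP datagram the link layer will carry (typical Ethernet)
IP_HEADER = 20      # IPv4 header without options
TCP_HEADER = 20     # TCP header without options

MSS = MTU - IP_HEADER - TCP_HEADER
print(MSS)          # 1460 -- the most TCP payload that fits in one segment on this path
```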

TCP segment structure

Having briefly discussed the TCP connection, let's now look at the structure of a TCP segment, shown in the figure below.

The TCP segment structure has many more fields than the UDP structure, but the first two 32-bit words are the same: the source port number and the destination port number, which, as we know, are used for multiplexing and demultiplexing. In addition, like UDP, TCP also contains a checksum (checksum field). Beyond that, the TCP header includes the following fields (a parsing sketch follows the list).

  • A 32-bit sequence number field (sequence number field) and a 32-bit acknowledgement number field (acknowledgment number field). These fields are used by the TCP sender and receiver to implement reliable data transfer.

  • A 4-bit header length field (header length field), which gives the length of the TCP header in 32-bit words. The TCP header can vary in length, but typically the options field is empty, so the TCP header is 20 bytes long.

  • A 16-bit receive window field (receive window field), used for flow control. It indicates the number of bytes the receiver is able / willing to accept.

  • A variable-length options field (options field), used, for example, when the sender and receiver negotiate the maximum segment size (MSS).

  • Flag bits (flag field). The ACK flag indicates that the value in the acknowledgement field is valid, i.e. the segment carries an acknowledgement of a segment that was received successfully; the RST, SYN, and FIN flags are used to establish and close connections; CWR and ECE are used for congestion notification; the PSH flag indicates that the data should be handed to the upper layer immediately; the URG flag indicates that the segment contains data the upper layer has marked as urgent. The last byte of the urgent data is pointed to by the 16-bit urgent data pointer field (urgent data pointer field). In practice, PSH and URG are rarely used.
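As an illustration (a sketch, not a full parser: options and the checksum are not handled), the fixed 20-byte header can be unpacked like this:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header; the options field is ignored."""
    (src_port, dst_port, seq, ack,
     off_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,                           # 32-bit sequence number
        "ack": ack,                           # 32-bit acknowledgement number
        "header_len": (off_flags >> 12) * 4,  # header length is given in 32-bit words
        "flags": off_flags & 0x3F,            # URG, ACK, PSH, RST, SYN, FIN bits
        "window": window,                     # 16-bit receive window
        "checksum": checksum,
        "urgent_ptr": urg_ptr,
    }
```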

All of TCP's functions and features are reflected in the structure of its segments. Having finished with the segment structure, let's talk about what TCP's functions and features actually are.

Sequence numbers and acknowledgement numbers provide reliable transmission

The two most important fields in the TCP header are the sequence number and the acknowledgement number; they are the foundation of TCP's reliability. So you must be curious how that reliability is achieved. To understand it, we first have to know what these two fields actually hold.

The sequence number of a segment is the byte-stream number of its first byte. TCP treats the data as a stream of bytes, and since the byte stream itself is ordered, the byte number of each segment indicates where that segment falls in the stream. For example, suppose host A sends a chunk of data to host B. The application layer produces a stream of data, and TCP divides the stream according to the MSS. Say the data is 10000 bytes and the MSS is 2000 bytes; then TCP splits the data into segments covering bytes 0 - 1999, 2000 - 3999, and so on.

So the first byte number of the first chunk, bytes 0 - 1999, is 0, and the first byte number of the chunk 2000 - 3999 is 2000.

Each of these numbers is then placed in the sequence number field of the corresponding TCP segment's header (a short sketch of this splitting follows).
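A short sketch of the splitting in the example above (10000 bytes, MSS = 2000, and an initial sequence number of 0 for simplicity):

```python
MSS = 2000
data = bytes(10000)                 # 10000 bytes of application data

segments = []
for seq in range(0, len(data), MSS):
    chunk = data[seq:seq + MSS]
    segments.append((seq, chunk))   # the sequence number is the offset of the chunk's first byte

print([seq for seq, _ in segments]) # [0, 2000, 4000, 6000, 8000]
```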

As for the acknowledgement number, it is a bit more involved than the sequence number. Before we get to it, let's review the following communication models.

  • Simplex communication: simplex transmission supports data flow in only one direction; at any time one party can only send and the other only receive, and two-way communication is impossible. Radio and television broadcasting are examples.

  • Duplex communication is point-to-point: two or more connected devices can communicate with each other in both directions. Duplex comes in two forms: full duplex (FDX) and half duplex (HDX).

  • Full duplex: in a full-duplex system the two connected parties can talk to each other simultaneously; the most familiar example is the telephone. Full-duplex communication is, in effect, two simplex channels combined, and it requires both the sending and the receiving device to have independent transmit and receive capability.

  • Half duplex: in a half-duplex system the two connected parties can both communicate, but not at the same time -- like a walkie-talkie: only the person holding the button down can speak, and the other can only speak once the first has finished.

Simplex, half-duplex, and full-duplex communication are shown in the figure below.

TCP is a full-duplex protocol, so while host A is sending a message to host B, it is also receiving data from host B. The acknowledgement number that host A puts in its segment is the sequence number of the next byte it expects to receive from B. That sounds a bit convoluted, so here is an example: suppose host A has received from host B the segment carrying bytes 0 - 999 (those numbers are written in the sequence number field). Host A then expects to receive byte 1000 onward from B, so in the segments host A sends to host B, the acknowledgement number is 1000.

Cumulative acknowledgement

Here is another example. Suppose host A, after receiving bytes 0 - 999, expects the segment starting at byte 1000, but host B instead sends host A a segment starting at byte 1500. Will host A keep waiting?

The answer is obviously yes, because TCP only acknowledges bytes in the stream up to the first missing byte. Byte 1500 does come after byte 1000, but host B has not yet sent host A the bytes from 1000 to 1499, so host A keeps waiting (and keeps acknowledging 1000, as the sketch below shows).
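A sketch of the cumulative-acknowledgement rule: the ACK number is always the first byte that has not yet arrived in order. The ranges below mirror the example (bytes 0-999 received, bytes 1500-1999 received out of order):

```python
def cumulative_ack(received_ranges, expected=0):
    """Return the ACK number: the first in-order byte still missing.

    received_ranges holds (start, end) half-open byte ranges that have arrived."""
    for start, end in sorted(received_ranges):
        if start > expected:            # a gap -- stop acknowledging here
            break
        expected = max(expected, end)
    return expected

print(cumulative_ack({(0, 1000), (1500, 2000)}))   # 1000 -- still waiting for byte 1000
```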

Now that we know about sequence numbers and acknowledgement numbers, let's look at how TCP actually sends. Here is a normal exchange.

TCP achieves reliable transmission through positive acknowledgements (ACK). After host A sends data, it waits for host B's response. If an acknowledgement (ACK) comes back, the data reached the other side successfully; otherwise, the data was probably lost.

As shown below, if host A receives no acknowledgement within a certain time, it assumes the segment it sent to host B was lost, and it resends it.

Host A may fail to receive host B's response simply because of network jitter; after a certain interval, host A retransmits the segment.

Host A not receiving host B's response may also happen because host B's reply was lost on its way to host A.

As shown above, the acknowledgement returned by host B was lost in transit due to network congestion or some other cause and never reached host A. Host A waits for a while, and if it still has not received host B's response within that time, it retransmits the segment.

Now there is a subtle problem. Suppose host A sends a segment to host B, and host B receives it and sends an acknowledgement, but because of network conditions that acknowledgement does not arrive in time. After a while host A retransmits the segment, and host B's original response then arrives at host A out of order, after the second transmission. How should host A handle this?

The TCP RFCs do not specify this; in other words, each implementation is free to decide how to handle segments that arrive out of order. There are two ways to handle them:

  • The receiver immediately discards out-of-order segments.

  • The receiver keeps the segments that arrive out of order and waits for the missing ones to fill the gap.

Generally speaking, the second approach is the one used.

Transmission control

Using windows to improve speed

We said that TCP sends data segment by segment: if after a while host A receives no response from host B, it retransmits the segment; once it receives host B's response, it goes on to send the following segments. As you can see, this question-and-answer style involves a lot of waiting -- responses that have not arrived yet, responses that never arrive -- so for a performance-oriented Internet, its throughput cannot be very high.

So how do we improve performance?

To solve this, TCP introduced the concept of a window. Even when the round-trip time is long and exchanges are frequent, the window keeps network performance from degrading. That sounds great -- so how is it done?

As shown in the figure below

Previously, each request sent exactly one segment; with a window, each request can send multiple segments, that is, a window's worth of segments can be in flight at once. The window size is the maximum amount of data that can continue to be sent without waiting for an acknowledgement.

This window mechanism makes heavy use of buffers, and a single acknowledgement can confirm multiple segments at once.

As shown below, the highlighted part of the outgoing data is the window we mentioned. Within the window, segments can be sent even before their acknowledgements arrive. However, until the acknowledgement covering the whole window has arrived, host A must still retransmit any segment that is lost. For that reason host A keeps a buffer holding the segments that might need retransmitting, until their acknowledgements come in.

Outside the sliding window lie the bytes not yet sent and the bytes already acknowledged as received. Once a segment has been acknowledged it will never need retransmitting, and it can be removed from the buffer.

When acknowledgements arrive, the window slides forward to the position of the acknowledgement number in the response, as shown above. This lets many segments be sent in order at the same time and improves throughput. This kind of window is called a sliding window (Sliding window); a minimal sender-side sketch follows.
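A minimal sketch of the sender side of such a window (the names and sizes are invented for illustration; real TCP adds timers, congestion control, and much more):

```python
class SlidingWindowSender:
    def __init__(self, data: bytes, mss: int = 1000, window: int = 4000):
        self.data, self.mss, self.window = data, mss, window
        self.base = 0        # oldest byte not yet acknowledged
        self.next_seq = 0    # next byte to be sent

    def sendable(self):
        """Segments that may be sent now, without waiting for an acknowledgement."""
        out = []
        while self.next_seq < len(self.data) and self.next_seq < self.base + self.window:
            out.append((self.next_seq, self.data[self.next_seq:self.next_seq + self.mss]))
            self.next_seq += self.mss
        return out

    def on_ack(self, ack: int):
        # Slide the window forward; acknowledged bytes can leave the retransmission buffer.
        self.base = max(self.base, ack)
```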

Window control and retransmission

Wherever segments are sent and received, segments will also be lost and retransmitted, and windows are no exception. What happens if a segment is lost while a window of data is being sent?

First consider the case where an acknowledgement fails to return. In this case the segment host A sent did reach host B, so there is no need to retransmit it. This differs from sending a single segment at a time, where a missing acknowledgement always forces a retransmission.

When the window is reasonably large, losing a small number of acknowledgements does not cause segments to be retransmitted.

We know that, in general, if a sent segment is lost -- the receiving host never got it, or the host's response never reached the sender -- the data is retransmitted after some period of time. So with windows, what happens when a segment is lost?

As shown below, the segment carrying bytes 1000 - 1999 is lost, but host A does not stop and wait; it keeps sending the remaining segments, while host B keeps returning acknowledgements whose number is always 1000, i.e. replies with the same acknowledgement number keep coming back. If the sender receives the same acknowledgement three times in a row, it retransmits the corresponding data. This mechanism is more efficient than the timeout-based retransmission described earlier and is known as fast retransmit. The repeated acknowledgements are also called duplicate (redundant) ACKs.

When host B does not receive the segment with the sequence number it expects, it keeps acknowledging the data it has received so far. Once the sender has received the same acknowledgement three times in a row, it considers that segment lost and retransmits it. This mechanism provides a much faster retransmission service (a small sketch of the trigger logic follows).
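A sketch of the trigger logic, following the article's counting (the same acknowledgement number seen three times causes a retransmission); the resend callback is a stand-in for the real retransmission code:

```python
from collections import defaultdict

dup_acks = defaultdict(int)

def on_ack(ack_no: int, resend) -> None:
    dup_acks[ack_no] += 1
    if dup_acks[ack_no] == 3:   # same ACK received three times: assume that segment was lost
        resend(ack_no)          # retransmit immediately instead of waiting for the timeout

# Example: host B keeps acknowledging 1000 because bytes 1000-1999 never arrived
for _ in range(3):
    on_ack(1000, lambda seq: print("fast retransmit from byte", seq))
```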

Flow control

We have covered transmission control; next, cxuan will talk about flow control. We know that each side of a TCP connection has socket buffers: a receive buffer and a send buffer are set aside for every connection. Once the TCP connection is established, data arriving for the receiver is placed in the receiver's receive buffer, but the receiving application does not necessarily read that buffer immediately -- it has to wait for the operating system to give it a time slice. If the sending application generates data too fast, or the receiver reads the data out of its receive buffer relatively slowly, the receiver's buffer will eventually overflow.

Fortunately, TCP has a flow-control service (flow-control service) to eliminate buffer overflow. Flow control is a speed-matching service: it matches the sender's sending rate to the rate at which the receiving application reads.

TCP provides flow control by using a receive window (receive window). The receive window tells the sender how much buffer space is still available, and the sender limits the amount of data it sends to what the receiver can actually absorb.

The receiving host tells the sender how much data it can accept, and the sender sends no more than that limit; that limit is the window size. Remember the receive window field in the TCP header? As we said above, that field is used for flow control: it indicates the number of bytes the receiver is able / willing to accept.

So we know this field is used for flow control, but how does the control actually work?

The sending host sends window probe packets; a probe packet is used to find out whether the receiving host can accept data again. When the receive buffer is at risk of overflowing, the window size is set to a smaller value and reported to the sender, which throttles the amount of data it sends accordingly.

Here is a flow control diagram

The sending host limits its flow according to the receiving host's window size. This also prevents the sender from pushing out so much data at once that the receiving host cannot process it.

As shown above, when host B has received the segment carrying bytes 2000 - 2999, its buffer is full and it has to pause receiving for a while. Host A then sends window probe packets -- a window probe is tiny, just one byte. Host B later updates its receive window size and sends a window update notification to host A, and host A resumes sending segments.

In the exchange above, the window update notification might be lost, and if it were lost the sender would never resume sending; that is why window probe packets are sent periodically, to avoid that situation.

Connection management

Before moving on to other interesting features, let's focus on TCP connection management, because without a TCP connection there would be none of the TCP machinery that follows. Suppose a process running on one host wants to establish a TCP connection with a process on another host; the client-side TCP then establishes a connection with the server-side TCP through the following steps.

  • First, the client sends a special TCP segment. The header of this segment contains no application-layer data, but its SYN flag is set to 1; for that reason this special segment is also called a SYN segment. The client also randomly chooses an initial sequence number (client_isn) and puts it in the sequence number field of this initial TCP SYN segment. The SYN segment is then encapsulated in an IP datagram and sent to the server.

  • Once the IP datagram reaches the server, the server extracts the TCP SYN segment from it, allocates the buffers and variables for the connection, and sends the client a connection-granted segment. This connection-granted segment also contains no application-layer data, but it does carry three very important pieces of information.

The allocation of these buffers and variables makes TCP vulnerable to a denial-of-service attack called SYN flooding.

  • First, the SYN bit is set to 1.

  • Then, the acknowledgement number field in the TCP header is set to client_isn + 1.

  • Finally, the server chooses its own initial sequence number (server_isn) and places it in the sequence number field of the TCP segment's header.

    In plain words, the server is saying: I received your SYN segment asking to establish a connection, and it carried the initial sequence number client_isn. I agree to establish the connection, and my own initial sequence number is server_isn. The connection-granted segment is called a SYNACK segment.

  • In the third step, upon receiving the SYNACK segment, the client also allocates buffers and variables for the connection. The client host then sends the server one more segment, acknowledging the server's response: the acknowledgement number in this segment is server_isn + 1, and since the connection is now being completed, the SYN bit is set to 0. This exchange of three segments to establish a TCP connection is known as the three-way handshake.

Once these three steps are complete, the client and the server can send segments to each other; in every subsequent segment the SYN bit is set to 0. The whole process is shown in the figure below.

After the connection between the client host and the server host is established, either of the two processes participating in the TCP connection can terminate it. Once the connection ends, the buffers and variables in the hosts are released. Suppose the client host wants to terminate the TCP connection; it goes through the following process.

The client application process issues a close command, and the client TCP sends a special TCP segment whose FIN bit is set to 1. When the server receives this segment, it sends an acknowledgement segment back to the sender. Then the server sends its own termination segment, with the FIN bit set to 1, and the client acknowledges that termination segment. At that point all resources used for the connection have been released on both hosts, as shown in the figure below.

During the lifetime of a TCP connection, the TCP protocol running on each host moves between various TCP states (TCP State). The main TCP states are LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT and CLOSED. They mean the following.

  • LISTEN: waiting for a connection request from any remote TCP and port.

  • SYN-SENT: a connection request has been sent; waiting for a matching connection request.

  • SYN-RECEIVED: a connection request has been received and one has been sent back; waiting for the connection to be confirmed. This is the server's state after the second step of the TCP three-way handshake.

  • ESTABLISHED: the connection is established; application data can be sent to the other host.

The four states above are the ones involved in the TCP three-way handshake.

  • FIN-WAIT-1: waiting for a connection termination request from the remote TCP, or for an acknowledgement of the termination request sent earlier.

  • FIN-WAIT-2: waiting for a connection termination request from the remote TCP.

  • CLOSE-WAIT: waiting for a connection termination request from the local user.

  • CLOSING: waiting for an acknowledgement of the connection termination request from the remote TCP.

  • LAST-ACK: waiting for the acknowledgement of the connection termination request previously sent to the remote TCP (which itself includes an acknowledgement of the remote side's termination request).

  • TIME-WAIT: waiting long enough to be sure the remote TCP has received the acknowledgement of its connection termination request.

  • CLOSED: the connection is closed; there is no connection state at all.

The seven states above are the ones designed for the TCP four waves, that is, for closing the connection.

TCP connection states switch between one another, and these transitions are driven by events. Some events are user calls: OPEN, SEND, RECEIVE, CLOSE, ABORT and STATUS; others are incoming segments carrying the SYN, ACK, RST and FIN flags; and, of course, there are timeouts.

Now that we have covered the TCP connection states, let's walk through the three-way handshake and the four waves.

Establishing a connection with the three-way handshake

The picture below shows how a TCP connection is established. Assume the left side of the diagram is the client host and the right side the server host; initially, both ends are in the CLOSED state.

  1. The server process gets ready to accept TCP connections from outside, generally by calling the socket, bind, and listen functions. This kind of opening is called a passive open (passive open). The server process is then in the LISTEN state, waiting for client connection requests.

  2. The client performs an active open (active open) via connect, sending a connection request to the server. The SYN synchronization bit in the request header is set (SYN = 1), and the client chooses an initial sequence number, seq = x for short. A SYN segment is not allowed to carry data, but it consumes one sequence number. The client then enters the SYN-SENT state.

  3. When the server receives the client's connection request, it has to acknowledge the client's segment. In the acknowledgement segment, both the SYN and ACK bits are set to 1, the acknowledgement number is ack = x + 1, and the server also chooses its own initial sequence number seq = y. Note that this segment cannot carry data either, but it too consumes one sequence number. The server TCP now enters the SYN-RECEIVED state.

  4. The client receives the server's response and must acknowledge the connection as well. In that acknowledgement, ACK is set to 1, the sequence number is seq = x + 1, and the acknowledgement number is ack = y + 1. The TCP specification allows this segment to carry data or not; if it carries no data, the sequence number of the next data segment is still seq = x + 1. At this point the client enters the ESTABLISHED state.

  5. After the server receives the client's acknowledgement, it also enters the ESTABLISHED state.

Establishing a TCP connection takes three segments; releasing a connection takes four.

Four waves

Once the data transfer is over, either side of the communication can release the connection. At the end of the transfer both the client host and the server host are in the ESTABLISHED state; then the connection-release process begins.

TCP tears down the connection as follows.

  1. The client application sends a connection-release segment, stops sending data, and actively closes the TCP connection. In this release segment the FIN bit in the header is set to 1, it carries no data, and its sequence number is seq = u. The client host then enters the FIN-WAIT-1 state.

  2. When the server host receives the segment sent by the client, it sends back an acknowledgement segment with ACK = 1, its own sequence number seq = v, and ack = u + 1. The server host then enters the CLOSE-WAIT state. At this point the connection in the client-to-server direction has been released: the client host has no more data to send and the connection is half-closed, but the server host can still send data.

  3. When the client host receives the server's acknowledgement, it enters the FIN-WAIT-2 state and waits for the server to send its own connection-release segment.

  4. When the server host has no more data to send, its application process tells TCP to release the connection. The server host then sends its connection-release segment, in which ACK = 1 and the sequence number is seq = w (some data may have been sent in the meantime, so seq is not necessarily v + 1), and ack = u + 1. After sending this release request, the server host enters the LAST-ACK state.

  5. When the client receives the server's release request, it must respond: it sends a release acknowledgement segment with ACK = 1, sequence number seq = u + 1 (the client has sent no further data since its own release request), and ack = w + 1, and then enters the TIME-WAIT state. Note that the TCP connection is not released yet: only after the time-wait period of 2MSL has elapsed does the client enter the CLOSED state. The time MSL is called the maximum segment lifetime (Maximum Segment Lifetime).

  6. As soon as the server receives the client's final acknowledgement, it enters the CLOSED state. The server therefore finishes the TCP connection earlier than the client does, and because the whole teardown requires four segments, the connection-release process is also known as the four waves.

What is TIME-WAIT?

We just briefly mentioned the TIME-WAIT state and 2MSL; let's look at what these two concepts really are.

MSL is the longest time a TCP segment can survive, or stay, in the network. RFC 793 defines MSL as two minutes, but the concrete value is up to the implementer; some implementations use a maximum lifetime of 30 seconds.

So why wait 2MSL?

Mainly for two reasons:

  • To make sure the last acknowledgement reaches the server. Because of network problems, the final ACK segment may be lost, leaving the server stuck in the LAST-ACK state waiting for the client's response. The server will then retransmit the FIN+ACK release segment, and the client, upon receiving it, acknowledges it again and restarts the timer. If the client did not wait 2MSL but closed immediately after sending its ACK, then if that ACK were lost, neither host would ever be able to enter the CLOSED state.

  • It also prevents stale segments from interfering. After the client sends its last ACK and then waits 2MSL, every segment produced during the lifetime of this connection will have disappeared from the network. This guarantees that, after the connection is closed, no leftover segments remain in the network to trouble the server.

Note here: the server starts its retransmission timer as soon as it sends FIN+ACK, while the client starts its time-wait timer as soon as it sends the final ACK. (A small practical sketch follows.)
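One practical consequence of TIME-WAIT: a server that is restarted right away may find its port still occupied by connections lingering in TIME-WAIT. A common workaround (a sketch; port 8080 is arbitrary) is the SO_REUSEADDR socket option:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow rebinding the listening port even while old connections sit in TIME-WAIT,
# which otherwise fails with "Address already in use" on a quick restart.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(5)
```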

What about RST?

Well said  RSTSYNFIN  Flags are used to establish and close connections , that SYN and FIN They all show up , that RST Well ? Yeah , What we discussed above is an ideal situation , That is, both the client and server will accept the transmission segment , There is also the case when the host receives TCP After message segment , Its IP And the port number does not match . Suppose the client host sends a request , And the server host goes through IP And the port number of the server , Then the server will send out a  RST  Special message segment to client .

therefore , When the server sends a RST When a special segment is sent to the client , It will tell the client that there is no matching socket connection , Please stop sending .

What is discussed above is TCP The situation of , that UDP Well ?

Use UDP As a transport protocol , If the sockets don't match ,UDP The host will send a special ICMP The datagram .
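
Both behaviors can be observed from ordinary socket code: a TCP connect to a port with no listener typically fails with "connection refused" (the peer's RST), while an unmatched UDP datagram triggers an ICMP port-unreachable message that surfaces as an error, or simply as silence, on a later receive. A rough sketch follows; the port number is made up and assumed to have no listener, and the exact error reported for UDP varies by operating system.

```python
import socket

CLOSED_PORT = 9  # example port, assumed to have no listener on localhost

# TCP: the peer answers our SYN with an RST, which Python surfaces as
# ConnectionRefusedError.
try:
    socket.create_connection(("127.0.0.1", CLOSED_PORT), timeout=1)
except ConnectionRefusedError:
    print("TCP: got RST -> connection refused")

# UDP: the datagram itself is sent without error; the ICMP port-unreachable
# that comes back may appear as ConnectionRefusedError on a later call,
# or the call may simply time out, depending on the operating system.
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
u.settimeout(1)
u.sendto(b"ping", ("127.0.0.1", CLOSED_PORT))
try:
    u.recvfrom(1024)
except (ConnectionRefusedError, socket.timeout):
    print("UDP: no matching socket (ICMP port unreachable or timeout)")
```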

SYN flood attack

Let's talk about what a SYN flood attack is.

We saw in the TCP three-way handshake that, in response to a received SYN, the server allocates and initializes connection variables and buffers, then sends a SYN+ACK in response and waits for the ACK from the client. If the client never sends that ACK to complete the last step, the connection is left hanging in a half-open state.

An attacker typically sends a huge number of TCP SYN segments. The server keeps responding, but none of these connections completes the three-way handshake. As the SYNs pile up, the server keeps allocating resources for the half-open connections until its connection resources are exhausted. This kind of attack is a type of DoS (denial-of-service) attack.

The defense against this attack is the SYN cookie. Here is how it works (a small code sketch follows the list):

  • When the server receives a SYN segment, it cannot tell where it came from, whether from an attacker's host or a legitimate client's host (the attacker is also a client, which makes the two hard to distinguish). So the server does not create a half-open connection for the segment. Instead, it generates an initial TCP sequence number by running a carefully chosen hash function over the four-tuple of source and destination IP addresses and port numbers of the SYN segment. The sequence number produced by this hash is the SYN cookie, which stands in for caching the SYN request. The server then sends a SYN+ACK packet carrying the SYN cookie. Note that the server does not remember the cookie or any other state for this SYN.

  • If the client is not an attacker, it returns an ACK segment. When the server receives this ACK, it needs to verify that it belongs to a SYN it previously answered. The check is based on the acknowledgment number: the server recomputes the hash over the same source and destination IP addresses and port numbers and verifies that the hash result + 1 equals the acknowledgment value in the ACK, the same value it sent in the SYN+ACK. (Roughly speaking; if anything here is off, corrections are welcome.) Interested readers can dig into the details. If the check passes, the server creates a fully open connection together with a socket.

  • If the client never returns an ACK, that is, it is an attacker, no harm is done: since the server never received an ACK, it never allocated variables or buffer resources, so the server is not hurt.
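
The toy sketch below illustrates the stateless idea: the initial sequence number is derived from a hash of the four-tuple, and the returning acknowledgment is verified by recomputing that hash rather than by looking up stored state. The sketch adds a server-side secret to the hash input, a standard ingredient even though the text above does not mention it; real implementations also encode things like the MSS and a timestamp in the cookie. Every name and value here is made up for illustration.

```python
import hashlib

SECRET = b"server-side secret"  # assumed secret key, never sent on the wire

def syn_cookie(src_ip, src_port, dst_ip, dst_port):
    """Derive a 32-bit initial sequence number (the cookie) from the four-tuple."""
    material = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + material).digest()
    return int.from_bytes(digest[:4], "big")

def verify_ack(src_ip, src_port, dst_ip, dst_port, ack_number):
    """A returning ACK is valid if ack = cookie + 1 for the same four-tuple."""
    return ack_number == (syn_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32

# The server answers a SYN without storing any state; the cookie goes out as seq.
cookie = syn_cookie("10.0.0.5", 51234, "10.0.0.1", 80)
# A legitimate client acknowledges cookie + 1; an attacker never replies.
print(verify_ack("10.0.0.5", 51234, "10.0.0.1", 80, (cookie + 1) % 2**32))  # True
```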

Congestion control

With TCP window control, two hosts on a network no longer exchange data one segment at a time; they can send many packets back to back. However, sending large numbers of packets brings other problems, such as higher network load and network congestion. To guard against this, TCP uses a congestion control mechanism, which restrains the sender's transmission when the network becomes congested.

There are two broad approaches to congestion control:

  • End-to-end congestion control: the network layer provides no explicit support for transport-layer congestion control, so even when congestion occurs in the network, the end systems have to infer it by observing network behavior. TCP uses end-to-end congestion control: the IP layer gives the end systems no feedback about congestion. How, then, does TCP infer it? If a timeout occurs or three duplicate ACKs arrive, TCP treats the network as congested and shrinks its window size to back off.

  • Network-assisted congestion control: routers provide feedback to the sender about the congestion state of the network. This feedback can be as little as a single bit indicating congestion on a link.

The figure below shows the two congestion control methods

TCP Congestion control

If you have read this far, I will assume for now that you understand the foundation of TCP reliability: sequence numbers and acknowledgment numbers. Beyond that, another pillar of TCP reliability is TCP congestion control.

The approach TCP takes is to have each sender limit its sending rate according to the level of congestion it perceives in the network. If a TCP sender perceives little or no congestion, it increases its sending rate; if it perceives congestion along the path, it reduces its sending rate.

But this approach raises three questions:

  1. How does a TCP sender limit the rate at which it sends segments into its connection?

  2. How does a TCP sender perceive network congestion?

  3. When the sender perceives end-to-end congestion, what algorithm does it use to change its sending rate?

Let's start with the first question: how does a TCP sender limit the rate at which it sends segments into its connection?

We know that a TCP connection consists of a receive buffer, a send buffer, and variables (LastByteRead, rwnd, and so on). The sender's congestion control mechanism tracks one more variable, the congestion window (congestion window), written cwnd, which limits the amount of data the sender can push into the network before receiving an ACK. The receive window (rwnd), by contrast, tells the sender how much data the receiver can still accept.

Roughly speaking, the amount of data the sender has outstanding and unacknowledged must not exceed the minimum of cwnd and rwnd, that is:

LastByteSent - LastByteAcked <= min(cwnd,rwnd)

Since each packet's round trip takes RTT, and assuming the receiver has enough buffer space to receive data, we can ignore rwnd and focus only on cwnd. The sender's sending rate is then roughly cwnd/RTT bytes per second. By adjusting cwnd, the sender can adjust the rate at which it sends data into the connection.
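
As a tiny numerical illustration of these two relations, the snippet below plugs in some arbitrary example values:

```python
MSS = 1460          # bytes; a common maximum segment size
cwnd = 10 * MSS     # congestion window chosen by the sender
rwnd = 6 * MSS      # receive window advertised by the receiver
rtt = 0.2           # round-trip time in seconds (example value)

# Unacknowledged data in flight may not exceed min(cwnd, rwnd).
effective_window = min(cwnd, rwnd)

# Ignoring rwnd (assuming a large receive buffer), the sending rate is
# roughly cwnd / RTT bytes per second.
approx_rate = cwnd / rtt

print(f"effective window: {effective_window} bytes")
print(f"approximate rate: {approx_rate / 1000:.1f} kB/s")
```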

How does a TCP sender perceive network congestion?

We discussed this above: TCP infers congestion from a timeout or from three duplicate ACKs.

When the sender perceives end-to-end congestion, what algorithm does it use to change its sending rate?

This question is more involved, so let me take it step by step. In general, TCP follows these guiding principles:

  • If a segment is lost in transit, that signals network congestion, and the TCP sender's rate should be reduced.

  • An acknowledgment indicates that the network is delivering the sender's segments to the receiver, so when an acknowledgment arrives for a previously unacknowledged segment, the sender's rate can be increased. Why? Because a previously unacknowledged segment reaching the receiver means the network is not congested and can deliver data; the sender's congestion window can therefore grow, and the sending rate with it.

  • Bandwidth probing. TCP raises its transmission rate as ACKs keep arriving and cuts it when a loss event occurs. So, to discover the rate at which congestion sets in, the TCP sender keeps increasing its rate, backs off after a loss, and then starts probing again to see whether the point at which congestion begins has changed.

Having covered the ideas behind TCP congestion control, we can now turn to the TCP congestion control algorithm (TCP congestion control algorithm) itself. It consists of three main parts: slow start, congestion avoidance, and fast recovery. Let's look at each in turn.

Slow start

When a TCP connection is first established, cwnd is initialized to a small value of one MSS. This makes the initial sending rate roughly MSS/RTT bytes per second. For example, to transmit 1000 bytes with an RTT of 200 ms, the initial sending rate is only about 40 kbit/s. The available bandwidth is usually far greater than MSS/RTT, so TCP wants to find the best sending rate quickly, and it does so with slow start (slow-start). In slow-start mode, cwnd is initialized to 1 MSS and increases by one MSS for every segment that is acknowledged: cwnd becomes 2 MSS; once those two segments are acknowledged, each adds another MSS and cwnd becomes 4 MSS, and so on. In effect, cwnd doubles with every successful round, as shown in the figure below.

The sending rate cannot keep growing forever; the growth has to end at some point. So when does it end? Slow start typically ends the growth of the sending rate in one of the following ways (a small simulation follows the list):

  • If packet loss occurs during slow start, TCP sets the sender's cwnd back to 1 and restarts the slow-start process. This introduces the concept of ssthresh (slow start threshold), whose value is set to half of the cwnd value at which the loss occurred; in other words, when congestion is detected, ssthresh is half the window value.

  • The second way is tied directly to the value of ssthresh. Because ssthresh is half of the window value at which congestion was last detected, doubling cwnd past ssthresh is likely to cause loss again, so the sensible stopping point is when cwnd reaches ssthresh. At that point TCP ends slow start and switches to congestion avoidance mode.

  • The last way slow start ends is when three duplicate ACKs are detected: TCP performs a fast retransmit and enters the fast recovery state.
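
Here is a minimal sketch of how cwnd grows during slow start until it reaches ssthresh, assuming no losses along the way; the threshold value is arbitrary.

```python
MSS = 1           # measure cwnd in units of MSS for readability
ssthresh = 16     # example slow start threshold, in MSS

cwnd = 1 * MSS
rtt = 0
# In slow start, cwnd grows by 1 MSS per acknowledged segment,
# which doubles cwnd once per round-trip time.
while cwnd < ssthresh:
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")
    cwnd *= 2
    rtt += 1

print(f"RTT {rtt}: cwnd = {cwnd} MSS -> reached ssthresh, "
      "switch to congestion avoidance")
```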

Congestion avoidance

After TCP enters the congestion avoidance state, cwnd equals half of the value it had when congestion occurred, that is, the ssthresh value. So cwnd can no longer double every round; instead, a relatively conservative approach is used, in which cwnd grows by only one MSS per round trip. For example, even if acknowledgments for 10 segments arrive within a round, cwnd increases by just one MSS. This is a linear growth pattern, and its growth also has to stop at some point, in the same ways as slow start: if a packet loss (timeout) occurs, cwnd is reset to 1 MSS and ssthresh is set to half of cwnd; receiving 3 duplicate ACKs also stops the one-MSS-per-round growth. In the latter case, TCP halves the value of cwnd (adding 3 MSS, one for each duplicate ACK received), records ssthresh as half of the cwnd value, and enters the fast recovery state.

Fast recovery

In fast recovery, for every duplicate ACK received for the missing segment that caused TCP to enter the fast recovery state, cwnd is increased by one MSS. When the ACK for the missing segment finally arrives, TCP lowers cwnd and goes back into the congestion avoidance state. If a timeout occurs while in fast recovery, TCP moves to the slow start state: cwnd is set to 1 MSS and ssthresh is set to half of the cwnd value.
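
Putting the three phases together, the sketch below simulates how cwnd (measured in MSS) might evolve round trip by round trip, reacting to two kinds of loss events: a timeout and three duplicate ACKs. It is a simplified model of the classic behavior, not a faithful TCP implementation, and the event trace is invented.

```python
def simulate(events, ssthresh=16):
    """Simplified evolution of cwnd (in MSS) across slow start,
    congestion avoidance, and the reaction to loss events."""
    cwnd, trace = 1, []
    for event in events:                 # one entry per round-trip time
        trace.append(cwnd)
        if event == "timeout":
            ssthresh = max(cwnd // 2, 1)  # remember half the window
            cwnd = 1                      # back to slow start
        elif event == "3dupack":
            ssthresh = max(cwnd // 2, 1)
            cwnd = ssthresh + 3           # fast retransmit / fast recovery
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: double per RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 MSS per RTT
    return trace

# An invented trace: grow, hit 3 duplicate ACKs, grow linearly, then a timeout.
events = ["ack"] * 6 + ["3dupack"] + ["ack"] * 4 + ["timeout"] + ["ack"] * 3
print(simulate(events))
```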

Copyright notice
This article was written by [Senior brother Wu, programmer]. Please include a link to the original when reposting. Thanks.
https://chowdera.com/2020/12/20201207160347936x.html