FreeSWITCH video conferencing: a "standard" solution
2020-11-06 22:27:49 (rpandora)
This article is based on a talk given online at LiveVideoStack by Du JinFang, founder of the FreeSWITCH Chinese community. It describes in detail how FreeSWITCH, as an open-source video conferencing solution, builds on openness to interconnect with all kinds of unmodifiable "standard" video conferencing endpoints, WebRTC browsers, WeChat mini programs and more, and the challenges that involves.
By Du JinFang
Edited by LiveVideoStack
The "standard" solution we are talking about does not mean the solution itself is a standard. Rather, in a video conference, FreeSWITCH acts as the server and faces many different kinds of clients and hardware endpoints. Because they speak various standard protocols that we cannot modify, we call them standard clients. FreeSWITCH's "standard" video conferencing solution is a solution for these unmodifiable standard clients.
Types of video conferencing
Video conferencing can be roughly divided into three categories. The first is traditional video conferencing, which is "standard" because the devices need to interoperate. Early video conferencing generally used the H.323 protocol, and even today we still run into H.323 equipment, but most of it has since been replaced by devices speaking SIP. SIP is a text-based protocol and is more flexible overall.
In recent years cloud video conferencing has emerged, and this year could be considered the first year of cloud conferencing: because of the pandemic, people have started using video conferencing far more. Examples include Zoom, Tencent Meeting, and Xiaoyu Yilian. Tencent Meeting reportedly brought 100,000 servers online within a week as an emergency expansion. That would have been impossible in the traditional video conferencing era; only in the age of cloud computing is such rapid scaling possible, which illustrates its advantages. As far as I know, these cloud conferencing vendors basically all use proprietary protocols, the advantage being that they can optimize without constraint. Proprietary protocols make interoperability difficult, however, so to interconnect with other devices these vendors also provide SDKs.
In open-source video conferencing there are FreeSWITCH, Jitsi, Kurento, Janus, Medooze and others. These projects have been around for many years, and most of them now support WebRTC. Some are primarily WebRTC-oriented, such as Kurento and Janus; Janus and Medooze initially supported SIP, though I have not followed them closely; Jitsi's WebRTC support is very good.
There may be some misunderstanding about FreeSWITCH. FreeSWITCH was in fact first used for audio communication, that is, as a PBX, the stored-program-controlled exchange used inside an enterprise. But FreeSWITCH's video conferencing is actually also very capable. Because open-source video conferencing is open source and open, with open APIs, it leans toward open protocols such as SIP.
WebRTC is quite popular at the moment; virtually all video conferencing products support it, so you can make calls from a browser. WebRTC is a media protocol: it specifies no signaling, and there is no standard at the signaling layer, which leaves a lot of flexibility in implementation. FreeSWITCH implements two protocols at the signaling layer. One is SIP, carried over WebSocket, since WebSocket is the only way to get bidirectional communication in a browser. The other is Verto, a proprietary protocol we implemented ourselves.
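As a rough illustration of what signaling over WebSocket looks like, here is a sketch of a Verto-style call setup message. Verto carries JSON-RPC 2.0 over the WebSocket; the method and parameter names below follow its general shape but should be checked against the mod_verto documentation, and the call ID and destination are made-up values.

```python
import json

def verto_invite(call_id, dest, sdp):
    # Build a Verto-style JSON-RPC 2.0 invite. The field names here are
    # a sketch of the protocol's shape, not an authoritative reference.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "verto.invite",
        "params": {
            "sdp": sdp,                      # WebRTC offer created by the browser
            "dialogParams": {
                "callID": call_id,           # client-generated ID for this call
                "destination_number": dest,  # conference extension to dial
            },
        },
    })

msg = verto_invite("call-001", "3000", "v=0\r\n...")
```

The browser would send this text frame over the WebSocket and wait for a JSON-RPC response carrying the answer SDP.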
As we use more and more video conferencing software, a problem appears: there are more and more conferencing clients on our phones and computers.
We would like to solve this problem through interconnection. We hope all endpoints can interoperate. Should people really be unable to meet together just because they happen to use different conferencing clients?
The ideal is appealing, but the reality is still very hard to achieve.
In fact the obstacle is mostly business, not technology; nobody chooses to do this. Of course, from a technical point of view, using a proprietary protocol end to end, across servers and terminals, allows better optimization, better security and so on. All in all, there is a long way to go before true interconnection.
It is a long road, but we have always wanted to travel it. FreeSWITCH already connects to many different clients, and we hope to interconnect with even more devices.
A brief history of FreeSWITCH
Speaking of FreeSWITCH, here is a brief look at its history.
FreeSWITCH released its first version in 2006. FreeSWITCH started out as a PBX, the telephone switch used inside an enterprise, for making phone calls.
In 2008 came version 1.0, the "Phoenix" release: like the phoenix rising from the ashes, it emerged from countless crashes and rounds of optimization, hence the name.
Version 1.2 was released in 2012 (FreeSWITCH version numbers are all even). 1.2 was very stable and the audio side was very mature; for telephony there was basically nothing left to do. But with the appearance of WebRTC, FreeSWITCH decided to support it.
Version 1.4, released in 2014, began supporting WebRTC. Early WebRTC was not very stable, but its standard changed very quickly, so FreeSWITCH has kept changing along with it.
I myself started working with FreeSWITCH back in 2008, mainly on online education. Early online education had no video, only audio; teachers taught English conversation over audio. Later I did some other projects that required video, so I added video features, including a video MCU. Because FreeSWITCH's video support was not very mature in the early days, and I was not fully satisfied with several projects, we later open-sourced our video work.
From 2014 to 2015 we spent a full year merging what we had built into FreeSWITCH's main branch: standardizing our code and porting it to Windows, Linux, Unix and other platforms so FreeSWITCH would compile and run everywhere, and released version 1.6. At that point FreeSWITCH began to support video calls and video conferencing. From 2017 through 2020 we then spent years fixing bugs, making FreeSWITCH better and better.
On the open-source video conferencing branch we have also done a great deal; some of it was merged into 1.8, some into 1.10. We still maintain a branch of our own that has not been merged, because merging one's own code into the open branch takes a lot of labor, so that will be completed gradually.
Standard protocols supported by FreeSWITCH
Speaking of standard protocols, FreeSWITCH supports those shown in the figure above.
First, FreeSWITCH supports SIP signaling, the standard protocol for audio and video calls, and with it a wide variety of clients and terminals. Much of the conferencing equipment on the market today supports SIP, so interworking is direct.
Second, we built an H.323 module. FreeSWITCH already had two H.323 modules, but neither supported video. Because customers needed it, I wrote a module with video support, so FreeSWITCH can interwork over H.323 as well. With the growth of the mobile Internet, most mobile apps today also support SIP signaling and can interwork directly.
With the development of WebRTC, many people began porting it to mobile clients. The advantage of WebRTC is that you do not have to write your own media layer: the open-sourcing of WebRTC brought many features for free, such as the jitter buffer, echo cancellation and noise suppression, so there is no need to write them yourself. It may not match the proprietary vendors, who can push optimization further, but for an open-source project WebRTC is good enough. Because WebRTC provides only the media layer and no signaling layer, everyone started putting all kinds of signaling on top of it.
RTMP is worth mentioning: my first video work was actually over RTMP. Although basically nobody uses Flash anymore, RTMP remains a very good protocol and is widely used today for live streaming and stream pushing.
Three ways to build video conferencing: Mesh, MCU, SFU
In short, there are three ways to build video conferencing: Mesh, MCU and SFU.
Mesh is a full-mesh structure of simple point-to-point connections and needs no server, but it is rarely used because it is hard to control.
The two mainstream approaches today are MCU and SFU.
MCU is mainstream partly because the earliest video conferencing equipment was basically all MCU-based. An MCU puts a server in the middle; every video client talks directly to the server, and sending and receiving both go through it. The server combines all the streams into one, that is, it composites the video into a single picture. Audio is mixed as well, but for simplicity we will talk only about video. The server fuses the video streams into one picture and distributes it to every terminal, so all terminals see the same picture. This is the MCU, the Multipoint Control Unit.
With the appearance of WebRTC, many people began using the SFU (Selective Forwarding Unit). An SFU does not decode and does not composite. Where an MCU must stitch and fuse pictures, encoding and decoding video, an SFU only receives the audio and video from each client and selectively forwards them to different people. The advantage is that it uses little CPU; the disadvantage is that it wastes bandwidth. In a five-person call, for example, each person sends just one stream; the forwarding unit copies the stream and distributes it, so everyone receives a lot of traffic, and the load on the terminal is higher, because each terminal must decode the four streams from the other participants. The advantage is that each terminal can freely arrange how the received streams are displayed, so everyone can see a different picture.
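The CPU-versus-bandwidth trade-off between the two architectures can be made concrete with a little bookkeeping. This is a simplified model of my own for illustration, counting streams per participant and the streams the server must forward:

```python
def per_client_streams(n, mode):
    """Streams each participant sends/receives in an n-way call.

    Rough model: an MCU mixes everything into one composite stream,
    while an SFU forwards every other participant's stream individually.
    """
    if mode == "mcu":
        return {"up": 1, "down": 1, "decodes": 1}
    if mode == "sfu":
        return {"up": 1, "down": n - 1, "decodes": n - 1}
    raise ValueError(mode)

def sfu_forwarded_streams(n):
    # An SFU copies each of the n uplinks to the other n - 1 participants.
    return n * (n - 1)
```

For the five-person call in the text, each SFU client decodes four downstream streams while an MCU client decodes just one, and the SFU relays twenty stream copies in total.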
The figure above shows the basic principle of an MCU. Four cameras each upload their own image to the MCU; the MCU scales and stitches them into one picture and sends it out, and every terminal displays the same picture. This is how FreeSWITCH works internally: layers such as 1, 2, 3, 4 plus a canvas. Each incoming video is decoded, scaled, and placed on the canvas to form one stream, which is then distributed.
There are many canvas layout styles. Besides standard grids such as 3×3, 4×4 and 8×8, FreeSWITCH also supports a lecture layout, with a large picture for the speaker and small pictures for the audience, as well as many other arrangements.
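The geometry behind a grid layout is straightforward. The following sketch computes the tile rectangles for an n×n canvas in the spirit of the 3×3/4×4 layouts described above; real FreeSWITCH layouts are defined in configuration and also handle borders, aspect ratio and reserved slots, which this simplified version ignores.

```python
def grid_layout(canvas_w, canvas_h, n):
    """Tile rectangles (x, y, w, h) for an n x n grid,
    ordered left-to-right, top-to-bottom."""
    tile_w, tile_h = canvas_w // n, canvas_h // n
    return [(col * tile_w, row * tile_h, tile_w, tile_h)
            for row in range(n) for col in range(n)]

tiles = grid_layout(1920, 1080, 3)   # a 3x3 layout on a 1080p canvas
```

Each decoded participant stream is scaled to its tile size and blitted at its (x, y) offset on the canvas before the composite is encoded once and fanned out.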
We made one key optimization here, involving RTP. As you know, video streams are carried over RTP, and RTP has a companion protocol, RTCP, which controls it. RTCP includes a message called TMMBR (Temporary Maximum Media Stream Bit Rate Request). In a conference, the high-definition picture you see is typically 720p or 1080p, but in speaker mode the audience pictures are small: there is no need for an audience member to upload 1080p or 720p and waste one or two megabits of bandwidth. So FreeSWITCH sends a TMMBR message telling the client not to upload high-definition video. This reduces upload bandwidth, so viewers can join on low-bandwidth links, and it relieves both the bandwidth pressure on the server and the decoding load.
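On the wire, TMMBR does not carry a plain integer: RFC 5104 encodes the requested maximum bitrate as a 6-bit exponent and a 17-bit mantissa, so the rate is mantissa × 2^exponent bits per second. A minimal sketch of that encoding (ignoring the overhead field that the real message also carries):

```python
def encode_tmmbr(bps):
    """Encode a bitrate as the RFC 5104 exponent/mantissa pair.
    Shift right until the mantissa fits in 17 bits."""
    exp, mantissa = 0, bps
    while mantissa >= (1 << 17):
        mantissa >>= 1
        exp += 1
    return exp, mantissa

def decode_tmmbr(exp, mantissa):
    # The receiver reconstructs the (possibly rounded-down) bitrate.
    return mantissa << exp
```

Asking an audience member to cap uploads at 1 Mbps, for instance, encodes as exponent 3, mantissa 125000, which decodes back to exactly 1,000,000 bps.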
We have applied this in our video conferencing and the effect is very noticeable, but it only works if the terminal supports the protocol; WebRTC does. The standard is RFC 5104. Besides TMMBR, RFC 5104 and its companion specifications define other feedback messages, such as FIR, NACK and PLI, all related to keyframes and packet loss. FIR requests a keyframe: when decoding fails, mosaics appear, so you ask the other side to send a keyframe and refresh the picture. NACK reports packet loss, which involves buffering, the jitter buffer I mentioned earlier. There is a buffer at each end of the link: whatever the sender transmits, it also keeps in its buffer, and when the receiver detects a lost packet it asks the sender to retransmit. If the sender still has the packet in its buffer, it can retransmit it from there; that is NACK. PLI is Picture Loss Indication, telling the sender that this end has lost a picture. These mechanisms safeguard audio and video quality, because most video is carried over UDP, which is unreliable, so packet loss must be compensated for.
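The sender-side half of NACK described above amounts to keeping a bounded history of sent packets. A minimal sketch, with a made-up capacity; a real implementation would size the buffer by time or bytes and track RTP sequence-number wraparound:

```python
from collections import OrderedDict

class RetransmitBuffer:
    """Recently sent RTP packets, kept so a NACK can be answered by
    retransmission instead of escalating to a keyframe."""
    def __init__(self, capacity=512):
        self.capacity = capacity
        self.packets = OrderedDict()   # seq -> payload

    def sent(self, seq, payload):
        self.packets[seq] = payload
        if len(self.packets) > self.capacity:
            self.packets.popitem(last=False)   # evict the oldest packet

    def on_nack(self, seqs):
        """Return the packets still available for resending; sequence
        numbers already evicted fall through (and would in practice
        trigger a keyframe request instead)."""
        return {s: self.packets[s] for s in seqs if s in self.packets}
```

If the NACKed packet has already been evicted, resending is impossible, which is exactly the case where the receiver must fall back to FIR/PLI.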
We made some other optimizations. In a large conference with dozens or even hundreds of participants, the decoding load on the server becomes heavy. Showing hundreds of people on one screen at once is also pointless, because the faces are too small to read. So when the number of participants is large, we show only a few or a few dozen; the rest are not shown, there is no need to decode the streams that are not shown, and the server's processing power is not wasted. The audience should still be shown in turn, though, so we also built a multi-person rotation: show these 10 people now, another 10 next time, with a timer rotating through so everyone gets a chance on stage. Because of keyframes, when a participant is about to be shown you must start decoding, and a keyframe will not necessarily arrive at exactly that moment, so decoding is switched on about two seconds in advance; and since a keyframe is also requested each time someone is shown, by the time they appear the keyframe has generally arrived, so there is no black picture. We also hit some pitfalls along the way, for example excessive NACK requests: when many terminals have bad networks they all come asking for retransmissions, and if we find NACK requests are excessive we simply send a keyframe and ignore the loss.
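The rotation-with-pre-decode idea can be sketched as follows. This is my own simplified model, not FreeSWITCH code: participants are split into fixed pages, and during any time slot the server must be decoding both the page on screen and the next page, switched on one slot early so keyframes have time to arrive.

```python
def rotation_pages(participants, page_size):
    """Split participants into pages that are shown in turn."""
    return [participants[i:i + page_size]
            for i in range(0, len(participants), page_size)]

def decode_set(pages, slot, lead=1):
    """Streams that must be decoding during `slot`: the page currently
    on screen plus the upcoming page, started `lead` slots early."""
    cur = pages[slot % len(pages)]
    nxt = pages[(slot + lead) % len(pages)]
    return set(cur) | set(nxt)
```

With 25 participants and pages of 10, only 15 to 20 streams are ever decoded at once instead of all 25, and the saving grows with conference size.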
We also limit the frequency of FIR requests. FIR is the keyframe request mentioned above, the keyframe refresh. When terminals on bad networks request keyframes, ten terminals might each ask for one, and ten keyframes within a second would either inflate the bandwidth or make the video very blurry, since you cannot afford that many keyframes. So we added some algorithms, fairly basic ones, to limit keyframe requests. For example, set a window of two or three seconds within which only two keyframes are generated: the first request and the last request each produce a keyframe, and the rest are ignored. This protects the bandwidth. The requester may see a brief frozen or broken picture, but it clears within a couple of seconds, which is tolerable given that its network is poor anyway; terminals on good networks are not affected.
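One way to get the "first and last request in a window" behavior is the limiter sketched below. This is an illustrative reconstruction of the idea, not FreeSWITCH's actual algorithm: the first request in a fresh window fires immediately, later requests are merged into a single deferred keyframe at the end of the window.

```python
class FIRLimiter:
    """Collapse bursts of keyframe requests into at most two
    keyframes per window (assumed semantics, for illustration)."""
    def __init__(self, window=2.0):
        self.window = window
        self.window_start = None
        self.pending = False

    def request(self, now):
        """A terminal asked for a keyframe; return True to generate one now."""
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start = now
            self.pending = False
            return True          # first request opens a new window
        self.pending = True       # merged into one deferred keyframe
        return False

    def tick(self, now):
        """Call periodically; returns True when a deferred request fires."""
        if self.pending and now - self.window_start >= self.window:
            self.window_start = now
            self.pending = False
            return True
        return False
```

Ten terminals requesting within one second thus cost at most two keyframes instead of ten.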
In addition, different codecs mean different encoders, and FreeSWITCH supports several. For historical reasons, Chrome supported VP8 while Apple's browser supported only H.264, so they could not interoperate, and in the beginning WebRTC endpoints could not talk to each other. In recent years Chrome has added H.264 and other browsers have added VP8, but FreeSWITCH has supported multiple codecs from the very start: within the same conference, participants using different codecs can each be served by a different encoder.
Terminals differ, too. Proprietary systems with proprietary protocols do not have this trouble, since all the terminals are their own and they can use whatever codec they like; but in the open-source world you face all kinds of clients. In military projects, for example, much equipment still speaks H.263 and cannot be replaced, so we can only adapt to H.263. H.263 does not support 720p; it supports only CIF resolution, which is neither 16:9 nor 4:3, so it needs a separate encoder. And so on: the more terminal models you want to support, the more work you must do.
We made some early optimizations by lowering resolution and frame rate. When we found a terminal using Flash that we could not otherwise handle, we halved the resolution and dropped the frame rate from 30 fps to 15 fps. There is now a better technique, SVC, which puts multiple resolutions and multiple frame rates into a single encoded stream, at some CPU cost, and then lets layers be distributed selectively. FreeSWITCH has a module called mod_openh264 that uses Cisco's open-source codec, which supports SVC, but we have not used that capability yet; we only use its plain encode/decode inside FreeSWITCH. The alternative is libx264, which comes in via FFmpeg.
In video conferencing we often run into dual streams. Traditional H.323 video conferencing equipment generally supports a protocol called H.239, which allows two streams within one call: one stream for the speaker's video, the other for the slides. Both streams go up to the MCU and everyone can see them; the far end can even drive two TVs or projectors, one showing the speaker's video and the other showing the slides.
SIP devices also support dual streams, via BFCP, the Binary Floor Control Protocol, which adds floor control for the presenter. For H.323 we did not implement the control part, just simple interworking, since there were few devices to test against; it works. Our BFCP support for SIP dual streams is not open-sourced yet. Our approach is to intercept the SDP inside the SIP module: since the SDP carries two video streams, we intercept it and generate a second, "fake" call. When FreeSWITCH receives one call and sees that it is a dual-stream call, it turns it into two calls and sends both into the conference, so the conference code needs no changes. To the conference there are two calls; to FreeSWITCH's far end it is still one call. That is how we support dual streams.
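The "fake call" trick above boils down to splitting one two-video SDP into two one-video SDPs. A toy sketch of that split (real SDP handling would also rewrite origin/session attributes and preserve bundle groups, which this ignores):

```python
def split_dual_video(sdp):
    """Split an SDP with two m=video sections into two SDPs:
    the original call keeps audio + the first video, and the second
    video section becomes a separate 'fake' call."""
    head, *media = sdp.split("\r\nm=")
    media = ["m=" + m for m in media]
    videos = [m for m in media if m.startswith("m=video")]
    others = [m for m in media if not m.startswith("m=video")]
    first = "\r\n".join([head] + others + videos[:1])
    second = "\r\n".join([head] + videos[1:])
    return first, second
```

Each resulting SDP then drives an ordinary single-video call into the conference, which is why the conference code needs no changes.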
In WebRTC, dual streams go by the name Simulcast, which also allows multiple streams within one SDP. Because early Simulcast was unstable and had many problems, we have only run experiments with it so far and have no detailed code. Instead we do something simple and crude: make two calls directly from the browser, one carrying the speaker's video and the other carrying the shared desktop. When you start a WebRTC call in the browser you can choose whether the video source is a camera, the screen, or a shared application, which gives you the two streams. To FreeSWITCH it is the same as before: still two streams, entering the conference as two calls.
As for the SFU mentioned above: FreeSWITCH can implement an SFU, and we have experimented quite a bit, but an SFU is more complicated and has to manage many calls. FreeSWITCH's core will eventually support multiple streams, but implementing an SFU is not on the agenda yet. Another reason is that there are already many mature SFU products on the market, such as Jitsi, Kurento and Janus, all of which have SFU implementations; if needed, they can be paired with FreeSWITCH.
When a video conference gets large you need cascading, because one server cannot carry it. In lab tests, one FreeSWITCH server can carry 400 streams of 720p video, on a 32- or 64-core machine depending on the scenario. In a conference, of course, not everyone is shown: only the people on screen are decoded, and with the no-decode optimization above, the server encodes and sends only one composite stream, so the CPU pressure is not great. We achieved 400 streams in the lab, but what we promised the customer in production was 100 endpoints per server, and it should be able to reach 200. The customer's scenario needed 800 endpoints, so we cascade.
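The capacity arithmetic is simple. Assuming a topology of leaf servers that each mix their local endpoints plus one master that joins the leaves' composite streams (an assumed layout for illustration; the actual deployment may differ):

```python
import math

def cascade_plan(endpoints, per_server):
    """Leaf servers needed for a cascaded conference, plus one master
    that interconnects the leaves (assumed topology)."""
    leaves = math.ceil(endpoints / per_server)
    return {"leaves": leaves, "master": 1, "total": leaves + 1}
```

For 800 endpoints at the promised 100 per server, that is 8 leaves behind one master.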
Cascading has a problem we call "looking at each other": when two MCUs cascade, say with participants 1, 2 and 3 on one MCU and 5, 6 and 7 on the other, each MCU's composite picture enters the other's picture, which carries your own picture back, so an infinite tunnel appears in the middle of the image. The animation below shows it more intuitively.
Anyone who has shared a desktop will recognize the interface above.
You can find many similar cases by searching Google: the video we receive is sent out again as a video source, which is the "looking at each other" effect. It happens when two MCUs cascade. So what is the solution?
FreeSWITCH implements a feature called multiple canvases. In the scenario above, when someone starts presenting, their site becomes the main venue and goes on canvas 1. All participants need to see the main venue; meanwhile the presenter needs to watch the other participants, so all the participant pictures are placed on canvas 2. People in the main venue can then see the participants at the branch venues.
FreeSWITCH even has a Super Canvas, mainly for monitoring: no matter how many canvases a conference has, you can see them all at a glance, so a conference administrator can easily watch the whole meeting. In this way we solved the "looking at each other" problem: the two MCUs can cascade, with, say, the upstream MCU always placed on canvas 2 and the downstream MCU always on canvas 1, so they can be told apart.
The figure above shows a cascade: one Master server and many FreeSWITCH slave servers. The downstream picture always sits on canvas 1 and the upstream picture always on canvas 2, and vice versa, so you can switch at any time, inspect the uplink, and all participants can still see the main venue. Sometimes, so that participants do not feel alone seeing only the main venue, we also push the canvas-2 picture downstream so everyone can see the other participants. Participants can even interact during the meeting, for example by raising a hand to speak, at which point their picture can replace the main venue. That is the basic principle of cascading; the actual control logic is more complicated.
It is worth noting that FreeSWITCH differs from some of today's conferencing products, which "support" a meeting of hundreds simply by pushing the speaker's picture to a streaming server, while the speaker cannot see the participants. That is really a broadcast mode: participants receive a one-way stream, downstream only.
In some live-streaming scenarios, interaction means co-streaming with the host (lianmai). Because the audience is huge, say a broadcast with 100,000 viewers, when the host wants to interact with a viewer, that viewer is promoted to the host tier so their picture can be broadcast out. That scenario is basically the same as video conferencing, but a FreeSWITCH conference is a bidirectional stream from start to finish.
This is also how FreeSWITCH cascades with other MCUs. Another organization already has its own MCU; FreeSWITCH simply has more canvases, so upstream and downstream can be separated onto different canvases. In the figure, the two clients on the left, 1 and 2, are FreeSWITCH users, while the two mobile clients on the right, 3 and 4, are users of another MCU; after going upstream, the other MCU composites the picture and sends it out.
Interconnecting with other conferencing systems
Besides interconnecting and cascading with other MCUs, FreeSWITCH has some further integrations and optimizations.
Because FreeSWITCH is open source, interconnecting with another video conferencing system means integrating that system's SDK. For example, we can connect to Tencent Meeting (audio works now; video is not hooked up yet). TRTC provides a multi-platform SDK; we wrote a module wrapping it into FreeSWITCH, so FreeSWITCH can talk to it. A PSTN caller can dial into FreeSWITCH and be bridged straight into a Tencent Meeting, with FreeSWITCH acting as a gateway. PSTN, of course, supports only audio, not video.
The other is Agora's SDK, which we integrated many years ago; both audio and video work.
Taking Agora as the example: FreeSWITCH has a module called mod_agrtc that interworks with Agora. Like WebRTC, Agora provides only a media-layer SDK with no signaling layer, so we had to implement a signaling service ourselves. Most companies build their own app with registration, authentication and so on, and already have a signaling service, so they only need to call FreeSWITCH's API to control calls, recording and the rest, and interworking is done. But some customers do not want to write their own signaling service, or find theirs hard to extend, so we also wrote a WebRTC-based signaling service: the mobile side integrates Agora's SDK, FreeSWITCH integrates the Linux SDK, and the two interconnect.
There is one catch: both the Agora and TRTC SDKs were originally built for clients, so each SDK instance supports only a single call. We therefore wrote another service using IPC: for several concurrent calls, FreeSWITCH spawns several processes, one per call, and interworks with them.
The WeChat mini program solution
Before WeChat mini programs, a word about Flash. FreeSWITCH has supported Flash for a long time through a module called mod_rtmp, which implements the RTMP protocol. Since WeChat mini programs also use RTMP, when we first worked on them we hoped to extend mod_rtmp to support them, but it turned out not to work: with Flash, a single socket carried both signaling and audio/video, but WeChat mini programs cannot do that.
A WeChat mini program provides two components, Live Publisher and Live Player, embodying the push-stream and pull-stream concepts; RTMP streams can be both sent and received. So we introduced the SRS service (SRS is also open-source software, mainly used for live-streaming platforms) in a module called mod_weixin. FreeSWITCH pushes its stream to SRS and the mini program pulls it; conversely, the mini program pushes to SRS and FreeSWITCH pulls. Together that makes two-way communication.
Because RTMP carries no signaling, we implemented WebSocket signaling: the mini program still connects to FreeSWITCH over WebSocket, which controls call setup and teardown.
FreeSWITCH also supports ASR and TTS, not natively of course, but by calling third-party SDKs. This enables voice control, even automatic meeting minutes, automatic translation and so on.
Running video conferencing on the public Internet inevitably involves NAT traversal. WebRTC handles NAT well, for example with ICE. FreeSWITCH currently supports ICE, and when ICE cannot get through, a STUN/TURN server can be used.
There are also deployments, such as banks, that for special reasons cannot open many ports, whereas each FreeSWITCH call needs multiple RTP ports. In that case a VPN can be used to reach the public server, requiring only a single UDP port.
For large-scale video conferencing we used iptables. For example, with servers in Beijing and Shanghai, you can add iptables rules on the Shanghai server that forward all traffic to the Beijing server, so clients can connect to whichever is nearest. For more flexible setups, an application must be designed to drive how these iptables forwarding rules are created.
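In practice this kind of relay is a NAT rule pair: DNAT on the way in, masquerade on the way out. The sketch below generates the command strings from an application, as the text suggests; the IP addresses are placeholders from the documentation ranges, and the UDP port range must match the RTP port range configured in FreeSWITCH.

```python
def forward_rules(listen_ip, target_ip, port_range="16384:32768"):
    """Generate iptables commands that forward UDP RTP traffic hitting
    one relay server on to the real media server (hypothetical addresses)."""
    return [
        # Rewrite the destination of incoming RTP to the real server.
        f"iptables -t nat -A PREROUTING -d {listen_ip} -p udp "
        f"--dport {port_range} -j DNAT --to-destination {target_ip}",
        # Source-NAT the forwarded packets so replies return via the relay.
        f"iptables -t nat -A POSTROUTING -d {target_ip} -p udp "
        f"--dport {port_range} -j MASQUERADE",
    ]

rules = forward_rules("203.0.113.10", "198.51.100.20")
```

An orchestration program would run these via a privileged shell on the nearest server, and delete them when the route changes.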
Beyond that, we also tried Tencent Cloud's CLB (cloud load balancing), which provides a load-balanced IP directly and forwards traffic to backend services.
Next we plan to try Alibaba's LVS.
With the techniques above, plus the conference cascading described earlier, fairly large-scale video conferencing can already be achieved.
The next step is 4G/5G, and we have in fact experimented a lot. Direct 4G video calls from phones may still be rare, but some industry call-center systems have begun using them: customers can place a 4G video call straight into the call center to exchange information. With the future development of 5G, communication technology and capability will only grow stronger.
The FreeSWITCH community website is https://www.freeswitch.org.cn. You are welcome to follow along and join the discussion; we will keep open-sourcing to the end. In addition, considering that FreeSWITCH alone may be too narrow in scope, we created an RTS community, Real-Time Solutions, where some of our FreeSWITCH extension code will be open-sourced. The ultimate goal is to upstream it directly into FreeSWITCH, but since that requires all kinds of code review, acceptance may be slow, so the code will live in this repository first, waiting for the chance to go upstream. If you have the means and the ability, you are welcome to join us.