
The fast track of an architect

2020-12-07 16:09:36 itread01

A senior architect at Qiniu once said: > Nginx + business logic layer + database + cache layer + message queue — this model can cover almost all business scenarios. Many years have passed, and that remark has, more or less, shaped my technical choices ever since; it is why I spent so much time learning cache-related technology. I started using caches about ten years ago, moving from local caches, to distributed caches, and then to multi-level caches, stepping into plenty of pits along the way. Below, drawing on my own experience, I'll talk about what I have learned about caching.
# 01 Local cache

**1. Page-level cache**

I started using caches very early. Around 2010 I used OSCache, mainly to implement page-level caching in JSP pages. The pseudocode looked roughly like this (reconstructed from memory; the tag attributes are approximate):

```
<cache:cache key="foobar" scope="session">
    <!-- expensive JSP fragment to be cached -->
</cache:cache>
```

The JSP code in the middle is cached under key="foobar" in the session scope, so other pages can share this cache. In the era of JSP — an ancient technology by now — introducing OSCache made pages load noticeably faster. With the separation of front end and back end and the rise of distributed caching, server-side page-level caching is rarely used anymore. On the front end, however, page-level caching is still very popular.
**2. Object caching**

Around 2011, the founder of OSChina (known by the handle 红薯) wrote many articles about caching. He mentioned that OSChina served millions of dynamic requests per day on a single 4-core 8GB server, thanks to the Ehcache framework. I was fascinated: if such a simple framework could deliver that kind of single-machine performance, I wanted to try it. So, referring to his sample code, I introduced Ehcache for the first time in our company's balance-withdrawal service. The logic was simple: cache the orders that had already succeeded or failed, so the next lookup would not need to query the Alipay service again. The pseudocode looked like this:

![](https://oscimg.oschina.net/oscnet/976ceb93-8267-40a7-bc9b-a32fe7081cbc.png)

After adding the cache, the effect of the optimization was obvious: the task's run time dropped from the original 40 minutes to 5~10 minutes. The example above is a typical case of object caching, the most common application scenario for local caches. Compared with page caching, it is finer-grained and more flexible, and it is often used to cache data that rarely changes — global configuration, orders whose status is already closed, and so on — to improve overall query speed.
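The Ehcache pseudocode above is only an image; as a minimal stdlib sketch of the same idea — a ConcurrentHashMap standing in for Ehcache, with hypothetical names — the "cache only terminal-state orders" logic looks like this:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Sketch of the withdrawal-order object cache. All names are illustrative. */
public class OrderCache {
    // Only orders in a terminal state (SUCCESS / FAILED) are cached,
    // because they can never change again.
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    /** queryRemote stands in for the call to the Alipay service. */
    public String getStatus(String orderId, Function<String, String> queryRemote) {
        String cached = cache.get(orderId);
        if (cached != null) {
            return cached;               // cache hit: skip the remote call
        }
        String status = queryRemote.apply(orderId);
        if ("SUCCESS".equals(status) || "FAILED".equals(status)) {
            cache.put(orderId, status);  // only terminal states are safe to cache
        }
        return status;
    }
}
```

Only terminal states are written to the cache, so stale data is impossible by construction — which is exactly what makes this scenario such an easy win.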
**3. Update strategy**

In 2018, a colleague and I built our own configuration center. To make client reads as fast as possible, the local cache used Guava. The overall architecture is shown below:

![](https://oscimg.oschina.net/oscnet/04f25e3f-99a8-4ba2-ab4e-4194e032df42.png)

How is the local cache updated? There are two mechanisms:

- The client starts a scheduled task that pulls data from the configuration center.
- When data changes in the configuration center, it is actively pushed to the client. Here I did not use WebSocket but the RocketMQ Remoting communication framework.

Later I read the source code of the Soul gateway. Its local cache update mechanism, shown in the figure below, supports three strategies:

![](https://oscimg.oschina.net/oscnet/b3848b94-adbf-483e-9c60-c905b3f20d61.png)

**▍ ZooKeeper watch mechanism**

When soul-admin starts, it writes the full data set to ZooKeeper; subsequent changes update the ZooKeeper nodes incrementally. Meanwhile, soul-web watches the configuration nodes, and as soon as any information changes, it updates its local cache.

**▍ WebSocket mechanism**

The WebSocket mechanism is somewhat similar to the ZooKeeper one. When the gateway first establishes a WebSocket connection with admin, admin pushes the full data set; afterwards, whenever the configuration data changes, the incremental data is actively pushed to soul-web over the WebSocket.

**▍ HTTP long-polling mechanism**

After an HTTP request reaches the server, it is not answered immediately; instead, the Servlet 3.0 asynchronous mechanism holds the response. When the configuration changes, the server takes the long-polling requests out of the queue one by one and tells each client which Group's information has changed; when the gateway receives the response, it requests that Group's configuration again. Did you notice a pattern?
- Pull mode is essential.
- Incremental push works much the same way in all three strategies.

Long polling is an interesting topic. The same pattern is used in RocketMQ's consumer model: it gets close to real time while reducing the pressure on the server.
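The long-polling flow described above can be sketched without the Servlet API. In this simplified, illustrative version, a blocked thread stands in for Servlet 3.0's AsyncContext, and all names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of the HTTP long-polling idea: a request "parks" until a
 * config change arrives or a timeout elapses, and the response only names
 * WHICH group changed — the client then pulls that group's configuration.
 */
public class LongPollRegistry {
    private final List<String> changedGroups = new ArrayList<>();

    /** Client-facing side: block up to timeoutMs waiting for a changed group. */
    public synchronized String poll(long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        try {
            while (changedGroups.isEmpty()) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) {
                    return null;        // timed out: client simply re-polls
                }
                wait(remaining);        // parked, no busy waiting
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
        return changedGroups.remove(0); // tell the client which group changed
    }

    /** Admin side: a config change wakes up the parked requests. */
    public synchronized void publishChange(String group) {
        changedGroups.add(group);
        notifyAll();
    }
}
```

This shows why the pattern is "close to real time": a parked request is answered the instant `publishChange` fires, yet the server holds no busy threads in between.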
# 02 Distributed cache

For distributed caching, memcached and Redis are probably the most common choices. Programmers are already very familiar with them, so I'll just share two cases here.

**1. Control object size and the read strategy**

In 2013 I worked for a lottery company, and our live-score module used a distributed cache. At the time we hit an online problem of frequent Young GC. Checking with the jstat tool, it turned out the young generation was filling up every two seconds. Further analysis located the cause: some keys' values were too large — about 300KB on average, the largest around 500KB. Values that large easily lead to frequent GC.

Having found the root cause, how to fix it? I had no clear idea. So I went to see how peer websites did the same thing, including 360 Lottery and Okooo.com. I found two things:

> 1. Their data format was very simple: only the necessary information was returned to the front end, part of it as plain arrays.
>
> 2. They used WebSocket: full data was pushed on entering the page, and incremental data was pushed on changes.

Back to my problem: what was the final solution? At the time, our live-score cache format was a JSON array, each element containing 20-odd key-value pairs. The JSON below lists only 4 attributes as an example:

```
[{
    "playId":"2399",
    "guestTeamName":"Mavericks",
    "hostTeamName":"Lakers",
    "europe":"123"
}]
```

This data structure is fine in general. But with 20-odd fields per element and many matches every day, it easily causes problems under high-concurrency requests.
Given the deadline and the risks, we adopted a conservative optimization plan:

1) Increase the young generation from the original 2G to 4G.

2) Change the cached data format from JSON objects to plain arrays, as shown below:

```
[["2399","Mavericks","Lakers","123"]]
```

After this change, the average cache entry size dropped from about 300KB to about 80KB, the YGC frequency fell noticeably, and page response became much faster. But after a while, CPU load would still spike momentarily. Evidently, although we had reduced the cache size, reading large objects was still a great drain on system resources, and Full GC remained frequent.

3) To solve the problem thoroughly, we adopted a more refined cache read strategy. We split the cache into two parts: the full data, and the incremental data (which is very small). The page pulls the full data on its first request; when a score changes, the incremental data is pushed via WebSocket.

After step 3 was done, page access was extremely fast, the servers used very few resources, and the optimization paid off handsomely.

This optimization taught me: although caching improves overall speed, in high-concurrency scenarios the size of cached objects still deserves attention — neglect it and accidents happen. We also need to control the read strategy sensibly, minimizing GC frequency and thus improving overall efficiency.
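To see why dropping the field names shrinks each entry so much, here is a rough, illustrative Java comparison of the two formats — the field names are repeated in every element of the object format but absent from the array format (the figures here are examples, not the real match data):

```java
/** Builds n-element payloads in the two formats discussed above. */
public class PayloadSize {
    /** JSON-object format: field names repeated per element. */
    static String objectFormat(int n) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < n; i++) {
            if (i > 0) sb.append(",");
            sb.append("{\"playId\":\"2399\",\"guestTeamName\":\"Mavericks\",")
              .append("\"hostTeamName\":\"Lakers\",\"europe\":\"123\"}");
        }
        return sb.append("]").toString();
    }

    /** Compact-array format: values only, position carries the meaning. */
    static String arrayFormat(int n) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < n; i++) {
            if (i > 0) sb.append(",");
            sb.append("[\"2399\",\"Mavericks\",\"Lakers\",\"123\"]");
        }
        return sb.append("]").toString();
    }
}
```

With the real 20-odd fields per element, the saving compounds further, which is consistent with the 300KB-to-80KB drop described above.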
**2. Paginated list queries**

How to cache lists is a skill I'm eager to share with you. This, too, I learned from OSChina back in 2012. I'll use "querying a list of blogs" as the example.

Let's start with scheme 1: cache each page's content as a whole. This scheme combines the page number and page size into the cache key, and the value is the list of blog entries. If the content of any blog changes, we must reload that page's cache, or delete the whole page cache. The cache granularity here is coarse: if blogs are updated frequently, the cache is invalidated constantly.

Now scheme 2: cache individual blogs only. The flow is as follows:

1) First query the current page's blog id list from the database, with SQL like:

```
select id from blogs limit 0,10
```

2) Batch-fetch the cached data for that id list from the cache, recording the blog ids that missed. If the missed id list is non-empty, query the database again and put the results into the cache, with SQL like:

```
select * from blogs where id in (noHitId1, noHitId2)
```

3) Save the blog objects that were not in the cache into the cache.

4) Return the list of blog objects.

In theory, if the cache is warmed up, one simple database query plus one batch cache access returns all the information.

As for how to implement batch cache access:

- Local cache: extremely efficient — a simple for loop is enough.
- memcached: use the mget command.
- Redis: if the cached object structure is simple, use mget or hmget; if the structure is complex, consider pipeline or a Lua script.

Scheme 1 suits data that rarely changes, such as leaderboards or home-page news feeds. Scheme 2 suits most pagination scenarios and composes well with other data sources.
For example, in a search system, we can obtain the blog id list from the filter conditions, then quickly fetch the list of blogs through the approach above.
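Scheme 2 can be sketched in a few lines of Java. In this illustrative version, a HashMap stands in for the Redis/memcached mget step, and the `db` map stands in for the two SQL queries:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of scheme 2: cache blogs individually, assemble pages from id lists. */
public class BlogListCache {
    private final Map<Long, String> cache = new HashMap<>();

    public List<String> getPage(List<Long> pageIds, Map<Long, String> db) {
        Map<Long, String> hit = new HashMap<>();
        List<Long> missed = new ArrayList<>();
        for (Long id : pageIds) {                 // step 2: batch read, record misses
            String blog = cache.get(id);
            if (blog != null) hit.put(id, blog); else missed.add(id);
        }
        for (Long id : missed) {                  // "select * from blogs where id in (...)"
            String blog = db.get(id);
            cache.put(id, blog);                  // step 3: backfill the cache
            hit.put(id, blog);
        }
        List<String> result = new ArrayList<>();  // step 4: preserve page order
        for (Long id : pageIds) result.add(hit.get(id));
        return result;
    }
}
```

Once the cache is warm, `missed` is empty on most requests and the page is served from one id query plus one batch cache read, exactly as described above.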
# 03 Multi-level cache

First, why use a multi-level cache at all? A local cache is extremely fast, but its capacity is limited and it cannot be shared across processes. A distributed cache scales out easily, but in high-concurrency scenarios, if all data must be fetched from the remote cache, it is easy to saturate the network bandwidth and throughput drops. As the saying goes: **the closer the cache is to the user, the more efficient it is!** The benefit of a multi-level cache is that, under high concurrency, it raises the throughput of the whole system and reduces the pressure on the distributed cache.

In 2018, the e-commerce company I worked for needed to optimize the performance of the app's home-page interface. It took me about two days to complete the project, using a two-level cache plus Guava's lazy-loading mechanism. The overall architecture is shown below:

![](https://oscimg.oschina.net/oscnet/510c833e-95e2-4222-9d54-d8f97abc2888.png)

The cache read flow:

1. When the service gateway has just started, the local cache is empty, so it reads from Redis. If Redis has no data either, it calls the shopping-guide service via RPC to fetch the data, then writes it to both the local cache and Redis; if the Redis cache is not empty, the data is written into the local cache.

2. Because step 1 has warmed the local cache, subsequent requests read the local cache directly and return to the client.

3. Guava is configured with a refresh mechanism: at fixed intervals, the LoadingCache's custom thread pool (5 max threads, 5 core threads) calls the shopping-guide service and syncs the data to the local cache and Redis.

After the optimization, performance was good, with an average latency of about 5ms.
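The read flow in step 1 can be sketched minimally, with plain maps standing in for the Guava cache and Redis, and a function standing in for the RPC call to the shopping-guide service (all names are assumptions, not the real implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Sketch of the two-level read path: local cache -> Redis -> RPC, with backfill. */
public class TwoLevelCache {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final Map<String, String> redis;        // injected "remote" cache
    private final Function<String, String> rpc;     // shopping-guide service stand-in

    public TwoLevelCache(Map<String, String> redis, Function<String, String> rpc) {
        this.redis = redis;
        this.rpc = rpc;
    }

    public String get(String key) {
        String v = local.get(key);
        if (v != null) return v;        // L1 hit: the fast path after warm-up
        v = redis.get(key);
        if (v == null) {
            v = rpc.apply(key);         // fall back to the source service
            redis.put(key, v);          // backfill L2 ...
        }
        local.put(key, v);              // ... and warm L1 either way
        return v;
    }
}
```

After the first request, every read is an L1 hit and neither Redis nor the RPC service sees any traffic — which is where the bandwidth and throughput benefits come from.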
At first I thought the chance of problems was tiny. But one evening we suddenly found that the information displayed on the app's home page was sometimes one thing and sometimes another. In other words: although the LoadingCache threads kept calling the interface to refresh the cache, the local caches on different servers were not fully consistent. This illustrates two important points:

1. Lazy loading alone can still leave data inconsistent across machines.
2. The LoadingCache thread pool was not sized properly, causing tasks to pile up.

Our final solution:

1. Combine lazy loading with a message mechanism for cache updates: when the shopping-guide service's configuration changes, it notifies the service gateways to re-fetch the data and update their caches.
2. Increase the LoadingCache thread pool parameters appropriately, instrument the pool and monitor its usage; when the threads are saturated, raise an alarm, then adjust the pool parameters dynamically.
# In closing

Caching is a very important technique. If you can keep digging into it, from principle to practice, it should be among the most enjoyable things for an engineer.

This article is the start of a series on caching. It mostly describes the typical problems I have encountered in ten years of work, without going very deep into theory. Beyond that, I'd like to discuss with you how to learn a new technology systematically:

- Pick the classic books on the technology and understand the basic concepts.
- Build a mental map of the technology's knowledge.
- Practice: apply it in a production environment, or build your own wheel.
- Keep iterating: ask whether there is a better approach.

Later in the series I will cover more cache-related content, including high-availability mechanisms for caches and the principles of Codis. Welcome to stay tuned. If you have your own experience with caching, or there is something you'd like to see covered in more depth, feel free to leave a comment.
About the author: master's degree from a 985 university, former Amazon engineer, now a technical director at 58.com.

**Welcome to follow my personal account: IT People's Career Advancement**

Copyright notice

This article was created by [itread01]. When republishing, please include a link to the original. Thanks.

https://chowdera.com/2020/12/20201207160712771j.html