
Getting started with OpenResty

2021-09-15 04:25:14 roshilikang

This article is also collected at https://github.com/lkxiaolou/lkxiaolou — a star is welcome.

Introduction to OpenResty

OpenResty bundles a carefully curated set of Nginx modules (most of them developed independently by the OpenResty team), effectively turning Nginx into a powerful general-purpose web application platform. With it, web developers and systems engineers can use the Lua scripting language to drive the various C and Lua modules that Nginx supports, and quickly build high-performance web systems capable of handling 10K or even 1000K concurrent connections on a single machine. OpenResty's goal is to let your web service run entirely inside the Nginx server, making full use of Nginx's non-blocking I/O model to deliver consistently high-performance responses not only to HTTP clients, but also when talking to remote backends such as MySQL, PostgreSQL, Memcached, and Redis.

Installing OpenResty

See http://openresty.org/en/linux-packages.html for the official instructions.

Taking CentOS as an example:

  • wget https://openresty.org/package/centos/openresty.repo
  • mv openresty.repo /etc/yum.repos.d/
  • yum check-update
  • yum install openresty

A hello world program

  • mkdir -p /home/roshi/openresty/conf /home/roshi/openresty/logs
  • Write a configuration that outputs "hello world":
worker_processes  1;
error_log logs/error.log;
events {
    worker_connections 1024;
}

http {
    server {
        listen 6699;
        location / {
            default_type text/html;


            content_by_lua_block {
                ngx.say("HelloWorld")
            }
        }
    }
}
  • Run. With the installation method above, nginx is installed under the /usr/local/openresty directory, so execute:
/usr/local/openresty/nginx/sbin/nginx -p /home/roshi/openresty -c /home/roshi/openresty/conf/nginx.conf
  • Test:
curl http://127.0.0.1:6699
The response body should be "HelloWorld".

Processing flow

When OpenResty handles a request, its processing flow is shown in the figure below.

A practical example

The most common use of nginx is as a reverse proxy, for example distributing requests under the same domain name to different back-end clusters according to URL rules. For instance:

http://example.com/user/1 and http://example.com/product/1

are two requests under the same domain name, corresponding to users and products respectively. The back-end services are likely split into separate clusters, and in this case nginx makes the split easy.
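The URL-based split needs nothing beyond stock nginx. A minimal sketch (the upstream names and ports here are hypothetical):

```nginx
http {
    upstream user.cluster    { server 127.0.0.1:8001; }
    upstream product.cluster { server 127.0.0.1:8002; }

    server {
        listen 80;
        # Plain nginx can branch on the URL path alone:
        # /user/... and /product/... go to different clusters.
        location /user/    { proxy_pass http://user.cluster; }
        location /product/ { proxy_pass http://product.cluster; }
    }
}
```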

But if the distinguishing feature is not in a header or the URL — for example, it sits in the body of a POST request — plain nginx cannot support the split. This is where an OpenResty Lua script comes in.

I once ran into exactly this need: the same request had to be routed to different clusters, but the distinguishing feature was in neither a header nor the URL, because the early design never needed the distinction. The request could carry either a single item or a batch, and batch performance had become unsatisfactory, so a new cluster was needed to handle the batch case. Abstracted, the request looks like this:

curl -X "POST" -d '{"uids":[1,2]}' -H "Content-Type:application/json" 'http://127.0.0.1:6699/post'

The expectation is to split on the uids field in the body: when uids contains exactly one uid, route the request to backend A; when it contains more than one, route it to backend B.
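The routing rule itself is easy to state before wiring it into nginx. A sketch in Go (a hypothetical helper mirroring the Lua logic used later: exactly one uid routes to the single-uid upstream, everything else falls through to the batch upstream):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// request mirrors the JSON body sent by the client, e.g. {"uids":[1,2]}.
type request struct {
	Uids []int `json:"uids"`
}

// chooseUpstream applies the routing rule: exactly one uid goes to
// "single.uid"; anything else (including zero uids or an unparseable
// body) falls back to the default "multiple.uids".
func chooseUpstream(body []byte) string {
	var req request
	if err := json.Unmarshal(body, &req); err != nil {
		return "multiple.uids"
	}
	if len(req.Uids) == 1 {
		return "single.uid"
	}
	return "multiple.uids"
}

func main() {
	fmt.Println(chooseUpstream([]byte(`{"uids":[1]}`)))   // single.uid
	fmt.Println(chooseUpstream([]byte(`{"uids":[1,2]}`))) // multiple.uids
}
```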

Modify nginx.conf from the earlier version as follows:

worker_processes  1;
error_log logs/error.log info;
events {
    worker_connections 1024;
}

http {
   upstream single.uid {
       server host.docker.internal:8888;
   }
   upstream multiple.uids {
       server host.docker.internal:9999;
   }

    server {
        listen 6699;
        location / {
            default_type application/json;
            # default upstream
            set $upstream_name 'multiple.uids';

            rewrite_by_lua_block {

                -- cjson.safe returns nil instead of raising an error on bad JSON
                local cjson = require "cjson.safe"
                ngx.req.read_body()
                local body = ngx.req.get_body_data()
                if body then
                    ngx.log(ngx.INFO, "body=" .. body)
                    local data = cjson.decode(body)

                    if data and type(data) == "table" and type(data["uids"]) == "table" then
                        local count = 0
                        for _ in pairs(data["uids"]) do
                            count = count + 1
                        end

                        ngx.log(ngx.INFO, "count = " .. count)

                        if count == 1 then
                            ngx.var.upstream_name = "single.uid"
                        end
                    end
                end
            }
            proxy_pass http://$upstream_name;
        }
    }
}
  • The second line raises the log level to info, which makes debugging and observation easier
  • Two upstreams are defined, one per back end. Because my openresty runs in a docker container while the back-end services run on the host machine, host.docker.internal is used here in place of the back-end ip
  • rewrite_by_lua_block does the routing (refer to the processing flow chart above)
  • cjson parses the body and the entries in uids are counted. Note that everything inside the block is Lua code, whose syntax differs from nginx configuration; in Lua, the nginx variable is accessed as ngx.var.upstream_name

Here is the back-end code, written in Go using the echo framework:

package main

import (
   "github.com/labstack/echo/v4"
   "github.com/labstack/echo/v4/middleware"
   "os"
   "strconv"
)

type Response struct {
   Success bool `json:"success"`
   Message string `json:"message"`
   Port int `json:"port"`
   Uids []int `json:"uids"`
}

type Request struct {
   Uids []int `json:"uids"`
}

var port = 8888

func main() {

   e := echo.New()
   e.Use(middleware.Logger())
   e.Use(middleware.Recover())

   e.POST("/post", post)

   if len(os.Args) >= 2 {
      p, err := strconv.Atoi(os.Args[1])
      if err == nil {
         port = p
      }
   }
   e.Logger.Fatal(e.Start(":"+strconv.Itoa(port)))
}

func post(c echo.Context) error {
   req := Request{}
   if err := c.Bind(&req); err != nil {
      return c.JSON(500, Response{
         Success: false,
         Port:    port,
         Message: "bind body error",
      })
   }
   response := Response{
      Success: true,
      Port:    port,
   }
   for _, uid := range req.Uids {
      response.Uids = append(response.Uids, uid+100)
   }
   return c.JSON(200, response)
}

Start the two back-end instances listening on ports 8888 and 9999, then send requests to port 6699 (where nginx listens) and observe which backend answers.
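Since both instances run the same binary, the handler adds 100 to each uid and echoes its own port, so the JSON response tells you which cluster answered. That response construction, isolated (field names match the Response struct above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// response mirrors the backend's Response struct.
type response struct {
	Success bool   `json:"success"`
	Message string `json:"message"`
	Port    int    `json:"port"`
	Uids    []int  `json:"uids"`
}

// reply reproduces what the post handler returns on success:
// the serving port, plus each uid increased by 100.
func reply(port int, uids []int) response {
	r := response{Success: true, Port: port}
	for _, uid := range uids {
		r.Uids = append(r.Uids, uid+100)
	}
	return r
}

func main() {
	b, _ := json.Marshal(reply(8888, []int{1}))
	fmt.Println(string(b)) // {"success":true,"message":"","port":8888,"uids":[101]}
}
```

A single-uid request answered by port 8888 and a batch request answered by port 9999 confirms the rewrite phase picked the right upstream.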

Meanwhile, in the log /home/roshi/openresty/logs/error.log you can see the body and count lines printed by the Lua script.

Conclusion

Starting from installation, this article briefly introduced the basic principles of OpenResty and demonstrated its capabilities with a practical example. I hope it helps you get started with OpenResty.


Follow the WeChat official account "Master bug catcher" for back-end technology sharing: architecture design, performance optimization, source-code reading, troubleshooting, and lessons from pitfalls in practice.

Copyright notice
This article was created by roshilikang. Please include the original link when reposting. Thanks.
https://chowdera.com/2021/09/20210909112309742h.html
