
Elk multi tenant scheme

2020-12-07 09:54:29 zlt2000


1. Preface

Log analysis is one of the most important means of debugging and troubleshooting a system. Modern distributed systems span many instances and machines, so building a unified log system is essential. ELK provides a complete, fully open-source solution whose components integrate seamlessly and efficiently cover a wide range of use cases, making it one of the mainstream choices today.

This article describes how to share a single ELK log system across multiple environments and multiple systems, while keeping their data and views isolated so that tenants do not affect each other.

 

2. Isolation

A common ELK architecture is shown in the figure below. It consists of Elasticsearch, Logstash, Kibana, and FileBeat.
[figure]

A FileBeat instance is deployed on each application server as the log collector: its input plugin reads data from log files and forwards it to Logstash, whose filter plugins parse and structure the log data before sending it to Elasticsearch for storage. Finally, Kibana provides visualization and analysis.

PS: Each of the ELK components in the figure above needs its own isolation handling.

 

2.1. FileBeat Isolation

Since each machine runs a single Beat instance as its log collector, FileBeat itself needs no isolation configuration. However, as the entry point for all data, it must pass the tenant information downstream, as shown below:

[figure]

project (project name) and env (environment) serve as the tenant isolation markers.
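For reference, a minimal Filebeat configuration sketch that attaches these tenant markers (the log path and field values here are illustrative, not from the original setup):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/logs/mp/*.log        # illustrative path
    # Tenant markers: attached to every event under the "fields" key,
    # so Logstash can reference them as [fields][project], [fields][env], etc.
    fields:
      project: mp
      env: pre
      docType: syslog

output.logstash:
  hosts: ["localhost:5044"]
```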

 

2.2. Logstash Isolation

Because each project's log format may differ, each project has its own personalized configuration file. Isolation rules must therefore be defined to keep these log-parsing configuration files separate.

Start Logstash with the following command, specifying config/conf/ as the configuration directory and enabling automatic (hot) reloading of configuration files:

bin/logstash -f config/conf/ --config.reload.automatic

 

The log-parsing configuration files can be organized for isolation as shown below:

[figure]

(1) 01-input-beats.conf

A shared, general-purpose input configuration used by all tenants to receive data from Filebeat:

input {
  beats {
    port => 5044
  }
}

 

(2) 02-output-es.conf

A shared, general-purpose output configuration used by all tenants; it creates indices according to the defined index naming rules and writes the log data into Elasticsearch.

Three fields must be added in Filebeat: project, env, and docType, representing the project name, environment, and log type.

output {
  elasticsearch {
    hosts => ["localhost"]
    user => "elastic"
    password => "changeme"
    index => "%{[fields][project]}-%{[fields][env]}-%{[fields][docType]}-%{+YYYY.MM.dd}"
  }
}

The host, user name, and password should be changed to match your actual environment.

 

(3) mp.conf

A per-tenant, personalized log-parsing configuration: each tenant creates its own file containing its own filter rules.

filter {
  if [fields][project] == "mp" and [fields][env] == "pre" and [fields][docType] == "syslog" {
    grok {
      ..........
    }
  }
}

PS: The if statement is required so that the filter only processes log data belonging to its own tenant!
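For illustration only (the real grok pattern depends on each tenant's log format), a filled-in mp.conf might look like this, assuming log lines such as `2020-12-01 10:00:00.000 INFO [main] com.example.Demo - started`:

```conf
filter {
  if [fields][project] == "mp" and [fields][env] == "pre" and [fields][docType] == "syslog" {
    grok {
      # Parse timestamp, level, thread, class, and message from the line
      match => {
        "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} \[%{DATA:thread}\] %{JAVACLASS:class} - %{GREEDYDATA:msg}"
      }
    }
    date {
      # Use the timestamp parsed from the line as the event time
      match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSS"]
    }
  }
}
```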

 

2.3. Elasticsearch Isolation

Isolation is achieved by giving each tenant its own index with a distinct name, which provides physical isolation. As described earlier, when Logstash builds the index from the structured data, the index name is already generated dynamically from the variables passed in by Filebeat.

The index naming rule is: ${project}-${env}-${docType}-%{+YYYY.MM.dd}

For example: mp-pre-syslog-2020.12.01
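To confirm that the per-tenant indices are being created as expected, they can be listed with the _cat API (the host and credentials are the same placeholders as in the output configuration above):

```shell
curl -u elastic:changeme "http://localhost:9200/_cat/indices/mp-pre-*?v"
```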

 

2.4. Kibana Isolation

Kibana isolation is achieved with multiple workspaces (Spaces). Each tenant creates its own independent workspace to hold its index data, views, and other saved objects, which are not visible to other tenants.

The workspace configuration process is:

  1. Create a workspace
  2. Create a role (configure permissions)
  3. Create a user (bind the role)
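The same three steps can also be scripted against the Kibana and Elasticsearch APIs instead of the UI. The sketch below assumes a Kibana 7.x-era stack on localhost; the space, role, and user names are illustrative:

```shell
# 1. Create the workspace (Kibana Spaces API)
curl -u elastic:changeme -X POST "http://localhost:5601/api/spaces/space" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"id": "mp-pre", "name": "mp-pre"}'

# 2. Create a role with index privileges and workspace privileges (Kibana role API)
curl -u elastic:changeme -X PUT "http://localhost:5601/api/security/role/mp-pre-role" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{
        "elasticsearch": {"indices": [{"names": ["mp-pre-*"], "privileges": ["read"]}]},
        "kibana": [{"base": ["all"], "spaces": ["mp-pre"]}]
      }'

# 3. Create a user bound to that role (Elasticsearch security API)
curl -u elastic:changeme -X POST "http://localhost:9200/_security/user/mp-user" \
  -H "Content-Type: application/json" \
  -d '{"password": "a-strong-password", "roles": ["mp-pre-role"]}'
```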

2.4.1 Create a workspace

2.4.1.1 Log in as super administrator

Log in to Kibana with the super administrator account elastic and select the default workspace.
[figure]

2.4.1.2 Open the management page

[figure]

2.4.1.3 Create the workspace

Create the workspace; the set of visible features can be customized (all are shown by default).
[figure]

 

2.4.2 Create a role bound to the workspace

Create a new role and grant it the corresponding index privileges and workspace privileges.
[figure]

 

2.4.3 Create a user

Create a user and bind it to the role for its own workspace.
[figure]

PS: The user can then only see the indices, dashboards, and other objects within their own workspace.

 

3. Summary

Each tenant requires isolation handling in every ELK component:

  1. Filebeat: distinguishes tenants and passes the tenant information downstream
  2. Logstash: a separate, personalized filter configuration file per tenant
  3. Elasticsearch: a separate index per tenant, following the naming convention, for physical isolation
  4. Kibana: isolation via multiple workspaces; data and dashboards are not visible across tenants

 

PS: The isolation steps are somewhat tedious, but you can later build your own log management system that performs the steps above through a graphical interface.

 


Copyright notice
This article was written by [zlt2000]. Please include a link to the original when reposting. Thank you.
https://chowdera.com/2020/12/20201207095118051r.html