NXLOG/Filebeat + Logstash: distinguishing different log types on the same collection endpoint

2022-12-20   ES  

Online examples typically show how to collect a single type of log and output it to Logstash, but a system produces many types of logs. Can the same collection endpoint distinguish different log types?

In the setup below, NXLOG acts as the client-side collector, sends logs to Logstash over TCP, and Logstash forwards them to Elasticsearch.
Prerequisites:

  1. Install Elasticsearch, Logstash, NXLOG, and Filebeat
  2. NXLOG/Filebeat run on Windows; Elasticsearch/Logstash run on Linux

NXLOG configuration (nxlog.conf):

<Input in_donglilog>
	Module im_file
	File "D:\\jar\\dongli\\logs\\spring-boot.log"
	SavePos TRUE
</Input>

<Output out_donglitcp>
	Module om_tcp
	Host 192.168.1.238
	Port 514
</Output>

<Route 1>
	Path in_donglilog => out_donglitcp
</Route>

This collects D:\jar\dongli\logs\spring-boot.log and outputs it to 192.168.1.238:514.

logstash configuration:

input {
	tcp {
		port => 514
		type=>"plm"
   	}
}
output{
	if [type] == "plm"{
		elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "kelian-%{+YYYY.MM.dd}"
		}
  }
}

Logstash listens on port 514 over TCP. Here Logstash runs in server mode (the other mode is client mode, used to collect and send data), meaning it listens for data arriving on the port.
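To make the server-mode behavior concrete, here is a minimal Python sketch of the same TCP transport: a stand-in "Logstash" thread listens on a local port and reads newline-delimited events, while a stand-in "NXLOG" client connects and sends one log line. The host, port, and log content are illustrative only (the real pipeline uses 192.168.1.238:514).

```python
import socket
import threading

received = []

def fake_logstash_server(sock):
    """Accept one connection and read newline-delimited log lines,
    roughly the way the Logstash tcp input does in server mode."""
    conn, _ = sock.accept()
    with conn:
        buf = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            buf += chunk
    for line in buf.decode().splitlines():
        received.append(line)

# Bind to an ephemeral local port (the article's port 514 would need privileges).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=fake_logstash_server, args=(server,))
t.start()

# The "NXLOG" side: connect and send one log line over TCP.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"2022-12-20 10:00:00 INFO started\n")
client.close()
t.join()
server.close()

print(received)  # one event per line, as the server saw it
```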

Most online examples stop there: listen on a port, receive data, send it to Elasticsearch.
Now suppose that besides the dongli log at D:\jar\dongli\logs\spring-boot.log we also want to collect the log of another application, say kelian. The two log formats are different. The NXLOG side stays simple; the question is how Logstash can distinguish the different logs it receives and create different indices in Elasticsearch, since we cannot write both applications' logs into the same index.

The simplest method: open a different listening port for each application.
NXLOG configuration:

<Input in_donglilog>
	Module im_file
	File "D:\\jar\\dongli\\logs\\spring-boot.log"
	SavePos TRUE
</Input>

<Output out_donglitcp>
	Module om_tcp
	Host 192.168.1.238
	Port 514
</Output>

<Route 1>
	Path in_donglilog => out_donglitcp
</Route>

<Input in_kelianlog>
	Module im_file
	File "D:\\jar\\kelaien\\logs\\spring-boot.log"
	SavePos TRUE
</Input>
 <Output out_keliantcp>
	Module om_tcp
	Host 192.168.1.238
	Port 515
</Output>
<Route 2>
	Path in_kelianlog => out_keliantcp
</Route>

logstash configuration:

input {
	tcp {
		port => 514
		type=>"dongli"
   	}
   	tcp {
		port => 515
		type=>"kelian"
   	}
}
output{
	if [type] == "dongli"{
		elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "dongli-%{+YYYY.MM.dd}"
		}
  }
  if [type] == "kelian"{
		elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "kelian-%{+YYYY.MM.dd}"
		}
  }
}

This is the easiest approach, but I don't want to use it: every added application needs another port, and every added port must be exposed to the outside world. On Alibaba Cloud ECS that also means editing the security group rules each time. It is troublesome, but it is a workable option.

It would be ideal if each event could carry a piece of data that identifies the log type. Unfortunately, NXLOG does not provide such an option directly, so what can we do?
Modify the transmitted data: for every line NXLOG sends to Logstash, prepend a special string; Logstash then extracts that string and creates a different index based on it.
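The tagging scheme itself is trivial; this Python sketch (illustrative names, assumed "appname original-line" format) shows the round trip that the NXLOG processor and the Logstash filter perform between them:

```python
def tag(line: str, appname: str) -> str:
    """What the NXLOG side does: prepend the application name."""
    return appname + " " + line

def untag(line: str) -> tuple:
    """What the Logstash side does: split the key back off the message."""
    key, _, message = line.partition(" ")
    return key, message

sent = tag("2022-12-20 INFO user login", "dongli")
key, message = untag(sent)
print(key, "->", message)
```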

The principle behind this is Logstash's field reference feature: any value present on an incoming event can be referenced in the configuration.

nxlog configuration:

<Input in_donglilog>
	Module im_file
	File "D:\\jar\\dongli\\logs\\spring-boot.log"
	SavePos TRUE
</Input>
<Input in_kelianlog>
	Module im_file
	File "D:\\jar\\kelaien\\logs\\spring-boot.log"
	SavePos TRUE
</Input>


<Processor proc_donglilog>
	Module      pm_transformer
	Exec $raw_event = "dongli " + $raw_event;
</Processor>
<Processor proc_kelianlog>
	Module      pm_transformer
	Exec $raw_event = "kelian " + $raw_event;
</Processor>

<Output out_donglitcp>
	Module om_tcp
	Host 192.168.1.238
	Port 514
</Output>
 <Output out_keliantcp>
	Module om_tcp
	Host 192.168.1.238
	Port 514
</Output>
 

<Route 1>
	Path in_donglilog => proc_donglilog => out_donglitcp
</Route>

<Route 2>
	Path in_kelianlog => proc_kelianlog => out_keliantcp
</Route>

The Processor modules prepend the application name to each log line.
logstash configuration:

input {
	tcp {
		port => 514
		type=>"plm"
   	}
}
filter{
	if [type] == "plm" {
		grok{
			match=>{
				"message" => "%{WORD:key} %{WORD}"
			}
		}
		mutate{
			gsub=>["message","%{key}",""]
	    }
	}
}
output{
	if [type] == "plm"{
		if [key] == "dongli" {	
			elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "dongli-%{+YYYY.MM.dd}"
			}
		}
		if [key] == "kelian" {	
			elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "kelian-%{+YYYY.MM.dd}"
			}
		}
	}
}

The key processing happens in the filter block:

grok {
	match => {
		# capture the application name into the "key" field
		"message" => "%{WORD:key} %{WORD}"
	}
}
mutate {
	# replace the application name in message with an empty string
	gsub => ["message","%{key}",""]
}

In the output block, the extracted field can then be tested via field references:

if [type] == "plm"{
		if [key] == "dongli" {	
		}
		if [key] == "kelian" {	
		}
	}
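A rough Python analogue of that grok/mutate pair may make the extraction clearer. Grok's WORD roughly corresponds to \w+; the regex and sample line below are illustrative, and the single-occurrence removal is a simplification of gsub:

```python
import re

# Rough equivalent of the grok pattern "%{WORD:key} %{WORD}":
# only the first word is captured as "key".
GROK = re.compile(r"^(?P<key>\w+) \w+")

def extract_key(message: str):
    """Pull the leading application name out of the message."""
    m = GROK.match(message)
    return m.group("key") if m else None

def strip_key(message: str, key: str) -> str:
    """Rough equivalent of gsub => ["message", key, ""] plus trimming."""
    return re.sub(re.escape(key), "", message, count=1).lstrip()

msg = "dongli 2022-12-20 ERROR timeout"
key = extract_key(msg)
print(key, "|", strip_key(msg, key))
```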

Disadvantage
There is one drawback: this only works for single-line logs. It is unsuitable when several lines should be combined into one event (for example an exception stack trace), because the keyword added to the head of every line corrupts the data: when the multiline codec filters the input, it can no longer tell which lines belong together, so combining multiple lines goes wrong.

codec => multiline {
	pattern => "^\["
	negate => true
	what => "previous"
}
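This small Python sketch shows why the head prefix breaks multiline detection. A typical "new event starts with a timestamp" pattern (an illustrative one is used here) matches the raw line but no longer matches once the application name is prepended:

```python
import re

# A typical "new event starts with a timestamp" multiline pattern.
STARTS_EVENT = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}")

plain = "2022-12-20 ERROR boom"
tagged = "dongli 2022-12-20 ERROR boom"   # after the prefix is added

print(bool(STARTS_EVENT.match(plain)))   # recognised as a new event
print(bool(STARTS_EVENT.match(tagged)))  # prefix hides the timestamp
```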

So place the keyword at the end of each line instead.
NXLOG configuration:


<Input in_donglilog>
	Module im_file
	File "D:\\jar\\dongli\\logs\\spring-boot.log"
	SavePos TRUE
</Input>
<Input in_kelianlog>
	Module im_file
	File "D:\\jar\\kelaien\\logs\\spring-boot.log"
	SavePos TRUE
</Input>


<Processor proc_donglilog>
	Module      pm_transformer
	Exec $raw_event = $raw_event + "(dongli)";
</Processor>
<Processor proc_kelianlog>
	Module      pm_transformer
	Exec $raw_event = $raw_event + "(kelian)";
</Processor>

<Output out_donglitcp>
	Module om_tcp
	Host 192.168.1.238
	Port 514
</Output>
 <Output out_keliantcp>
	Module om_tcp
	Host 192.168.1.238
	Port 514
</Output>
 

<Route 1>
	Path in_donglilog => proc_donglilog => out_donglitcp
</Route>

<Route 2>
	Path in_kelianlog => proc_kelianlog => out_keliantcp
</Route>

Note this code:

<Processor proc_donglilog>
	Module      pm_transformer
	Exec $raw_event = $raw_event + "(dongli)";
</Processor>
<Processor proc_kelianlog>
	Module      pm_transformer
	Exec $raw_event = $raw_event + "(kelian)";
</Processor>

The keyword is appended at the tail, enclosed in parentheses.
logstash configuration:

input {
	tcp {
		port => 514
		codec => multiline{
			pattern => "^\d{4}(\-|\/|.)\d{1,2}\1\d{1,2}"
			negate => true
			what => "previous"
		}
		type=>"plm"
   	}
}
filter{
	if [type] == "plm" {
		grok{
			match=>{
				"message" => "(?<ckey>[(]\w+[)\\r])"
			}
		}
		mutate{
			gsub=>["message","[(]%{ckey}[)]",""]
			#gsub=>["ckey","\r",""]
	    }
	}
}
output{
	if [type] == "plm"{
		if [ckey] == "(dongli)" {	
			elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "dongli-%{+YYYY.MM.dd}"
			}
		}
		if [ckey] == "(kelian)" {	
			elasticsearch {
				hosts => ["127.0.0.1:9200"]
				index => "kelian-%{+YYYY.MM.dd}"
			}
		}
	}
}

The main processing is in the filter:

if [type] == "plm" {
		grok{
			match=>{
				"message" => "(?<ckey>[(]\w+[)\\r])"
			}
		}
		mutate{
			gsub=>["message","[(]%{ckey}[)]",""]
	    }
	}

This extracts the keyword and then deletes it from the message field.
This method works, but it is not an elegant solution.

Filebeat can attach identifying fields to events natively, and it also runs stably on Windows, so I recommend Filebeat over NXLOG.
filebeat.yml configuration:

filebeat.inputs:
- type: log
  enabled: true

  paths:
    - D:\jar\dongli\logs\spring-boot.log
  fields:
    appname: dongli
- type: log
  enabled: true
  paths:
    - D:\jar\kelaien\logs\spring-boot.log
  fields:
    appname: kelaien

logstash configuration

input{
	beats  {
		port => 515
		type=>"beatss"
   	}
}

output{
	if [fields][appname] == "dongli"{
		elasticsearch {
			hosts => ["127.0.0.1:9200"]
			index => "dongli-%{+YYYY.MM.dd}"
		}
	}
	if [fields][appname] == "kelaien"{
		elasticsearch {
			hosts => ["127.0.0.1:9200"]
			index => "kelaien-%{+YYYY.MM.dd}"
		}
	}
}
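What the output block does with the Filebeat event can be sketched in Python: read fields.appname off the event and build the dated index name from it. The event dict and function name are illustrative only:

```python
from datetime import date

def index_for(event: dict, today: date):
    """Pick the Elasticsearch index name from the fields.appname value,
    mirroring the if-branches in the Logstash output block."""
    appname = event.get("fields", {}).get("appname")
    if appname in ("dongli", "kelaien"):
        return "%s-%s" % (appname, today.strftime("%Y.%m.%d"))
    return None  # unmatched events fall through, as in the config

event = {"message": "2022-12-20 INFO ok", "fields": {"appname": "dongli"}}
print(index_for(event, date(2022, 12, 20)))  # dongli-2022.12.20
```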

The above handles single-line logs. For multi-line logs, the multiline configuration goes into Filebeat instead of Logstash.
filebeat.yml configuration:

filebeat.inputs:
- type: log
  enabled: true

  paths:
    - D:\jar\dongli\logs\spring-boot.log
  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}'
    negate: true
    match: after
  fields:
    appname: dongli
- type: log
  enabled: true
  paths:
    - D:\jar\kelaien\logs\spring-boot.log
  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}'
    negate: true
    match: after
  fields:
    appname: kelaien

The key multiline configuration is:

  multiline:
    pattern: '^\d{4}-\d{1,2}-\d{1,2}'
    negate: true
    match: after
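These multiline settings (negate: true, match: after) mean: a line that does not match the timestamp pattern is appended to the previous event. A short Python sketch of that grouping logic, with an illustrative stack-trace sample:

```python
import re

# Lines matching this pattern start a new event; with negate: true and
# match: after, non-matching lines are appended to the previous event.
PATTERN = re.compile(r"^\d{4}-\d{1,2}-\d{1,2}")

def group_events(lines):
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)           # line starts a new event
        else:
            events[-1] += "\n" + line     # continuation line (e.g. stack trace)
    return events

lines = [
    "2022-12-20 ERROR boom",
    "  at Foo.bar(Foo.java:1)",
    "2022-12-20 INFO recovered",
]
print(group_events(lines))
```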

