Beats

Beats are a family of lightweight shippers that you should consider as a first solution for sending data to Elasticsearch. The two most common ones to use are:

  • Filebeat
  • Winlogbeat

Filebeat is used for log files, and also handles other general input types such as syslog and NetFlow data.

Winlogbeat is used to load Windows events into Elasticsearch and works well with Windows Event Forwarding.
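Winlogbeat's config follows the same shape as Filebeat's. A minimal winlogbeat.yml sketch — the channel list and localhost output here are assumptions, adjust to your environment:

```yaml
# Minimal winlogbeat.yml sketch; channels and output host are examples only
winlogbeat.event_logs:
  - name: Application
  - name: System
  - name: Security
  - name: ForwardedEvents    # channel populated by Windows Event Forwarding
    tags: [forwarded]

output.elasticsearch:
  hosts: ["localhost:9200"]
```

The ForwardedEvents entry is what ties Winlogbeat to Windows Event Forwarding: collector hosts receive forwarded events into that channel, and the tag lets you tell them apart from local events later.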

1 - Linux Installation

On Linux

A summary of the general docs. Review and adjust versions as needed.

If you haven’t already added the repo:

apt-get install apt-transport-https
apt install gnupg2
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list
apt update

apt install filebeat
systemctl enable filebeat

Filebeat uses a default config file at /etc/filebeat/filebeat.yml. If you don’t want to edit that, you can use the ‘modules’ to configure it for you. The module setup command also loads dashboard elements into Kibana, so you must already have Kibana installed to make use of it.

Here’s a simple test that reads local log files and writes the events to a file under /tmp:

mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.orig
vi /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
output.file:
  path: "/tmp/filebeat"
  filename: filebeat
  #rotate_every_kb: 10000
  #number_of_files: 7
  #permissions: 0600

2 - Windows Installation

Installation

Download the .zip version (the msi doesn’t include the service install script) from the URL below. Extract it, rename the folder to Filebeat, and move it to the C:\Program Files directory.

https://www.elastic.co/downloads/beats/filebeat

Start an admin PowerShell, change to that directory, and run the service install command. (Keep the shell open for later.)

PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-filebeat.ps1

Basic Configuration

Edit the filebeat config file.

write.exe filebeat.yml

You need to configure the input and output sections. The output is already set to Elasticsearch on localhost, so you only have to change the input path from the Unix to the Windows style.

  paths:
    #- /var/log/*.log
    - c:\programdata\elasticsearch\logs\*

Test as per normal

  .\filebeat.exe test config -e

Filebeat specific dashboards must be added to Kibana. Do that with the setup argument:

  .\filebeat.exe setup --dashboards

To start Filebeat in the foreground (to see any interesting messages)

  .\filebeat.exe -e

If you’re happy with the results, you can stop the application and then start the service.

  Ctrl-C
  Start-Service filebeat

Adapted from the guide at

https://www.elastic.co/guide/en/beats/filebeat/7.6/filebeat-getting-started.html

3 - NetFlow Forwarding

The NetFlow protocol is now implemented in Filebeat. Assuming you’ve installed Filebeat and configured Elasticsearch and Kibana, you can use this input module to auto-configure the inputs, indexes and dashboards.

./filebeat modules enable netflow
filebeat setup -e

If you are just testing and don’t want to add the full stack, you can set up the netflow input, which the module is a wrapper for.

filebeat.inputs:
- type: netflow
  max_message_size: 10KiB
  host: "0.0.0.0:2055"
  protocols: [ v5, v9, ipfix ]
  expiration_timeout: 30m
  queue_size: 8192
output.file:
  path: "/tmp/filebeat"
  filename: filebeat

Test the config as per normal:

filebeat test config -e

Consider dropping the fields you don’t care about, as there are a lot of them. Use the include_fields processor to limit what you take in:

processors:
  - include_fields:
      fields: ["destination.port", "destination.ip", "source.port", "source.mac", "source.ip"]

4 - Palo Example

# This filebeat config accepts TRAFFIC and SYSTEM syslog messages from a Palo Alto, 
# tags and parses them 

# This is an arbitrary port. The normal port for syslog is UDP 514
filebeat.inputs:
  - type: syslog
    protocol.udp:
      host: ":9000"

processors:
    # The message field will have "TRAFFIC" for netflow logs and we can
    # extract the details with a CSV decoder and array extractor
  - if:
      contains:
        message: ",TRAFFIC,"
    then:
      - add_tags:
          tags: "netflow"
      - decode_csv_fields:
          fields:
            message: csv
      - extract_array:
          field: csv
          overwrite_keys: true
          omit_empty: true
          fail_on_error: false
          mappings:
            source.ip: 7
            destination.ip: 8
            source.nat.ip: 9
            network.application: 14
            source.port: 24
            destination.port: 25
            source.nat.port: 26
      - drop_fields:
          fields: ["csv", "message"] 
    else:
        # The message field will have "SYSTEM,dhcp" for dhcp logs and we can 
        # do a similar process to above
      - if:
          contains:
            message: ",SYSTEM,dhcp"
        then:
        - add_tags:
            tags: "dhcp"
        - decode_csv_fields:
            fields:
              message: csv
        - extract_array:
            field: csv
            overwrite_keys: true
            omit_empty: true
            fail_on_error: false
            mappings:
              message: 14
        # The DHCP info can be further pulled apart using space as a delimiter
        - decode_csv_fields:
            fields:
              message: csv2
            separator: " "
        - extract_array:
            field: csv2
            overwrite_keys: true
            omit_empty: true
            fail_on_error: false
            mappings:
              source.ip: 4
              source.mac: 7
              hostname: 10
        - drop_fields:
            fields: ["csv","csv2"] # Can drop message too like above when we have watched a few        
  - drop_fields:
      fields: ["agent.ephemeral_id", "agent.hostname", "agent.id", "agent.type", "agent.version", "ecs.version","host.name","event.severity","input.type","hostname","log.source.address","syslog.facility", "syslog.facility_label", "syslog.priority", "syslog.priority_label","syslog.severity_label"]
      ignore_missing: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
output.elasticsearch:
  hosts: ["localhost:9200"]
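Note that the extract_array mappings above are zero-based indexes into the decoded CSV array. A quick way to sanity-check an offset before deploying, using numbered placeholder columns rather than a real PAN-OS log line:

```shell
# Build a line where the column at zero-based index N contains the value N,
# then confirm that extract_array index 7 is the 8th field (cut is one-based).
seq -s, 0 30 | cut -d, -f8    # prints 7
```

So a mapping like "source.ip: 7" reads the eighth comma-separated column of the message — worth double-checking against a few captured log lines, since PAN-OS field positions vary between log types and versions.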

5 - RADIUS Forwarding

Here’s an example of sending FreeRADIUS logs to Elasticsearch.

cat /etc/filebeat/filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/freeradius/radius.log
    include_lines: ['\) Login OK','incorrect']
    tags: ["radius"]
processors:
  - drop_event:
      when:
        contains:
          message: "previously"
  - if:
      contains:
        message: "Login OK"
    then: 
      - dissect:
          tokenizer: "%{key1} [%{source.user.id}/%{key3}cli %{source.mac})"
          target_prefix: ""
      - drop_fields:
          fields: ["key1","key3"]
      - script:
          lang: javascript
          source: >
            function process(event) {
                var mac = event.Get("source.mac");
                if(mac != null) {
                        mac = mac.toLowerCase();
                         mac = mac.replace(/-/g,":");
                         event.Put("source.mac", mac);
                }
              }            
    else:
      - dissect:
          tokenizer: "%{key1} [%{source.user.id}/<via %{key3}"
          target_prefix: ""
      - drop_fields: 
          fields: ["key1","key3"]
output.elasticsearch:
  hosts: ["http://logcollector.yourorg.local:9200"]
  allow_older_versions: true
setup.ilm.enabled: false
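The script processor above lower-cases the MAC address and swaps dashes for colons. You can check the same transform with a shell one-liner before trusting it in the pipeline:

```shell
# Same normalisation as the JavaScript processor:
# lower-case the hex digits and turn dashes into colons.
echo '00-1A-2B-3C-4D-5E' | tr 'A-F-' 'a-f:'    # prints 00:1a:2b:3c:4d:5e
```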

6 - Syslog Forwarding

You may have an older system or appliance that can only transmit syslog data. You can use Filebeat to accept that data and store it in Elasticsearch.

Add Syslog Input

Install Filebeat and test reception by writing the output to /tmp.

vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: syslog
  protocol.udp:
    host: ":9000"
output.file:
  path: "/tmp"
  filename: filebeat


sudo systemctl restart filebeat
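With the input listening, you can fire a hand-rolled syslog datagram at it from the same host to confirm events land in /tmp (this uses bash’s /dev/udp pseudo-device; the port matches the config above, the hostname and message are just examples):

```shell
# Send one syslog-style message over UDP, then look for it in the output file
printf '<134>%s testhost testapp: hello from filebeat test\n' \
  "$(date '+%b %d %H:%M:%S')" > /dev/udp/127.0.0.1/9000
```

The send succeeds whether or not anything is listening (UDP is fire-and-forget), so check the /tmp/filebeat output file to confirm Filebeat actually received it.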

pfSense Example

The instructions follow Netgate’s remote logging example.

Status -> System Logs -> Settings

Enable and configure. Internet rumor has it that it’s UDP only so the config above reflects that. Interpreting the output requires parsing the message section detailed in the filter log format docs.

'5,,,1000000103,bge1.1099,match,block,in,4,0x0,,64,0,0,DF,17,udp,338,10.99.147.15,255.255.255.255,2048,30003,318'

'5,,,1000000103,bge2,match,block,in,4,0x0,,84,1,0,DF,17,udp,77,157.240.18.15,205.133.125.165,443,61343,57'

'222,,,1000029965,bge2,match,pass,out,4,0x0,,128,27169,0,DF,6,tcp,52,205.133.125.142,205.133.125.106,5225,445,0,S,1248570004,,8192,,mss;nop;wscale;nop;nop;sackOK'

'222,,,1000029965,bge2,match,pass,out,4,0x0,,128,11613,0,DF,6,tcp,52,205.133.125.142,211.24.111.75,15305,445,0,S,2205942835,,8192,,mss;nop;wscale;nop;nop;sackOK'
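The filter log message is plain CSV, so while deciding how to parse it properly you can pick fields out with awk. For the IPv4 lines above, the action, protocol name, addresses and ports sit at the positions used below (one-based awk field numbers):

```shell
# action=$7, protocol name=$17, src=$19, dst=$20, src port=$21, dst port=$22
echo '5,,,1000000103,bge1.1099,match,block,in,4,0x0,,64,0,0,DF,17,udp,338,10.99.147.15,255.255.255.255,2048,30003,318' |
  awk -F, '{print $7, $17, $19":"$21" -> "$20":"$22}'
# prints: block udp 10.99.147.15:2048 -> 255.255.255.255:30003
```

Verify the positions against the filter log format docs for your pfSense version before building processors around them, since the layout differs for IPv6 and for protocols without ports (e.g. ICMP).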