Kibana Filebeat Index Pattern is not working

Kibana is supposed to use Filebeat data for a dashboard, but it doesn't work. I've done the necessary configuration and I can't see an error in filebeat.yml, yet the service won't start, and the filebeat-* index pattern cannot be created in Kibana. I've added the error output and the filebeat.yml contents below. How can I fix this?
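
Note: Kibana only offers to create a filebeat-* index pattern once at least one matching index actually exists in Elasticsearch, and since the service below never starts, no data is ever shipped. Assuming Elasticsearch is listening on localhost:9200, a quick check for matching indices:

dev@dev-Machine:~$ curl 'http://localhost:9200/_cat/indices?v'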



filebeat version 1.3.1 (amd64)



dev@dev-Machine:~$ service filebeat status

filebeat.service - filebeat
   Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
   Active: failed (Result: start-limit-hit) since Fri 2018-11-23 02:34:06 +03; 7h ago
     Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
  Process: 822 ExecStart=/usr/bin/filebeat -c /etc/filebeat/filebeat.yml (code=exited, status=1/FAILURE)
 Main PID: 822 (code=exited, status=1/FAILURE)

Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'exit-code'.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Service hold-off time over, scheduling restart.
Nov 23 02:34:06 dev-Machine systemd[1]: Stopped filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Start request repeated too quickly.
Nov 23 02:34:06 dev-Machine systemd[1]: Failed to start filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'start-limit-hit'.
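
The status output only shows the exit code, not the underlying error. Assuming a systemd host, the actual startup error can be read from the journal, or by running Filebeat in the foreground against the same config (the -e flag logs to stderr; Filebeat 1.x also accepts -configtest to validate the config file and exit):

dev@dev-Machine:~$ journalctl -u filebeat --no-pager -n 50
dev@dev-Machine:~$ sudo /usr/bin/filebeat -c /etc/filebeat/filebeat.yml -e
dev@dev-Machine:~$ sudo /usr/bin/filebeat -c /etc/filebeat/filebeat.yml -configtest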


filebeat.yml



################### Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /var/log/*.log
        #- c:\programdata\elasticsearch\logs\*

      # Type of the files. Based on this the way the file is read is decided.
      # The different types cannot be mixed in one prospector.
      #
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: log

      # Exclude lines. A list of regular expressions to match. It drops the lines
      # that match any regular expression from the list. include_lines is called
      # before exclude_lines. By default, no lines are dropped.
      # exclude_lines: ["^DBG"]

      # Include lines. A list of regular expressions to match. It exports the lines
      # that match any regular expression from the list. include_lines is called
      # before exclude_lines. By default, all lines are exported.
      # include_lines: ["^ERR", "^WARN"]

      # Exclude files. A list of regular expressions to match. Filebeat drops the
      # files that match any regular expression from the list. By default, no files
      # are dropped.
      # exclude_files: [".gz$"]

      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files for filtering.
      #fields:
      #  level: debug
      #  review: 1

      # Set to true to store the additional fields as top-level fields instead of
      # under the "fields" sub-dictionary.
      #fields_under_root: false

      # Ignore files which were modified more than the defined timespan in the past.
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #ignore_older: 0

      # Close older closes the file handler for files which were not modified
      # for longer than close_older.
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #close_older: 1h

      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: log
      #document_type: log

      # How often these files should be checked for changes. In case it is set
      # to 0s, it is done as often as possible. Default: 10s
      #scan_frequency: 10s

      # Defines the buffer size every harvester uses when fetching the file
      #harvester_buffer_size: 16384

      # Maximum number of bytes a single log event can have.
      # All bytes after max_bytes are discarded and not sent. The default is 10MB.
      # This is especially useful for multiline log messages which can get large.
      #max_bytes: 10485760

      # Multiline can be used for log messages spanning multiple lines. This is
      # common for Java stack traces or C line continuation.
      #multiline:

        # The regexp pattern that has to be matched. The example pattern matches
        # all lines starting with [
        #pattern: ^\[

        # Defines if the pattern set under pattern should be negated or not.
        # Default is false.
        #negate: false

        # Default is 500
        #max_lines: 500

        # Default is 5s.
        #timeout: 5s

      #tail_files: false

      # Every time a new line appears, backoff is reset to the initial value.
      #backoff: 1s

      # If a new line is written to the file after having backed off multiple
      # times, it takes a maximum of 10s to read the new line.
      #max_backoff: 10s

      # The backoff factor defines how fast the algorithm backs off. The bigger
      # the backoff factor, the faster the max_backoff value is reached. If this
      # value is set to 1, no backoff will happen. The backoff value will be
      # multiplied each time with the backoff_factor until max_backoff is reached.
      #backoff_factor: 2

      # This option closes a file as soon as the file name changes.
      # This config option is recommended on Windows only. Filebeat keeps the
      # files it's reading open. This can cause issues when the file is removed,
      # as the file will not be fully removed until Filebeat also closes the
      # reading. Filebeat closes the file handler after ignore_older. During this
      # time no new file with the same name can be created. Turning this feature
      # on can, on the other hand, lead to loss of data on rotated files. It can
      # happen that after file rotation the beginning of the new file is skipped,
      # as the reading starts at the end. We recommend leaving this option set to
      # false but lowering the ignore_older value to release files faster.
      #force_close_files: false

    # Additional prospector
    #-
      # Configuration to use stdin input
      #input_type: stdin

  # General filebeat configuration options
  #
  # Event count spool threshold - forces network flush if exceeded
  #spool_size: 2048

  # Enable async publisher pipeline in filebeat (Experimental!)
  #publish_async: false

  # Defines how often the spooler is flushed. After idle_timeout the spooler is
  # flushed even though spool_size is not reached.
  #idle_timeout: 5s

  # Name of the registry file. By default it is put in the current working
  # directory. If the working directory is changed between runs of
  # filebeat, indexing starts from the beginning again.
  registry_file: /var/lib/filebeat/registry

  # Full path to a directory with additional prospector configuration files.
  # Each file must end with .yml. These config files must have the full filebeat
  # config part inside, but only the prospector part is processed. All global
  # options like spool_size are ignored.
  # The config_dir MUST point to a different directory than the one the main
  # filebeat config file is in.
  #config_dir:

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

  ### Elasticsearch as output
  # elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200).
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    # hosts: ["localhost:9200"]

    # Optional protocol and basic auth credentials.
    #protocol: "https"
    #username: "test"
    #password: "test"

    # Number of workers per Elasticsearch host.
    #worker: 1

    # Optional index name. The default is "filebeat" and generates
    # [filebeat-]YYYY.MM.DD keys.
    #index: "filebeat"

    # A template is used to set the mapping in Elasticsearch.
    # By default template loading is disabled and no template is loaded.
    # These settings can be adjusted to load your own template or overwrite existing ones.
    #template:

      # Template name. By default the template name is filebeat.
      #name: "filebeat"

      # Path to template file
      #path: "filebeat.template.json"

      # Overwrite existing template
      #overwrite: false

    # Optional HTTP path
    #path: "/elasticsearch"

    # Proxy server URL
    #proxy_url: http://proxy:3128

    # The number of times a particular Elasticsearch index operation is attempted.
    # If the indexing operation doesn't succeed after this many retries, the
    # events are dropped. The default is 3.
    #max_retries: 3

    # The maximum number of events to bulk in a single Elasticsearch bulk API
    # index request. The default is 50.
    #bulk_max_size: 50

    # Configure HTTP request timeout before failing a request to Elasticsearch.
    #timeout: 90

    # The number of seconds to wait for new events between two bulk API index
    # requests. If `bulk_max_size` is reached before this interval expires,
    # additional bulk index requests are made.
    #flush_interval: 1

    # Boolean that sets if the topology is kept in Elasticsearch. The default is
    # false. This option makes sense only for Packetbeat.
    #save_topology: false

    # The time to live in seconds for the topology information that is stored in
    # Elasticsearch. The default is 15 seconds.
    #topology_expire: 15

    # TLS configuration. By default it is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client certificate key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host names.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS-based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites:

      # Configure curve types for ECDHE based cipher suites
      #curve_types:

      # Configure minimum TLS version allowed for connection to logstash
      #min_version: 1.0

      # Configure maximum TLS version allowed for connection to logstash
      #max_version: 1.2

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]

    # Number of workers per Logstash host.
    #worker: 1

    # The maximum number of events to bulk into a single batch window. The
    # default is 2048.
    #bulk_max_size: 2048

    # Set gzip compression level.
    #compression_level: 3

    # Optionally load-balance events between the Logstash hosts.
    #loadbalance: true

    # Optional index name. The default index name depends on each beat.
    # For Packetbeat the default is set to packetbeat, for Topbeat
    # to topbeat and for Filebeat to filebeat.
    #index: filebeat

    # Optional TLS. By default it is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client certificate key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host names.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS-based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites:

      # Configure curve types for ECDHE based cipher suites
      #curve_types:

  ### File as output
  #file:
    # Path to the directory where to save the generated files. The option is mandatory.
    #path: "/tmp/filebeat"

    # Name of the generated files. The default is `filebeat` and it generates
    # files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
    #filename: filebeat

    # Maximum size in kilobytes of each file. When this size is reached, the files
    # are rotated. The default value is 10 MB.
    #rotate_every_kb: 10000

    # Maximum number of files under path. When this number of files is reached, the
    # oldest file is deleted and the rest are shifted from last to first. The default
    # is 7 files.
    #number_of_files: 7

  ### Console output
  # console:
    # Pretty print JSON event
    #pretty: false

############################# Shipper #########################################

shipper:
  # The name of the shipper that publishes the network data. It can be used to group
  # all the transactions sent by a single shipper in the web interface.
  # If this option is not defined, the hostname is used.
  #name:

  # The tags of the shipper are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  #tags: ["service-X", "web-tier"]

  # Uncomment the following if you want to ignore transactions created
  # by the server on which the shipper is installed. This option is useful
  # to remove duplicates if shippers are installed on multiple servers.
  #ignore_outgoing: true

  # How often (in seconds) shippers publish their IPs to the topology map.
  # The default is 10 seconds.
  #refresh_topology_freq: 10

  # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
  # All the IPs will be deleted afterwards. Note that the value must be higher than
  # refresh_topology_freq. The default is 15 seconds.
  #topology_expire: 15

  # Internal queue size for single events in the processing pipeline.
  #queue_size: 1000

  # Configure local GeoIP database support.
  # If no paths are configured, geoip is disabled.
  #geoip:
    #paths:
    #  - "/usr/share/GeoIP/GeoLiteCity.dat"
    #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"

############################# Logging #########################################

# There are three options for the log output: syslog, file, stderr.
# On Windows systems, the log files are by default sent to the file output;
# on all other systems, by default to syslog.
logging:

  # Send all logging output to syslog. On Windows the default is false, otherwise
  # the default is true.
  #to_syslog: true

  # Write all logging output to files. Beats automatically rotate files if the
  # rotateeverybytes limit is reached.
  #to_files: false

  # To enable logging to files, the to_files option has to be set to true.
  files:
    # The directory where the log files will be written to.
    #path: /var/log/mybeat

    # The name of the files where the logs are written to.
    #name: mybeat

    # Configure log file size limit. If the limit is reached, the log file will be
    # automatically rotated.
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. The oldest files will be deleted first.
    #keepfiles: 7

  # Enable debug output for selected components. To enable all selectors use ["*"].
  # Other available selectors are beat, publish, service.
  # Multiple selectors can be chained.
  #selectors: [ ]

  # Sets the log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  #level: error
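
Note that YAML is indentation-sensitive: a mis-indented key or a stray tab character in this file makes Filebeat exit immediately at startup with a config parse error, which would match the status=1/FAILURE above. For comparison, a minimal sketch of just the active (uncommented) settings from this file:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/*.log
      input_type: log
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["localhost:5044"]

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB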

  • Check filebeat logs to see why the service failed to start. Also please post filebeat.yml

    – ben5556
    Nov 23 '18 at 8:09
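
    Per the Logging section of the config above, Filebeat 1.x logs to syslog by default on Linux, so assuming an Ubuntu-style syslog layout the startup error should appear with:

    dev@dev-Machine:~$ grep filebeat /var/log/syslog | tail -n 20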

  • Which log? Could you tell me the command? I already added filebeat.yml.

    – LisaAnabel
    Nov 23 '18 at 8:29

  • Please also include your filebeat version.

    – Alexandre Juma
    Nov 23 '18 at 9:55

  • filebeat version 1.3.1 (amd64)

    – LisaAnabel
    Nov 23 '18 at 10:05
















0















Kibana is trying to use filebeat for dashboard but it doesn't work. Can I fix this error? I added the error and filebeat.yml content. How to fix this? I can't see an error in filebeat.yml? I've done the necessary configurations, but I can't run. Filebeat- * command does not work when creating index pattern in kibana



filebeat version 1.3.1 (amd64)



dev@dev-Machine:~$ service filebeat status

filebeat.service - filebeat
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enable
Active: failed (Result: start-limit-hit) since Fri 2018-11-23 02:34:06 +03; 7h ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 822 ExecStart=/usr/bin/filebeat -c /etc/filebeat/filebeat.yml (code=exited,
Main PID: 822 (code=exited, status=1/FAILURE)

Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'exit-cod
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Service hold-off time over,
Nov 23 02:34:06 dev-Machine systemd[1]: Stopped filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Start request repeated too q
Nov 23 02:34:06 dev-Machine systemd[1]: Failed to start filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'start-li
lines 1-15/15 (END)


filebeat.yml



################### Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure not file is defined twice as this can lead to unexpected behaviour.
paths:
- /var/log/*.log
#- c:programdataelasticsearchlogs*

# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
input_type: log

# exclude_lines. By default, no lines are dropped.
# exclude_lines: ["^DBG"]

# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list. The include_lines is called before
# exclude_lines. By default, all the lines are exported.
# include_lines: ["^ERR", "^WARN"]

# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
# exclude_files: [".gz$"]

# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1

# fields.
#fields_under_root: false

# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#ignore_older: 0

# Close older closes the file handler for which were not modified
# for longer then close_older
# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#close_older: 1h

# Type to be published in the 'type' field. For Elasticsearch output,
# the type defines the document type these entries should be stored
# in. Default: log
#document_type: log

# to 0s, it is done as often as possible. Default: 10s
#scan_frequency: 10s

# Defines the buffer size every harvester uses when fetching the file
#harvester_buffer_size: 16384

# Maximum number of bytes a single log event can have
# All bytes after max_bytes are discarded and not sent. The default is 10MB.
# This is especially useful for multiline log messages which can get large.
#max_bytes: 10485760

# Mutiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
#multiline:

# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#pattern: ^[

# Defines if the pattern set under pattern should be negated or not. Default is false.
#negate: false

# Default is 500
#max_lines: 500

# Default is 5s.
#timeout: 5s

#tail_files: false

# Every time a new line appears, backoff is reset to the initial value.
#backoff: 1s
# file after having backed off multiple times, it takes a maximum of 10s to read the new line
#max_backoff: 10s

# The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
# the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
# The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
#backoff_factor: 2

# This option closes a file, as soon as the file name changes.
# This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
# issues when the file is removed, as the file will not be fully removed until also Filebeat closes
# the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
# same name can be created. Turning this feature on the other hand can lead to loss of data
# on rotate files. It can happen that after file rotation the beginning of the new
# file is skipped, as the reading starts at the end. We recommend to leave this option on false
# but lower the ignore_older value to release files faster.
#force_close_files: false

# Additional prospector
#-
# Configuration to use stdin input
#input_type: stdin

# General filebeat configuration options
#
# Event count spool threshold - forces network flush if exceeded
#spool_size: 2048

# Enable async publisher pipeline in filebeat (Experimental!)
#publish_async: false

# Defines how often the spooler is flushed. After idle_timeout the spooler is
# Flush even though spool_size is not reached.
#idle_timeout: 5s

# Name of the registry file. Per default it is put in the current working
# directory. In case the working directory is changed after when running
# filebeat again, indexing starts from the beginning again.
registry_file: /var/lib/filebeat/registry

# Full Path to directory with additional prospector configuration files. Each file must end with .yml
# These config files must have the full filebeat config part inside, but only
# the prospector part is processed. All global options like spool_size are ignored.
# The config_dir MUST point to a different directory then where the main filebeat config file is in.
#config_dir:

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

### Elasticsearch as output
# elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify and additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
# hosts: ["localhost:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "test"
#password: "test"

# Number of workers per Elasticsearch host.
#worker: 1

# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"

# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones
#template:

# Template name. By default the template name is filebeat.
#name: "filebeat"

# Path to template file
#path: "filebeat.template.json"

# Overwrite existing template
#overwrite: false

# Optional HTTP Path
#path: "/elasticsearch"

# Proxy server url
#proxy_url: http://proxy:3128

# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50

# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1

# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false

# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15

# tls configuration. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]

# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"

# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"

# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true

# Configure cipher suites to be used for TLS connections
#cipher_suites:

# Configure curve types for ECDHE based cipher suites
#curve_types:

# Configure minimum TLS version allowed for connection to logstash
#min_version: 1.0

# Configure maximum TLS version allowed for connection to logstash
#max_version: 1.2


### Logstash as output
logstash:
# The Logstash hosts
hosts: ["localhost:5044"]

# Number of workers per Logstash host.
#worker: 1

# The maximum number of events to bulk into a single batch window. The
# default is 2048.
#bulk_max_size: 2048

# Set gzip compression level.
#compression_level: 3

# Optional load balance the events between the Logstash hosts
#loadbalance: true

# Optional index name. The default index name depends on the each beat.
# For Packetbeat, the default is set to packetbeat, for Topbeat
# top topbeat and for Filebeat to filebeat.
#index: filebeat

# Optional TLS. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]

# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"

# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"

# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true

# Configure cipher suites to be used for TLS connections
#cipher_suites:

# Configure curve types for ECDHE based cipher suites
#curve_types:


### File as output
#file:
# Path to the directory where to save the generated files. The option is mandatory.
#path: "/tmp/filebeat"

# Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat

# Maximum size in kilobytes of each file. When this size is reached, the files are
# rotated. The default value is 10 MB.
#rotate_every_kb: 10000

# Maximum number of files under path. When this number of files is reached, the
# oldest file is deleted and the rest are shifted from last to first. The default
# is 7 files.
#number_of_files: 7


### Console output
# console:
# Pretty print json event
#pretty: false


############################# Shipper #########################################

shipper:
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this options is not defined, the hostname is used.
#name:

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]

# Uncomment the following if you want to ignore transactions created
# by the server on which the shipper is installed. This option is useful
# to remove duplicates if shippers are installed on multiple servers.
#ignore_outgoing: true

# How often (in seconds) shippers are publishing their IPs to the topology map.
# The default is 10 seconds.
#refresh_topology_freq: 10

# Expiration time (in seconds) of the IPs published by a shipper to the topology map.
# All the IPs will be deleted afterwards. Note, that the value must be higher than
# refresh_topology_freq. The default is 15 seconds.
#topology_expire: 15

# Internal queue size for single events in processing pipeline
#queue_size: 1000

# Configure local GeoIP database support.
# If no paths are not configured geoip is disabled.
#geoip:
#paths:
# - "/usr/share/GeoIP/GeoLiteCity.dat"
# - "/usr/local/var/GeoIP/GeoLiteCity.dat"


############################# Logging #########################################

# There are three options for the log ouput: syslog, file, stderr.
# Under Windos systems, the log files are per default sent to the file output,
# under all other system per default to syslog.
logging:

# Send all logging output to syslog. On Windows default is false, otherwise
# default is true.
#to_syslog: true

# Write all logging output to files. Beats automatically rotate files if rotateeverybytes
# limit is reached.
#to_files: false

# To enable logging to files, to_files option has to be set to true
files:
# The directory where the log files will written to.
#path: /var/log/mybeat

# The name of the files where the logs are written to.
#name: mybeat

# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
rotateeverybytes: 10485760 # = 10MB

# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7

# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are beat, publish, service
# Multiple selectors can be chained.
#selectors: [ ]

# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
#level: error









share|improve this question

























  • Check filebeat logs to see why the service failed to start. Also please post filebeat.yml

    – ben5556
    Nov 23 '18 at 8:09











  • Which log? Would you say the command? Al ı added filebeat.yml

    – LisaAnabel
    Nov 23 '18 at 8:29











  • Please also include your filebeat version.

    – Alexandre Juma
    Nov 23 '18 at 9:55











  • filebeat version 1.3.1 (amd64)

    – LisaAnabel
    Nov 23 '18 at 10:05














0












0








0








Kibana is trying to use filebeat for dashboard but it doesn't work. Can I fix this error? I added the error and filebeat.yml content. How to fix this? I can't see an error in filebeat.yml? I've done the necessary configurations, but I can't run. Filebeat- * command does not work when creating index pattern in kibana



filebeat version 1.3.1 (amd64)



dev@dev-Machine:~$ service filebeat status

filebeat.service - filebeat
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enable
Active: failed (Result: start-limit-hit) since Fri 2018-11-23 02:34:06 +03; 7h ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 822 ExecStart=/usr/bin/filebeat -c /etc/filebeat/filebeat.yml (code=exited,
Main PID: 822 (code=exited, status=1/FAILURE)

Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'exit-cod
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Service hold-off time over,
Nov 23 02:34:06 dev-Machine systemd[1]: Stopped filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Start request repeated too q
Nov 23 02:34:06 dev-Machine systemd[1]: Failed to start filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'start-li
lines 1-15/15 (END)


filebeat.yml



################### Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure not file is defined twice as this can lead to unexpected behaviour.
paths:
- /var/log/*.log
#- c:programdataelasticsearchlogs*

# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
input_type: log

# exclude_lines. By default, no lines are dropped.
# exclude_lines: ["^DBG"]

# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list. The include_lines is called before
# exclude_lines. By default, all the lines are exported.
# include_lines: ["^ERR", "^WARN"]

# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
# exclude_files: [".gz$"]

# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1

# fields.
#fields_under_root: false

# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#ignore_older: 0

# Close older closes the file handler for which were not modified
# for longer then close_older
# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#close_older: 1h

# Type to be published in the 'type' field. For Elasticsearch output,
# the type defines the document type these entries should be stored
# in. Default: log
#document_type: log

# to 0s, it is done as often as possible. Default: 10s
#scan_frequency: 10s

# Defines the buffer size every harvester uses when fetching the file
#harvester_buffer_size: 16384

# Maximum number of bytes a single log event can have
# All bytes after max_bytes are discarded and not sent. The default is 10MB.
# This is especially useful for multiline log messages which can get large.
#max_bytes: 10485760

# Mutiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
#multiline:

# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#pattern: ^[

# Defines if the pattern set under pattern should be negated or not. Default is false.
#negate: false

# Default is 500
#max_lines: 500

# Default is 5s.
#timeout: 5s

#tail_files: false

# Every time a new line appears, backoff is reset to the initial value.
#backoff: 1s
# file after having backed off multiple times, it takes a maximum of 10s to read the new line
#max_backoff: 10s

# The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
# the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
# The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
#backoff_factor: 2

# This option closes a file, as soon as the file name changes.
# This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
# issues when the file is removed, as the file will not be fully removed until also Filebeat closes
# the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
# same name can be created. Turning this feature on the other hand can lead to loss of data
# on rotate files. It can happen that after file rotation the beginning of the new
# file is skipped, as the reading starts at the end. We recommend to leave this option on false
# but lower the ignore_older value to release files faster.
#force_close_files: false

# Additional prospector
#-
# Configuration to use stdin input
#input_type: stdin

# General filebeat configuration options
#
# Event count spool threshold - forces network flush if exceeded
#spool_size: 2048

# Enable async publisher pipeline in filebeat (Experimental!)
#publish_async: false

# Defines how often the spooler is flushed. After idle_timeout the spooler is
# Flush even though spool_size is not reached.
#idle_timeout: 5s

# Name of the registry file. Per default it is put in the current working
# directory. In case the working directory is changed after when running
# filebeat again, indexing starts from the beginning again.
registry_file: /var/lib/filebeat/registry

# Full Path to directory with additional prospector configuration files. Each file must end with .yml
# These config files must have the full filebeat config part inside, but only
# the prospector part is processed. All global options like spool_size are ignored.
# The config_dir MUST point to a different directory then where the main filebeat config file is in.
#config_dir:

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

### Elasticsearch as output
# elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify and additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
# hosts: ["localhost:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "test"
#password: "test"

# Number of workers per Elasticsearch host.
#worker: 1

# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"

# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones
#template:

# Template name. By default the template name is filebeat.
#name: "filebeat"

# Path to template file
#path: "filebeat.template.json"

# Overwrite existing template
#overwrite: false

# Optional HTTP Path
#path: "/elasticsearch"

# Proxy server url
#proxy_url: http://proxy:3128

# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50

# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1

# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false

# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15

# tls configuration. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]

# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"

# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"

# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true

# Configure cipher suites to be used for TLS connections
#cipher_suites:

# Configure curve types for ECDHE based cipher suites
#curve_types:

# Configure minimum TLS version allowed for connection to logstash
#min_version: 1.0

# Configure maximum TLS version allowed for connection to logstash
#max_version: 1.2


### Logstash as output
logstash:
# The Logstash hosts
hosts: ["localhost:5044"]

# Number of workers per Logstash host.
#worker: 1

# The maximum number of events to bulk into a single batch window. The
# default is 2048.
#bulk_max_size: 2048

# Set gzip compression level.
#compression_level: 3

# Optional load balance the events between the Logstash hosts
#loadbalance: true

# Optional index name. The default index name depends on the each beat.
# For Packetbeat, the default is set to packetbeat, for Topbeat
# top topbeat and for Filebeat to filebeat.
#index: filebeat

# Optional TLS. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]

# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"

# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"

# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true

# Configure cipher suites to be used for TLS connections
#cipher_suites:

# Configure curve types for ECDHE based cipher suites
#curve_types:


### File as output
#file:
# Path to the directory where to save the generated files. The option is mandatory.
#path: "/tmp/filebeat"

# Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat

# Maximum size in kilobytes of each file. When this size is reached, the files are
# rotated. The default value is 10 MB.
#rotate_every_kb: 10000

# Maximum number of files under path. When this number of files is reached, the
# oldest file is deleted and the rest are shifted from last to first. The default
# is 7 files.
#number_of_files: 7


### Console output
# console:
# Pretty print json event
#pretty: false


############################# Shipper #########################################

shipper:
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this options is not defined, the hostname is used.
#name:

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]

# Uncomment the following if you want to ignore transactions created
# by the server on which the shipper is installed. This option is useful
# to remove duplicates if shippers are installed on multiple servers.
#ignore_outgoing: true

# How often (in seconds) shippers are publishing their IPs to the topology map.
# The default is 10 seconds.
#refresh_topology_freq: 10

# Expiration time (in seconds) of the IPs published by a shipper to the topology map.
# All the IPs will be deleted afterwards. Note, that the value must be higher than
# refresh_topology_freq. The default is 15 seconds.
#topology_expire: 15

# Internal queue size for single events in processing pipeline
#queue_size: 1000

# Configure local GeoIP database support.
# If no paths are not configured geoip is disabled.
#geoip:
#paths:
# - "/usr/share/GeoIP/GeoLiteCity.dat"
# - "/usr/local/var/GeoIP/GeoLiteCity.dat"


############################# Logging #########################################

# There are three options for the log ouput: syslog, file, stderr.
# Under Windos systems, the log files are per default sent to the file output,
# under all other system per default to syslog.
logging:

# Send all logging output to syslog. On Windows default is false, otherwise
# default is true.
#to_syslog: true

# Write all logging output to files. Beats automatically rotate files if rotateeverybytes
# limit is reached.
#to_files: false

# To enable logging to files, to_files option has to be set to true
files:
# The directory where the log files will written to.
#path: /var/log/mybeat

# The name of the files where the logs are written to.
#name: mybeat

# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
rotateeverybytes: 10485760 # = 10MB

# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7

# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are beat, publish, service
# Multiple selectors can be chained.
#selectors: [ ]

# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
#level: error









share|improve this question
















Kibana is trying to use filebeat for dashboard but it doesn't work. Can I fix this error? I added the error and filebeat.yml content. How to fix this? I can't see an error in filebeat.yml? I've done the necessary configurations, but I can't run. Filebeat- * command does not work when creating index pattern in kibana



filebeat version 1.3.1 (amd64)



dev@dev-Machine:~$ service filebeat status

filebeat.service - filebeat
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enable
Active: failed (Result: start-limit-hit) since Fri 2018-11-23 02:34:06 +03; 7h ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 822 ExecStart=/usr/bin/filebeat -c /etc/filebeat/filebeat.yml (code=exited,
Main PID: 822 (code=exited, status=1/FAILURE)

Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'exit-cod
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Service hold-off time over,
Nov 23 02:34:06 dev-Machine systemd[1]: Stopped filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Start request repeated too q
Nov 23 02:34:06 dev-Machine systemd[1]: Failed to start filebeat.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Unit entered failed state.
Nov 23 02:34:06 dev-Machine systemd[1]: filebeat.service: Failed with result 'start-li
lines 1-15/15 (END)


filebeat.yml



################### Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure not file is defined twice as this can lead to unexpected behaviour.
paths:
- /var/log/*.log
#- c:programdataelasticsearchlogs*

# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
input_type: log

# exclude_lines. By default, no lines are dropped.
# exclude_lines: ["^DBG"]

# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list. The include_lines is called before
# exclude_lines. By default, all the lines are exported.
# include_lines: ["^ERR", "^WARN"]

# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
# exclude_files: [".gz$"]

# Optional additional fields. These field can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1

# fields.
#fields_under_root: false

# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#ignore_older: 0

# Close older closes the file handler for which were not modified
# for longer then close_older
# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#close_older: 1h

# Type to be published in the 'type' field. For Elasticsearch output,
# the type defines the document type these entries should be stored
# in. Default: log
#document_type: log

# to 0s, it is done as often as possible. Default: 10s
#scan_frequency: 10s

# Defines the buffer size every harvester uses when fetching the file
#harvester_buffer_size: 16384

# Maximum number of bytes a single log event can have
# All bytes after max_bytes are discarded and not sent. The default is 10MB.
# This is especially useful for multiline log messages which can get large.
#max_bytes: 10485760

# Mutiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
#multiline:

# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#pattern: ^[

# Defines if the pattern set under pattern should be negated or not. Default is false.
#negate: false

# Default is 500
#max_lines: 500

# Default is 5s.
#timeout: 5s

#tail_files: false

# Every time a new line appears, backoff is reset to the initial value.
#backoff: 1s
# file after having backed off multiple times, it takes a maximum of 10s to read the new line
#max_backoff: 10s

# The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
# the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
# The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
#backoff_factor: 2

# This option closes a file, as soon as the file name changes.
# This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
# issues when the file is removed, as the file will not be fully removed until also Filebeat closes
# the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
# same name can be created. Turning this feature on the other hand can lead to loss of data
# on rotate files. It can happen that after file rotation the beginning of the new
# file is skipped, as the reading starts at the end. We recommend to leave this option on false
# but lower the ignore_older value to release files faster.
#force_close_files: false

# Additional prospector
#-
# Configuration to use stdin input
#input_type: stdin

# General filebeat configuration options
#
# Event count spool threshold - forces network flush if exceeded
#spool_size: 2048

# Enable async publisher pipeline in filebeat (Experimental!)
#publish_async: false

# Defines how often the spooler is flushed. After idle_timeout the spooler is
# Flush even though spool_size is not reached.
#idle_timeout: 5s

# Name of the registry file. Per default it is put in the current working
# directory. In case the working directory is changed after when running
# filebeat again, indexing starts from the beginning again.
registry_file: /var/lib/filebeat/registry

# Full Path to directory with additional prospector configuration files. Each file must end with .yml
# These config files must have the full filebeat config part inside, but only
# the prospector part is processed. All global options like spool_size are ignored.
# The config_dir MUST point to a different directory then where the main filebeat config file is in.
#config_dir:

###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

### Elasticsearch as output
# elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify and additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
# hosts: ["localhost:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "test"
#password: "test"

# Number of workers per Elasticsearch host.
#worker: 1

# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"

# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones
#template:

# Template name. By default the template name is filebeat.
#name: "filebeat"

# Path to template file
#path: "filebeat.template.json"

# Overwrite existing template
#overwrite: false

# Optional HTTP Path
#path: "/elasticsearch"

# Proxy server url
#proxy_url: http://proxy:3128

# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50

# Configure http request timeout before failing an request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1

# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false

# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15

# tls configuration. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]

# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"

# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"

# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true

# Configure cipher suites to be used for TLS connections
#cipher_suites:

# Configure curve types for ECDHE based cipher suites
#curve_types:

# Configure minimum TLS version allowed for connection to logstash
#min_version: 1.0

# Configure maximum TLS version allowed for connection to logstash
#max_version: 1.2


### Logstash as output
logstash:
# The Logstash hosts
hosts: ["localhost:5044"]

# Number of workers per Logstash host.
#worker: 1

# The maximum number of events to bulk into a single batch window. The
# default is 2048.
#bulk_max_size: 2048

# Set gzip compression level.
#compression_level: 3

# Optional load balance the events between the Logstash hosts
#loadbalance: true

# Optional index name. The default index name depends on the each beat.
# For Packetbeat, the default is set to packetbeat, for Topbeat
# top topbeat and for Filebeat to filebeat.
#index: filebeat

# Optional TLS. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]

# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"

# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"

# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true

# Configure cipher suites to be used for TLS connections
#cipher_suites:

# Configure curve types for ECDHE based cipher suites
#curve_types:


### File as output
#file:
# Path to the directory where to save the generated files. The option is mandatory.
#path: "/tmp/filebeat"

# Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat

# Maximum size in kilobytes of each file. When this size is reached, the files are
# rotated. The default value is 10 MB.
#rotate_every_kb: 10000

# Maximum number of files under path. When this number of files is reached, the
# oldest file is deleted and the rest are shifted from last to first. The default
# is 7 files.
#number_of_files: 7


### Console output
# console:
# Pretty print json event
#pretty: false


############################# Shipper #########################################

shipper:
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]

# Uncomment the following if you want to ignore transactions created
# by the server on which the shipper is installed. This option is useful
# to remove duplicates if shippers are installed on multiple servers.
#ignore_outgoing: true

# How often (in seconds) shippers are publishing their IPs to the topology map.
# The default is 10 seconds.
#refresh_topology_freq: 10

# Expiration time (in seconds) of the IPs published by a shipper to the topology map.
# All the IPs will be deleted afterwards. Note that the value must be higher than
# refresh_topology_freq. The default is 15 seconds.
#topology_expire: 15

# Internal queue size for single events in processing pipeline
#queue_size: 1000

# Configure local GeoIP database support.
# If no paths are configured, geoip is disabled.
#geoip:
#paths:
# - "/usr/share/GeoIP/GeoLiteCity.dat"
# - "/usr/local/var/GeoIP/GeoLiteCity.dat"


############################# Logging #########################################

# There are three options for the log output: syslog, file, stderr.
# Under Windows systems, the log files are sent to the file output by default;
# under all other systems they are sent to syslog by default.
logging:

# Send all logging output to syslog. On Windows default is false, otherwise
# default is true.
#to_syslog: true

# Write all logging output to files. Beats automatically rotate files if rotateeverybytes
# limit is reached.
#to_files: false

# To enable logging to files, the to_files option has to be set to true
files:
# The directory where the log files will be written to.
#path: /var/log/mybeat

# The name of the files where the logs are written to.
#name: mybeat

# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
rotateeverybytes: 10485760 # = 10MB

# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7

# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are beat, publish, service
# Multiple selectors can be chained.
#selectors: [ ]

# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
#level: error






elasticsearch kibana elastic-stack filebeat






edited Nov 23 '18 at 14:41 by koksalb
asked Nov 23 '18 at 7:51 by LisaAnabel
  • Check filebeat logs to see why the service failed to start. Also please post filebeat.yml

    – ben5556
    Nov 23 '18 at 8:09











  • Which log? Could you tell me the command? I also added filebeat.yml

    – LisaAnabel
    Nov 23 '18 at 8:29
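
    (One way to pull those startup logs — assuming the systemd unit shown in the status output above; these are standard journalctl/syslog invocations, not commands from the thread:)

    sudo journalctl -u filebeat.service -n 50 --no-pager
    grep filebeat /var/log/syslog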











  • Please also include your filebeat version.

    – Alexandre Juma
    Nov 23 '18 at 9:55











  • filebeat version 1.3.1 (amd64)

    – LisaAnabel
    Nov 23 '18 at 10:05




















1 Answer


















First of all, I guess you're using filebeat 1.x (which is a very old version of filebeat).



Cleaning up your configuration file, it seems that it is wrongly formatted.



Your current configuration:



filebeat:
prospectors:
paths:
- /var/log/*.log
input_type: log
registry_file: /var/lib/filebeat/registry
output:
logstash:
hosts: ["localhost:5044"]
shipper:
logging:
files:


I can see that you have wrong indentation and a missing prospector start dash ("-").



I tested this configuration with filebeat-1.3.1-x86_64 and it works.



Can you please try to update your configuration file to:



filebeat:
  prospectors:
    -
      input_type: log
      paths:
        - /var/log/*.log

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts:
      - "localhost:5044"





edited Nov 23 '18 at 10:27
answered Nov 23 '18 at 10:19 by Alexandre Juma

  • Can I completely delete filebeat.yml and change this?

    – LisaAnabel
    Nov 23 '18 at 10:27











  • Made a simple edit due to a copy/paste typo. You should be able to run the current posted configuration.

    – Alexandre Juma
    Nov 23 '18 at 10:27











  • Yes, you can override the content of the configuration file. All of that content is commented out, so it's not doing anything there. Just back up your current config file anyway (copy it to your home directory or something).

    – Alexandre Juma
    Nov 23 '18 at 10:28













  • Thank you so much my friend, the problem is solved!

    – LisaAnabel
    Nov 23 '18 at 10:30











  • No problem. In the future, with that filebeat version, you can test changes to your configuration file like this: [root@localhost filebeat-1.3.1-x86_64]# ./filebeat -configtest -c "/path/to/your/new_config_file.yml". If it doesn't output anything, your configuration file is OK. If it throws an error, you need to investigate that error.

    – Alexandre Juma
    Nov 23 '18 at 10:32











