      Building an ELK Log Collection System for a Docker Cluster
      Published: 2016-12-21 08:06:59  Keywords: Docker, ELK, cluster logging
      Summary: To manage and inspect logs more easily in a Docker cluster, we use Docker to set up an ELK log collection system for the cluster. This article walks through building that system; readers who need it can use it as a reference.

    Once a Docker cluster is up, the next problem to solve is log collection. ELK provides a complete solution, and this article shows how to set up ELK with Docker to collect the logs of a Docker cluster.

    An Introduction to ELK

    ELK is made up of three open-source tools: Elasticsearch, Logstash, and Kibana.

    Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.

    Logstash is a fully open-source tool that collects and filters your logs and stores them for later use.

    Kibana is also a free, open-source tool. It provides a friendly web interface for analyzing the logs that Logstash and Elasticsearch handle, helping you aggregate, analyze, and search important log data.

    Building the ELK Platform with Docker

    First, let's write the Logstash configuration file, logstash.conf:

    input {
      udp {
        port => 5000
        type => json
      }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => "elasticsearch:9200" # send Logstash output to Elasticsearch; change this to your own host
      }
    }
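
    To sanity-check this input once the stack is up, you can send a single JSON datagram to it with netcat. This is a minimal sketch, assuming you run it on the Docker host and use the host port mapping from the Compose file below (host 5001 -> container 5000):

    # fire one JSON event at the Logstash UDP input
    echo '{"app":"test","msg":"hello from netcat"}' | nc -u -w1 localhost 5001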
    

    Next, we need to adjust how Kibana starts.

    Write a startup script that waits for Elasticsearch to come up before launching Kibana:

    #!/usr/bin/env bash

    # Wait for the Elasticsearch container to be ready before starting Kibana.
    echo "Stalling for Elasticsearch"
    while true; do
      nc -q 1 elasticsearch 9200 2>/dev/null && break
      sleep 1  # avoid a tight retry loop while Elasticsearch is still starting
    done

    echo "Starting Kibana"
    exec kibana
    
    

    Then write a Dockerfile that produces a custom Kibana image:

    FROM kibana:latest

    # netcat is used by the entrypoint script to probe Elasticsearch
    RUN apt-get update && apt-get install -y netcat

    COPY entrypoint.sh /tmp/entrypoint.sh
    RUN chmod +x /tmp/entrypoint.sh

    # install the Sense plugin (a developer console for Elasticsearch)
    RUN kibana plugin --install elastic/sense

    CMD ["/tmp/entrypoint.sh"]
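
    If you want to try the image on its own before wiring it into Compose (which builds it for us below), something like the following should work, assuming an Elasticsearch container named elasticsearch is already running; the my-kibana tag is just an example:

    # build the custom image from the kibana/ directory
    docker build -t my-kibana kibana/
    # run it linked to the running elasticsearch container
    docker run --rm --link elasticsearch:elasticsearch -p 5601:5601 my-kibana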
    
    

    You can also adjust Kibana's configuration file and choose the plugins you need:

    # Kibana is served by a back end server. This controls which port to use.
    port: 5601
    
    # The host to bind the server to.
    host: "0.0.0.0"
    
    # The Elasticsearch instance to use for all your queries.
    elasticsearch_url: "http://elasticsearch:9200"
    
    # preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
    # then the host you use to connect to *this* Kibana instance will be sent.
    elasticsearch_preserve_host: true
    
    # Kibana uses an index in Elasticsearch to store saved searches, visualizations
    # and dashboards. It will create a new index if it doesn't already exist.
    kibana_index: ".kibana"
    
    # If your Elasticsearch is protected with basic auth, these are the user credentials
    # used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
    # users will still need to authenticate with Elasticsearch (which is proxied through
    # the Kibana server)
    # kibana_elasticsearch_username: user
    # kibana_elasticsearch_password: pass
    
    # If your Elasticsearch requires client certificate and key
    # kibana_elasticsearch_client_crt: /path/to/your/client.crt
    # kibana_elasticsearch_client_key: /path/to/your/client.key
    
    # If you need to provide a CA certificate for your Elasticsearch instance, put
    # the path of the pem file here.
    # ca: /path/to/your/CA.pem
    
    # The default application to load.
    default_app_id: "discover"
    
    # Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
    # request_timeout setting
    # ping_timeout: 1500
    
    # Time in milliseconds to wait for responses from the back end or elasticsearch.
    # This must be > 0
    request_timeout: 300000
    
    # Time in milliseconds for Elasticsearch to wait for responses from shards.
    # Set to 0 to disable.
    shard_timeout: 0
    
    # Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
    # startup_timeout: 5000
    
    # Set to false to have a complete disregard for the validity of the SSL
    # certificate.
    verify_ssl: true
    
    # SSL for outgoing requests from the Kibana Server (PEM formatted)
    # ssl_key_file: /path/to/your/server.key
    # ssl_cert_file: /path/to/your/server.crt
    
    # Set the path to where you would like the process id file to be created.
    # pid_file: /var/run/kibana.pid
    
    # If you would like to send the log output to a file you can set the path below.
    # This will also turn off the STDOUT log output.
    log_file: ./kibana.log
    # Plugins that are included in the build, and no longer found in the plugins/ folder
    bundled_plugin_ids:
     - plugins/dashboard/index
     - plugins/discover/index
     - plugins/doc/index
     - plugins/kibana/index
     - plugins/markdown_vis/index
     - plugins/metric_vis/index
     - plugins/settings/index
     - plugins/table_vis/index
     - plugins/vis_types/index
     - plugins/visualize/index
    
    

    Now let's write a docker-compose.yml to tie the build together.

    Adjust the ports to your needs and change the configuration file paths to match your directory layout. The stack as a whole needs a fair amount of resources, so pick a reasonably well-equipped machine.

    elasticsearch:
      image: elasticsearch:latest
      command: elasticsearch -Des.network.host=0.0.0.0
      ports:
        - "9200:9200"
        - "9300:9300"
    logstash:
      image: logstash:latest
      command: logstash -f /etc/logstash/conf.d/logstash.conf
      volumes:
        - ./logstash/config:/etc/logstash/conf.d
      ports:
        - "5001:5000/udp"
      links:
        - elasticsearch
    kibana:
      build: kibana/
      volumes:
        - ./kibana/config/:/opt/kibana/config/
      ports:
        - "5601:5601"
      links:
        - elasticsearch
    
    With that in place, a single command starts the whole ELK stack:

    docker-compose up -d
    

    Visit port 5601 (the Kibana port configured earlier) to check whether everything started successfully.
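
    If you prefer the command line, a quick check of both services from the Docker host (assuming the default port mappings above) might look like this:

    # Elasticsearch should answer with its JSON version banner
    curl -s http://localhost:9200
    # Kibana should return HTTP 200 once it is up
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601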

    Collecting Docker Logs with logspout

    Next we use logspout to collect the Docker logs, customizing the logspout image to fit our needs.

    Write the module file, modules.go:

    package main

    // Blank imports register the logstash adapter and the UDP transport with logspout.
    import (
      _ "github.com/looplab/logspout-logstash"
      _ "github.com/gliderlabs/logspout/transports/udp"
    )
    
    

    Write the Dockerfile:

    # rebuild logspout with the custom module list above
    FROM gliderlabs/logspout:latest
    COPY ./modules.go /src/modules.go
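
    Rebuilding is a plain docker build in the directory holding these two files; the tag below simply matches the one used in the run command that follows:

    docker build -t jayqqaa12/logspout .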
    

    After rebuilding the image, run it on each node:

    docker run -d --name="logspout" \
        --volume=/var/run/docker.sock:/var/run/docker.sock \
        jayqqaa12/logspout logstash://your-logstash-address
    

    Now open Kibana and you should see the collected Docker logs.

    Note that containers must log to the console (stdout/stderr); only output written there can be collected.
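
    For a quick end-to-end test, a throwaway container that prints to stdout should show up in Kibana within a few seconds; the test-logger name is arbitrary:

    docker run -d --name test-logger alpine \
        sh -c 'while true; do echo "hello from test-logger"; sleep 5; done'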

    With that, the ELK log collection system for our Docker cluster is fully deployed.

    For a large cluster you would also need to turn Logstash and Elasticsearch themselves into clusters, but we'll save that for another installment.

    That's all for this article. We hope it helps with your learning, and we hope you'll keep supporting 乐码库.
