mycat web install

Java environment installation

Install the Java environment first.

zookeeper environment installation

Install a zookeeper environment; a single node is enough.

mycat-web 安装

  1. mycat-web URL

    download: http://dl.mycat.io/mycat-web-1.0/
    download: http://dl.mycat.io/
  2. mycat-web installation

    [root@10 app]# tar xzf Mycat-web-1.0-SNAPSHOT-20170102153329-linux.tar.gz
    [root@10 app]# mv mycat-web /mnt/app/mycat-web
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/mycat-web
  3. Edit the mycat-web configuration file (see the ZooKeeper connectivity check after this section)

    [wisdom@10 ~]$ vim /mnt/app/mycat-web/mycat-web/WEB-INF/classes/mycat.properties
    zookeeper=10.0.1.89:2181
  4. Start mycat-web

    [root@10 app]# su - wisdom
    [wisdom@10 ~]$ cd /mnt/app/mycat-web/
    [wisdom@101 mycat-web]$ ./start.sh &
  5. Log in to mycat-web

    Open in a browser: http://1.1.1.1:8082/mycat/
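
Before starting mycat-web (step 4), it helps to confirm that the ZooKeeper endpoint configured in step 3 is actually reachable. A minimal check, assuming nc (netcat) is available on the host:

    //A healthy ZooKeeper answers "imok"
    [wisdom@10 ~]$ echo ruok | nc 10.0.1.89 2181
    //Prints server stats, including the mode (standalone/leader/follower)
    [wisdom@10 ~]$ echo stat | nc 10.0.1.89 2181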

logstash info

logstash information

Logstash is a lightweight log collection and processing framework. It makes it easy to gather scattered, heterogeneous logs, apply custom processing, and ship them to a destination such as a server or a file.

Logstash processes logs as a pipeline, similar to the *NIX shell pipeline "xxx | ccc | ddd" (xxx runs, then ccc, then ddd).
A Logstash pipeline has three stages: input --> filter (optional) --> output.
Each stage can be configured multiple ways: input{} may list several sources, filter{} several processing rules, and output{} several destinations.
Note: rules inside filter{} are applied in order (think top to bottom), and the official docs advise against using the same plugin repeatedly within a filter.

input: how log data gets into Logstash
filter: the intermediate processing component of the Logstash pipeline
output: the final component of the Logstash pipeline
codecs: stream-based filters that can be configured as part of an input or output; codecs make it easy to split and decode serialized data as it arrives

Common Logstash command-line parameters:
-f: specify the Logstash configuration file to run with
-t: test whether the configuration file is valid, then exit; usually combined with -f
-e: take a configuration string directly from the command line (with "" the default is stdin as input and stdout as output)
-l: where to write Logstash's own log (the default is stdout, printed to the console)
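
To make the three stages concrete, here is a minimal pipeline sketch plus the -f/-t usage described above (the file path and the added field are illustrative assumptions):

    # /tmp/simple.conf -- stdin -> filter -> stdout
    input {
      stdin { }                                      # read events from the console
    }
    filter {
      mutate { add_field => { "env" => "test" } }    # example rule; filter rules run top to bottom
    }
    output {
      stdout { codec => rubydebug }                  # pretty-print each processed event
    }

    bin/logstash -f /tmp/simple.conf -t    # validate the configuration, then exit
    bin/logstash -f /tmp/simple.conf       # run the pipeline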
  1. Starting logstash with multiple configuration files

    Example: there are two configuration files, 1.conf and 2.conf
    conf/1.conf
    conf/2.conf

    Option 1:
    bin/logstash -f ./config/*
    Note: started this way, only the first configuration file in the conf directory is loaded; the others do not take effect

    Option 2:
    bin/logstash -f ./config/
    Now all of the configuration files in the conf directory take effect

    In short: however many configuration files there are, logstash compiles them into one at startup; once running, no matter how many inputs or outputs are defined, there is only one pipeline
  2. Automatically reloading the logstash configuration

    bin/logstash -f apache.config --auto-reload   //checks every 3 seconds by default
    bin/logstash -f apache.config --auto-reload --reload-interval 60   //check the config every 60 seconds
    bin/logstash -f apache.config --auto-reload --reload-interval 60 -l /log/logstash.log   //also set where the logstash log is written
  3. logstash plugin operations

    //List plugins
    bin/logstash-plugin list
    bin/logstash-plugin list --verbose
    bin/logstash-plugin list '*namefragment*'
    bin/logstash-plugin list --group output

    //Install a plugin
    bin/logstash-plugin install logstash-output-kafka

    //Update plugins
    bin/logstash-plugin update
    bin/logstash-plugin update logstash-output-kafka

    //Remove a plugin
    bin/logstash-plugin uninstall logstash-output-kafka

    //Install a plugin through a proxy
    export HTTP_PROXY=http://127.0.0.1:3128
    bin/logstash-plugin install logstash-output-kafka
  4. Creating your own logstash plugin

    bin/logstash-plugin generate --type input --name xkcd --path ~/ws/elastic/plugins

    --type: Type of plugin - input, filter, output, or codec
    --name: Name for the new plugin
    --path: Directory path where the new plugin structure will be created. If not specified, it will be created in the current directory.

haproxy install

haproxy installation

  1. haproxy installation

    [root@localhost app]# useradd -s /sbin/nologin haproxy

    [root@localhost app]# tar xzf haproxy-1.7.2.tar.gz
    [root@localhost app]# cd haproxy-1.7.2
    [root@localhost haproxy-1.7.2]# make TARGET=generic PREFIX=/mnt/app/haproxy
    [root@localhost haproxy-1.7.2]# make install PREFIX=/mnt/app/haproxy
  2. haproxy environment variables

    [root@localhost haproxy-1.7.2]# echo 'export HAPROXY_HOME=/mnt/app/haproxy' | tee /etc/profile.d/haproxy.sh   
    [root@localhost haproxy-1.7.2]# echo 'export HAPROXY_BIN=${HAPROXY_HOME}/sbin' | tee -a /etc/profile.d/haproxy.sh
    [root@localhost haproxy-1.7.2]# echo 'export PATH=${HAPROXY_BIN}:$PATH' | tee -a /etc/profile.d/haproxy.sh
    [root@localhost haproxy-1.7.2]# source /etc/profile
  3. Create the haproxy configuration file

    [root@localhost haproxy-1.7.2]# mkdir -p /mnt/app/haproxy/conf
    [root@localhost haproxy-1.7.2]# touch /mnt/app/haproxy/conf/haproxy.cfg
  4. haproxy configuration

    [root@localhost app]# vim /mnt/app/haproxy/conf/haproxy.cfg
    global
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        stats socket /mnt/log/haproxy/haproxy.socket mode 770 level admin
        pidfile /mnt/log/haproxy/haproxy.pid
        maxconn 5000
        user haproxy
        group haproxy
        daemon

    defaults
        log global
        mode tcp
        option tcplog
        option dontlognull
        retries 3
        option redispatch
        timeout connect 5s
        timeout client 120s
        timeout server 120s

    listen haproxy_stats
        bind 0.0.0.0:8080
        stats refresh 30s
        stats uri /haproxy?stats
        stats realm Haproxy\ Manager
        stats auth admin:admin
        stats hide-version

    listen rabbitmq_admin
        bind 0.0.0.0:8090
        server rabbit228 192.168.18.228:15672
        server rabbit229 192.168.18.229:15672
        server rabbit230 192.168.18.230:15672

    listen rabbitmq_cluster
        bind 0.0.0.0:5672
        mode tcp
        option tcplog
        option clitcpka
        timeout client 3h
        timeout server 3h
        balance roundrobin
        server rabbit228 192.168.18.228:5672 check inter 5s rise 2 fall 3
        server rabbit229 192.168.18.229:5672 check inter 5s rise 2 fall 3
        server rabbit230 192.168.18.230:5672 check inter 5s rise 2 fall 3
  5. Start haproxy

    [root@localhost haproxy-1.7.2]# /mnt/app/haproxy/sbin/haproxy -f /mnt/app/haproxy/conf/haproxy.cfg
  6. Check haproxy status

    Open in a browser:
    http://192.168.18.223:8080/haproxy?stats
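
Before (re)starting haproxy in step 5, the configuration can be validated with haproxy's built-in check mode:

    //Validate the configuration file without starting the proxy
    [root@localhost haproxy-1.7.2]# /mnt/app/haproxy/sbin/haproxy -c -f /mnt/app/haproxy/conf/haproxy.cfg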

RabbitMQ install

Java installation

[root@localhost app]# tar xzf jdk-8u111-linux-x64.tar.gz
[root@localhost app]# mv jdk1.8.0_111 /mnt/app/java
[root@localhost app]# echo 'JAVA_HOME=/mnt/app/java' | tee /etc/profile.d/java.sh
[root@localhost app]# echo 'JRE_HOME=${JAVA_HOME}/jre' | tee -a /etc/profile.d/java.sh
[root@localhost app]# echo 'CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib' | tee -a /etc/profile.d/java.sh
[root@localhost app]# echo 'export PATH=${JAVA_HOME}/bin:$PATH' | tee -a /etc/profile.d/java.sh
[root@localhost app]# source /etc/profile
[root@localhost app]# java -version

Erlang installation

[root@localhost app]# tar xzf otp_src_19.2.tar.gz
[root@localhost app]# cd otp_src_19.2
[root@localhost otp_src_19.2]# ./configure --prefix=/mnt/app/erlang
[root@localhost otp_src_19.2]# make
[root@localhost otp_src_19.2]# make install

[root@localhost otp_src_19.2]# echo 'export ERLANG_HOME=/mnt/app/erlang' | tee /etc/profile.d/erlang.sh
[root@localhost otp_src_19.2]# echo 'export ERLANG_BIN=${ERLANG_HOME}/bin' | tee -a /etc/profile.d/erlang.sh
[root@localhost otp_src_19.2]# echo 'export PATH=${ERLANG_BIN}:$PATH' | tee -a /etc/profile.d/erlang.sh
[root@localhost otp_src_19.2]# source /etc/profile

RabbitMQ standard install

  1. RabbitMQ: set the hostname

    [root@localhost app]# echo rabbitmq188 | tee /etc/hostname
    [root@localhost app]# echo '192.168.13.188 rabbitmq188' |tee -a /etc/hosts
    [root@localhost app]# hostname rabbitmq188
    [root@localhost app]# $SHELL
  2. RabbitMQ installation

    [root@localhost app]# xz -d rabbitmq-server-generic-unix-3.6.10.tar.xz
    [root@localhost app]# tar xf rabbitmq-server-generic-unix-3.6.10.tar
    [root@localhost app]# mv rabbitmq_server-3.6.10 /mnt/app/rabbitmq
    [root@localhost app]# chown -R wisdom.wisdom /mnt/app/rabbitmq
  3. RabbitMQ environment variables

    [root@localhost app]# echo 'export RABBITMQ_HOME=/mnt/app/rabbitmq' |tee /etc/profile.d/rabbitmq.sh
    [root@localhost app]# echo 'export RABBITMQ_BIN=$RABBITMQ_HOME/sbin' |tee -a /etc/profile.d/rabbitmq.sh
    [root@localhost app]# echo 'export PATH=$RABBITMQ_BIN:$PATH' |tee -a /etc/profile.d/rabbitmq.sh
    [root@localhost app]# source /etc/profile
  4. RabbitMQ configuration files

    [wisdom@localhost ~]$ cat > /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq-env.conf <<EOF
    RABBITMQ_MNESIA_BASE=/mnt/data/rabbitmq/mnesia
    RABBITMQ_LOG_BASE=/mnt/log/rabbitmq
    EOF

    [wisdom@localhost ~]$ cat > /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq.config <<EOF
    [
    {rabbit,
    [
    {vm_memory_high_watermark, 0.6},
    {vm_memory_high_watermark_paging_ratio, 0.3},
    {disk_free_limit, "10GB"},
    {hipe_compile, true},
    {queue_index_embed_msgs_below, 4096}
    ]
    }
    ].
    EOF

    [root@localhost app]# mkdir -p /mnt/data/rabbitmq/mnesia
    [root@localhost app]# mkdir -p /mnt/log/rabbitmq
    [root@localhost app]# chown -R wisdom.wisdom /mnt/data/rabbitmq
    [root@localhost app]# chown -R wisdom.wisdom /mnt/log/rabbitmq
  5. Start RabbitMQ

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-server -detached
    or:
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-server &

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl stop
  6. RabbitMQ: install the rabbitmq_management plugin (web console)

    [root@localhost app]# su - wisdom
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-plugins list
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-plugins enable rabbitmq_management
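    //With the plugin enabled, the management web console listens on port 15672 by default,
    //e.g. http://<host>:15672 (log in with a user carrying the management or administrator tag)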
  7. RabbitMQ: create a vhost

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_vhosts
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl add_vhost /zabbix
  8. RabbitMQ high-availability policy

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_policy -p /zabbix ha-all "^" '{"ha-mode":"all"}'
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_policies -p /zabbix
  9. RabbitMQ: create users

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_users
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl delete_user guest

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl add_user zabbix zabbix123

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_user_tags zabbix administrator monitoring

    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_permissions
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_user_permissions zabbix
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_permissions -p /zabbix zabbix '.*' '.*' '.*'
  10. RabbitMQ user operations

    * vhost management
    //List vhosts
    [wisdom@rabbit188 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_vhosts

    //Create a vhost
    [wisdom@rabbit188 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl add_vhost /zabbix

    //Delete a vhost
    [wisdom@rabbit188 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl delete_vhost /zabbix


    * User management
    //Create a user
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl add_user xroot xroot123

    //Delete a user
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl delete_user guest

    //Change a password
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl change_password Username Newpassword

    //List users
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_users


    * User roles
    1. administrator
    Can log in to the management console (when the management plugin is enabled), view all information, and manage users and policies
    2. monitoring
    Can log in to the management console (when the management plugin is enabled) and view node information (process count, memory usage, disk usage, etc.)
    3. policymaker
    Can log in to the management console (when the management plugin is enabled) and manage policies, but cannot view node information
    4. management
    Can only log in to the management console (when the management plugin is enabled); cannot view node information or manage policies
    5. Other (no tag)
    Cannot log in to the management console; typically ordinary producers and consumers

    //Change a user's role
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_user_tags xroot administrator


    * User permissions
    //List permissions on a given vhost
    [wisdom@rabbit188 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_permissions -p /

    //List a given user's permissions
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_user_permissions xroot

    //Grant permissions to a user
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_permissions -p / xnroot '.*' '.*' '.*'
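    //The three patterns are regular expressions matched against resource names, in the order
    //configure / write / read; '.*' grants everything. For example, to limit a (hypothetical)
    //user to resources prefixed with "app-":
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_permissions -p / appuser '^app-.*' '^app-.*' '^app-.*'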

    //Clear a user's permissions (clear_permissions takes a username)
    [wisdom@localhost ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl clear_permissions -p / xroot
  11. rabbitmq tuning

    [wisdom@localhost ~]$ cat /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq.config
    [
    {rabbit,
    [
    {vm_memory_high_watermark, 0.6},
    {vm_memory_high_watermark_paging_ratio, 0.3},
    {disk_free_limit, "10GB"},
    {hipe_compile, true},
    {queue_index_embed_msgs_below, 4096}
    ]
    }
    ].

    [wisdom@localhost ~]$ cat /mnt/app/rabbitmq/sbin/rabbitmq-defaults
    PLUGINS_DIR="${RABBITMQ_HOME}/plugins"
    IO_THREAD_POOL_SIZE=16
  12. RabbitMQ startup script

    [wisdom@rabbit67 ~]$ cat /etc/init.d/rabbitmq
    #!/bin/bash
    STATUS=$1
    ROLE='rabbitmq'
    RBSERVER="/mnt/app/${ROLE}/sbin/rabbitmq-server"
    RBCTL="/mnt/app/${ROLE}/sbin/rabbitmqctl"

    if [ "$STATUS" == "start" ];then
        $RBSERVER &
    fi

    if [ "$STATUS" == "stop" ];then
        NUM=$(ps -ef|grep -w rabbitmq-server|grep -v grep|awk '{print $2}'|wc -l)
        if [ "$NUM" != 1 ];then
            echo "Please check $ROLE, There are $NUM processes."
            exit 0
        fi

        $RBCTL stop
        if [ $? == 0 ];then
            echo "$ROLE stopped successfully."
        else
            echo "$ROLE failed to stop."
        fi
    fi

    if [ "$STATUS" == "status" ];then
        NUM=$(ps -ef|grep -w rabbitmq-server|grep -v grep|awk '{print $2}'|wc -l)

        if [ "$NUM" == 0 ];then
            echo "$ROLE is stopped."
            exit 0
        fi

        if [ "$NUM" != 1 ];then
            echo "Please check $ROLE, There are $NUM processes."
        fi

        PID=$(ps -ef|grep -w rabbitmq-server|grep -v grep|awk '{print $2}')
        echo "$ROLE is running. PID: $PID"
    fi
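
Typical usage of the script, assuming it has been made executable:

    [root@rabbit67 ~]# chmod +x /etc/init.d/rabbitmq
    [wisdom@rabbit67 ~]$ /etc/init.d/rabbitmq start
    [wisdom@rabbit67 ~]$ /etc/init.d/rabbitmq status
    [wisdom@rabbit67 ~]$ /etc/init.d/rabbitmq stop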

RabbitMQ cluster install

  1. RabbitMQ installation

    [root@localhost app]# xz -d rabbitmq-server-generic-unix-3.6.6.tar.xz
    [root@localhost app]# tar xf rabbitmq-server-generic-unix-3.6.6.tar
    [root@localhost app]# mv rabbitmq_server-3.6.6 /mnt/app/rabbitmq
    [root@localhost app]# chown -R root.root /mnt/app/rabbitmq
  2. RabbitMQ: install the rabbitmq_management plugin

    [root@localhost app]# /mnt/app/rabbitmq/sbin/rabbitmq-plugins enable rabbitmq_management
  3. RabbitMQ environment variables

    [root@localhost app]# echo 'export RABBITMQ_HOME=/mnt/app/rabbitmq' | tee /etc/profile.d/rabbitmq.sh
    [root@localhost app]# echo 'export RABBITMQ_BIN=${RABBITMQ_HOME}/sbin' | tee -a /etc/profile.d/rabbitmq.sh
    [root@localhost app]# echo 'export PATH=${RABBITMQ_BIN}:$PATH' | tee -a /etc/profile.d/rabbitmq.sh
    [root@localhost app]# source /etc/profile
  4. RabbitMQ: create configuration files and directories (data + logs)

    [root@localhost app]# touch /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq-env.conf
    [root@localhost app]# touch /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq.config
    [root@localhost app]# chown -R wisdom.wisdom /mnt/app/rabbitmq/etc/

    [root@localhost app]# mkdir -p /mnt/{data,log}/rabbitmq
    [root@localhost app]# mkdir -p /mnt/data/rabbitmq/mnesia
    [root@localhost app]# chown -R wisdom.wisdom /mnt/{data,log}/rabbitmq
  5. RabbitMQ cluster hostname setup

    RabbitMQ cluster-1:
    [root@localhost app]# hostname rabbit228
    [root@localhost app]# echo 'rabbit228' |tee /etc/hostname
    [root@localhost app]# echo '192.168.18.228 rabbit228' |tee -a /etc/hosts
    [root@localhost app]# echo '192.168.18.229 rabbit229' |tee -a /etc/hosts
    [root@localhost app]# echo '192.168.18.230 rabbit230' |tee -a /etc/hosts

    RabbitMQ cluster-2:
    [root@localhost app]# hostname rabbit229
    [root@localhost app]# echo 'rabbit229' |tee /etc/hostname
    [root@localhost app]# echo '192.168.18.228 rabbit228' |tee -a /etc/hosts
    [root@localhost app]# echo '192.168.18.229 rabbit229' |tee -a /etc/hosts
    [root@localhost app]# echo '192.168.18.230 rabbit230' |tee -a /etc/hosts

    RabbitMQ cluster-3:
    [root@localhost app]# hostname rabbit230
    [root@localhost app]# echo 'rabbit230' |tee /etc/hostname
    [root@localhost app]# echo '192.168.18.228 rabbit228' |tee -a /etc/hosts
    [root@localhost app]# echo '192.168.18.229 rabbit229' |tee -a /etc/hosts
    [root@localhost app]# echo '192.168.18.230 rabbit230' |tee -a /etc/hosts
  6. RabbitMQ cluster configuration

    //cluster-X: (run on all three machines)
    [root@localhost app]# su - wisdom
    [wisdom@localhost ~]$ cat > /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq-env.conf <<EOF
    > RABBITMQ_NODE_IP_ADDRESS=
    > RABBITMQ_NODE_PORT=5672
    > RABBITMQ_DIST_PORT=25672
    > RABBITMQ_NODENAME=rabbit@\$HOSTNAME
    > RABBITMQ_MNESIA_BASE=/mnt/data/rabbitmq/mnesia
    > RABBITMQ_LOG_BASE=/mnt/log/rabbitmq
    > EOF

    [wisdom@rabbitX ~]$ cat > /mnt/app/rabbitmq/etc/rabbitmq/rabbitmq.config <<EOF
    > [
    > {rabbit,
    > [
    > ]},
    > {kernel,
    > [
    > ]},
    > {rabbitmq_management,
    > [
    > ]},
    > {rabbitmq_shovel,
    > [{shovels,
    > [
    > ]}
    > ]},
    > {rabbitmq_stomp,
    > [
    > ]},
    > {rabbitmq_mqtt,
    > [
    > ]},
    > {rabbitmq_amqp1_0,
    > [
    > ]},
    > {rabbitmq_auth_backend_ldap,
    > [
    > ]}
    > ].
    > EOF
  7. RabbitMQ: start all the services

    [wisdom@rabbitX ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-server &
    [wisdom@rabbitX ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl stop
    [wisdom@rabbitX ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl status
  8. RabbitMQ cluster: every machine joining the cluster must use the same Erlang cookie (pick any one of the three as the source)

    //Read the erlang.cookie on 228
    [wisdom@rabbit228 ~]$ cat ~/.erlang.cookie
    HBSJBDVPUCEITCQZQQDB

    //Copy the .erlang.cookie contents into ~/.erlang.cookie on 229 and 230
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl stop
    [wisdom@rabbit229 ~]$ chmod 600 ~/.erlang.cookie
    [wisdom@rabbit229 ~]$ echo HBSJBDVPUCEITCQZQQDB | tee ~/.erlang.cookie
    [wisdom@rabbit229 ~]$ chmod 400 ~/.erlang.cookie

    [wisdom@rabbit230 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl stop
    [wisdom@rabbit230 ~]$ chmod 600 ~/.erlang.cookie
    [wisdom@rabbit230 ~]$ echo HBSJBDVPUCEITCQZQQDB | tee ~/.erlang.cookie
    [wisdom@rabbit230 ~]$ chmod 400 ~/.erlang.cookie

    //After updating ~/.erlang.cookie, start the RabbitMQ services
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-server &
    [wisdom@rabbit230 ~]$ /mnt/app/rabbitmq/sbin/rabbitmq-server &

  9. RabbitMQ cluster: join node 229 to the 228 RabbitMQ cluster (disc storage)

    //Check the cluster status of node 228
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl -n rabbit@rabbit228 cluster_status
    Cluster status of node rabbit@rabbit228 ...
    [{nodes,[{disc,[rabbit@rabbit228,rabbit@rabbit229]}]},
    {running_nodes,[rabbit@rabbit228]},
    {cluster_name,<<"rabbit@rabbit228">>},
    {partitions,[]},
    {alarms,[{rabbit@rabbit228,[]}]}]

    //Stop the RabbitMQ application on node 229
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl stop_app

    //Reset node 229's metadata
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl -n rabbit@rabbit229 reset

    //Join node 229 to the 228 cluster
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl join_cluster rabbit@rabbit228

    //Start the RabbitMQ application on 229
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl start_app

    //Check the cluster status
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl -n rabbit@rabbit228 cluster_status
    Cluster status of node rabbit@rabbit228 ...
    [{nodes,[{disc,[rabbit@rabbit228,rabbit@rabbit229]}]},
    {running_nodes,[rabbit@rabbit228]},
    {cluster_name,<<"rabbit@rabbit228">>},
    {partitions,[]},
    {alarms,[{rabbit@rabbit228,[]}]}]
  10. RabbitMQ cluster: join node 230 to the 228 RabbitMQ cluster (ram storage)

    //Stop the RabbitMQ application on node 230
    [wisdom@rabbit230 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl stop_app

    //Reset node 230's metadata
    [wisdom@rabbit230 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl -n rabbit@rabbit230 reset

    //Join node 230 to the 228 cluster as a ram node
    [wisdom@rabbit230 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl join_cluster rabbit@rabbit228 --ram

    //Start the RabbitMQ application on 230
    [wisdom@rabbit230 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl start_app

    //Check the cluster status
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl -n rabbit@rabbit228 cluster_status
    Cluster status of node rabbit@rabbit228 ...
    [{nodes,[{disc,[rabbit@rabbit228,rabbit@rabbit229]},{ram,[rabbit@rabbit230]}]},
    {running_nodes,[rabbit@rabbit230,rabbit@rabbit229,rabbit@rabbit228]},
    {cluster_name,<<"rabbit@rabbit228">>},
    {partitions,[]},
    {alarms,[{rabbit@rabbit230,[]},{rabbit@rabbit229,[]},{rabbit@rabbit228,[]}]}]
  11. Mirror all queues in the cluster (run on any one machine)

    [wisdom@rabbit228 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
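    //Verify the policy took effect (any node works)
    [wisdom@rabbit228 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl list_policies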
  12. Set up users and permissions

    [wisdom@rabbit228 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl add_user test test123
    [wisdom@rabbit229 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_user_tags test administrator
    [wisdom@rabbit228 ~]$ /mnt/app/rabbitmq/sbin/rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
  13. Install haproxy as a reverse proxy

    //haproxy installation
    [root@localhost app]# useradd -s /sbin/nologin haproxy

    [root@localhost app]# tar xzf haproxy-1.7.2.tar.gz
    [root@localhost app]# cd haproxy-1.7.2
    [root@localhost haproxy-1.7.2]# make TARGET=generic PREFIX=/mnt/app/haproxy
    [root@localhost haproxy-1.7.2]# make install PREFIX=/mnt/app/haproxy

    //haproxy environment variables
    [root@localhost haproxy-1.7.2]# echo 'export HAPROXY_HOME=/mnt/app/haproxy' | tee /etc/profile.d/haproxy.sh
    [root@localhost haproxy-1.7.2]# echo 'export HAPROXY_BIN=${HAPROXY_HOME}/sbin' | tee -a /etc/profile.d/haproxy.sh
    [root@localhost haproxy-1.7.2]# echo 'export PATH=${HAPROXY_BIN}:$PATH' | tee -a /etc/profile.d/haproxy.sh
    [root@localhost haproxy-1.7.2]# source /etc/profile

    //haproxy: create the configuration file
    [root@localhost haproxy-1.7.2]# mkdir -p /mnt/app/haproxy/conf
    [root@localhost haproxy-1.7.2]# touch /mnt/app/haproxy/conf/haproxy.cfg

    //haproxy configuration
    [root@localhost app]# vim /mnt/app/haproxy/conf/haproxy.cfg
    global
        log 127.0.0.1 local0
        log 127.0.0.1 local1 notice
        stats socket /mnt/log/haproxy/haproxy.socket mode 770 level admin
        pidfile /mnt/log/haproxy/haproxy.pid
        maxconn 5000
        user haproxy
        group haproxy
        daemon

    defaults
        log global
        mode tcp
        option tcplog
        option dontlognull
        retries 3
        option redispatch
        timeout connect 5s
        timeout client 120s
        timeout server 120s

    listen haproxy_stats
        bind 0.0.0.0:8080
        stats refresh 30s
        stats uri /haproxy?stats
        stats realm Haproxy\ Manager
        stats auth admin:admin
        stats hide-version

    listen rabbitmq_admin
        bind 0.0.0.0:8090
        server rabbit228 192.168.18.228:15672
        server rabbit229 192.168.18.229:15672
        server rabbit230 192.168.18.230:15672

    listen rabbitmq_cluster
        bind 0.0.0.0:5672
        mode tcp
        option tcplog
        option clitcpka
        timeout client 3h
        timeout server 3h
        balance roundrobin
        server rabbit228 192.168.18.228:5672 check inter 5s rise 2 fall 3
        server rabbit229 192.168.18.229:5672 check inter 5s rise 2 fall 3
        server rabbit230 192.168.18.230:5672 check inter 5s rise 2 fall 3

    //Start haproxy
    [root@localhost haproxy-1.7.2]# /mnt/app/haproxy/sbin/haproxy -f /mnt/app/haproxy/conf/haproxy.cfg

    //Check RabbitMQ status through haproxy
    Open in a browser:
    http://192.168.18.223:8090

flume install

flume install

  1. flume installation

    [root@10 app]# tar xzf apache-flume-1.7.0-bin.tar.gz
    [root@10 app]# mv apache-flume-1.7.0-bin /mnt/app/flume
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/flume

    [root@10 app]# mkdir -p /mnt/{data,log}/flume
    [root@10 app]# chown -R wisdom.wisdom /mnt/{data,log}/flume
  2. flume environment configuration

    [root@10 app]# cp /mnt/app/flume/conf/{flume-env.sh,flume-env.sh.bak}
    [root@10 app]# cat > /mnt/app/flume/conf/flume-env.sh <<EOF
    > export JAVA_HOME=/mnt/app/java
    > export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
    > export JAVA_OPTS="$JAVA_OPTS -Dorg.apache.flume.log.rawdata=true -Dorg.apache.flume.log.printconfig=true "
    > FLUME_CLASSPATH="/home/flume/flume/lib"
    > EOF

  3. flume log4j configuration

    [root@10 ~]# cat /mnt/app/flume/conf/log4j.properties |grep -v ^#|grep -v ^$
    flume.root.logger=INFO,LOGFILE
    flume.log.dir=/mnt/log/flume //this is the key setting
    flume.log.file=flume.log
    log4j.logger.org.apache.flume.lifecycle = INFO
    log4j.logger.org.jboss = WARN
    log4j.logger.org.mortbay = INFO
    log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
    log4j.logger.org.apache.hadoop = INFO
    log4j.logger.org.apache.hadoop.hive = ERROR
    log4j.rootLogger=${flume.root.logger}
    log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
    log4j.appender.LOGFILE.MaxFileSize=100MB
    log4j.appender.LOGFILE.MaxBackupIndex=10
    log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
    log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
    log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
    log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
    log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
    log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
    log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
    log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
  4. flume configuration file parameters

    [root@10 app]# su - wisdom
    [wisdom@10 ~]$ vim /mnt/app/flume/conf/test.conf

    //Define the components
    producer.sources = s_test
    producer.channels = c_test
    producer.sinks = r_test

    //Define the channel
    producer.channels.c_test.type = file
    producer.channels.c_test.checkpointDir = /mnt/data/flume/test/filechannel/checkpointDir
    producer.channels.c_test.dataDirs = /mnt/data/flume/test/filechannel/dataDirs
    producer.channels.c_test.transactionCapacity = 40000
    producer.channels.c_test.capacity = 2000000
    producer.channels.c_test.maxFileSize = 2146435071
    producer.channels.c_test.minimumRequiredSpace = 524288000
    producer.channels.c_test.checkpointInterval = 20000

    //Define the sink: write to kafka
    producer.sinks.r_test.type = org.apache.flume.sink.kafka.KafkaSink
    producer.sinks.r_test.kafka.bootstrap.servers = 10.0.3.40:9092,10.0.3.41:9092,10.0.3.42:9092
    producer.sinks.r_test.kafka.topic = index-test
    producer.sinks.r_test.kafka.flumeBatchSize = 100
    producer.sinks.r_test.kafka.producer.acks = 1
    producer.sinks.r_test.kafka.producer.compression.type = snappy
    producer.sinks.r_test.kafka.producer.max.request.size = 10000000

    //Define the source
    producer.sources.s_test.type = TAILDIR
    producer.sources.s_test.filegroups = f1
    producer.sources.s_test.filegroups.f1 = /mnt/log/test/^test.log$
    producer.sources.s_test.positionFile = /mnt/data/flume/test/filesource/test.json

    //Bind the sources and sinks to the channel to form a pipeline
    producer.sinks.r_test.channel = c_test
    producer.sources.s_test.channels = c_test

  5. Run the flume agent

    [wisdom@10 ~]$ /mnt/app/flume/bin/flume-ng agent -n producer --conf /mnt/app/flume/conf -f /mnt/app/flume/conf/service.properties &

  6. Addendum

    /mnt/app/flume2es/bin/flume-ng agent -n producer -f /mnt/app/flume2es/conf/test2.properties --conf /mnt/app/flume2es/conf -Dflume.root.logger=debug,console

Reading data from kafka and writing it to ES with flume

Experience using flume:
* flume (apache-flume-1.7.0-bin.tar.gz) can read local log files and write them into kafka (kafka_2.11-0.9.0.0.tgz)
* using flume (apache-flume-1.7.0-bin.tar.gz) to read from kafka (kafka_2.11-0.9.0.0.tgz) and write into elasticsearch (elasticsearch-2.3.3.tar.gz) raises errors.
Workaround:
1. Unpack flume (apache-flume-1.7.0-bin.tar.gz)
2. Unpack elasticsearch (elasticsearch-2.3.3.tar.gz)
3. Unpack zookeeper (zookeeper-3.4.6.tar.gz)
4. Copy all the jars under the unpacked "elasticsearch-2.3.3/lib/" directory into "apache-flume-1.7.0-bin/lib"
5. Copy "zookeeper-3.4.6/zookeeper-3.4.6.jar" from the unpacked zookeeper into "apache-flume-1.7.0-bin/lib", deleting the existing "zookeeper-*jar"
6. Delete "guava-*.jar" and "jackson-core-*.jar" under "elasticsearch-2.3.3/lib/"
7. Download elasticsearch-sink2-1.0.jar (https://github.com/lucidfrontier45/ElasticsearchSink2/releases) and upload it into "apache-flume-1.7.0-bin/lib"

Alternatively:
if you are able to, you can try rewriting the elasticsearch-sink2.jar package yourself
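
A shell sketch of the jar shuffle above, assuming all three tarballs sit in the same working directory (one reading of steps 4-6: the conflicting jars are removed before the copy so they never land in flume's lib):

    [root@10 app]# tar xzf apache-flume-1.7.0-bin.tar.gz
    [root@10 app]# tar xzf elasticsearch-2.3.3.tar.gz
    [root@10 app]# tar xzf zookeeper-3.4.6.tar.gz

    //Drop the conflicting jars, then copy the ES client libs into flume's lib
    [root@10 app]# rm -f elasticsearch-2.3.3/lib/guava-*.jar elasticsearch-2.3.3/lib/jackson-core-*.jar
    [root@10 app]# cp elasticsearch-2.3.3/lib/*.jar apache-flume-1.7.0-bin/lib/

    //Replace flume's bundled zookeeper jar
    [root@10 app]# rm -f apache-flume-1.7.0-bin/lib/zookeeper-*.jar
    [root@10 app]# cp zookeeper-3.4.6/zookeeper-3.4.6.jar apache-flume-1.7.0-bin/lib/

    //Finally, place the downloaded elasticsearch-sink2-1.0.jar into apache-flume-1.7.0-bin/lib/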

logstash conf file

Collecting logs with logstash via log4j and writing them to ES

// log4j-4501
input {
  log4j {
    mode => "server"
    host => "10.0.3.41"
    port => 4501
  }
}
filter {
  urldecode {
    all_fields => true
  }
  mutate {
    remove_field => [ "tags","timestamp" ]
  }
}
output {
  elasticsearch {
    action => "index"
    index => "xxx-%{[application]}-%{+YYYY.MM}"
    hosts => ["10.0.3.40:9200","10.0.3.41:9200","10.0.3.42:9200"]
  }
}

//log4j-4502
input {
  log4j {
    mode => "server"
    host => "10.0.3.42"
    port => 4502
  }
}
filter {
  json {
    source => "message"
  }
  urldecode {
    all_fields => true
  }
  mutate {
    remove_field => [ "tags" ]
  }
}
output {
  elasticsearch {
    action => "index"
    index => "xxx-%{[application]}-%{+yyyy.MM}"
    document_type => "%{[key]}"
    hosts => ["10.0.3.40:9200","10.0.3.41:9200","10.0.3.42:9200"]
  }
}

logstash install

logstash install

  1. logstash installation

    [root@10 app]# tar xzf logstash-all-plugins-2.4.0.tar.gz
    [root@10 app]# mv logstash-2.4.0 /mnt/app/logstash
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/logstash

    [root@10 app]# mkdir /mnt/app/logstash/conf
    [root@10 app]# mkdir /mnt/log/logstash
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/logstash/conf
    [root@10 app]# chown -R wisdom.wisdom /mnt/log/logstash
  2. logstash configuration

    [root@10 app]# su - wisdom
    [wisdom@10 ~]$ cat >/mnt/app/logstash/conf/test.conf <<EOF
    > input { stdin { } }
    > output { stdout {} }
    > EOF
  3. logstash configuration check

    [wisdom@10 ~]$ /mnt/app/logstash/bin/logstash -f /mnt/app/logstash/conf/test.conf --configtest
    Configuration OK
  4. logstash test

    [wisdom@10 ~]$ /mnt/app/logstash/bin/logstash -f /mnt/app/logstash/conf/test.conf
    Settings: Default pipeline workers: 1
    Pipeline main started
    hello world    <= type a string here
    2016-10-12T09:16:18.058Z ubuntu hello world
  5. Set the heap size logstash starts with

    [wisdom@10 ~]$ vim /mnt/app/logstash/bin/logstash
    LS_HEAP_SIZE="8G"
  6. Run logstash in the background

    [wisdom@10 ~]$ /mnt/app/logstash/bin/logstash -f /mnt/app/logstash/conf/test.conf -l /mnt/log/logstash/test.log -w 8 -b 125 -u 5 --auto-reload --reload-interval 3 &

  • The --pipeline-workers or -w parameter determines how many threads to run for filter and output processing. If you find that events are backing up, or that the CPU is not saturated, consider increasing the value of this parameter to make better use of available processing power.
  • The --pipeline-batch-size or -b parameter defines the maximum number of events an individual worker thread collects before attempting to execute filters and outputs. Larger batch sizes are generally more efficient, but increase memory overhead.
  • The --pipeline-batch-delay option rarely needs to be tuned. Pipeline batch delay is the maximum amount of time in milliseconds that Logstash waits for new messages after receiving an event in the current pipeline worker thread. After this time elapses, Logstash begins to execute filters and outputs.

kibana install

kibana install

  1. kibana installation

    [root@10 ~]# cd /mnt/ops/app/
    [root@10 app]# tar xzf kibana-4.5.4-linux-x64.tar.gz
    [root@10 app]# mv kibana-4.5.4-linux-x64 /mnt/app/kibana
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/kibana
  2. kibana configuration file

    [root@10 app]# cp /mnt/app/kibana/config/{kibana.yml,kibana.yml.bak}
    [root@10 app]# vim /mnt/app/kibana/config/kibana.yml
    server.port: 5601
    server.host: "0.0.0.0"
    server.maxPayloadBytes: 1048576
    elasticsearch.url: "http://10.0.3.40:9200"
    elasticsearch.preserveHost: true
    kibana.index: ".kibana"
    kibana.defaultAppId: "discover"
    elasticsearch.pingTimeout: 1500
    elasticsearch.requestTimeout: 30000
    elasticsearch.shardTimeout: 0
    elasticsearch.startupTimeout: 5000
    pid.file: /mnt/app/kibana/kibana.pid
    logging.dest: stdout
    logging.silent: true
    ops.interval: 5000
  3. Start kibana

    [root@10 ~]# su - wisdom
    [wisdom@10 ~]$ /mnt/app/kibana/bin/kibana -c /mnt/app/kibana/config/kibana.yml &
  4. Check kibana

    //Open in a browser
    http://{IP}:5601

    Note: kibana's state follows elasticsearch: if elasticsearch runs into problems, the kibana web UI will show errors
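
    A quick check that kibana itself is up, independent of the browser (the /status page is built into kibana 4):

    //Returns the kibana status page when the server is up
    [wisdom@10 ~]$ curl -s http://localhost:5601/status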

grafana install

mysql install

MySQL is assumed to be installed already.

grafana install

  1. Create the grafana database in MySQL

    [root@10 ~]# /mnt/app/mysql/bin/mysql -S /mnt/data/mysql/mysql.sock
    mysql> create database grafana DEFAULT CHARACTER SET utf8;
    mysql> GRANT ALL ON grafana.* TO 'grafana'@'10.0.2.113' IDENTIFIED BY "grafana123";
    mysql> flush privileges;
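    //Verify the grant works (assumption: run from the grafana host, 10.0.2.113)
    [root@10 ~]# /mnt/app/mysql/bin/mysql -h 10.0.2.113 -ugrafana -pgrafana123 grafana -e 'select 1;'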
  2. grafana installation

    [root@10 ~]# cd /mnt/ops/app/
    [root@10 app]# tar xzf grafana-4.0.1-1480694114.linux-x64.tar.gz
    [root@10 app]# mv grafana-4.0.1-1480694114 /mnt/app/grafana
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/grafana

    [root@10 ~]# mkdir -p /mnt/{data,log}/grafana
    [root@10 ~]# chown -R wisdom.wisdom /mnt/{data,log}/grafana
  3. grafana configuration file

    [root@10 ~]# cp /mnt/app/grafana/conf/{defaults.ini,grafana.ini}
    [root@10 ~]# vim /mnt/app/grafana/conf/grafana.ini
    instance_name = 10.0.2.113
    [paths]
    data = /mnt/data/grafana
    logs = /mnt/log/grafana
    plugins = /mnt/app/grafana/data/plugins
    [database]
    type = mysql
    host = 10.0.2.113:3306
    name = grafana
    user = grafana
    password = grafana123
    url = mysql://grafana:grafana123@10.0.2.113:3306/grafana
    ssl_mode = disable
    ca_cert_path =
    client_key_path =
    client_cert_path =
    server_cert_name =
    path = grafana.db
    [session]
    provider = file
    Alternative: choose sqlite3 as the database instead
    [root@10 app]# mkdir /mnt/data/sqlite3
    [root@10 app]# chown -R wisdom.wisdom /mnt/data/sqlite3
    [database]
    type = sqlite3
    path = /mnt/data/sqlite3/grafana.db
    name = grafana
  4. Start grafana

    [root@10 ~]# su - wisdom
    [wisdom@10 ~]$ /mnt/app/grafana/bin/grafana-server -homepath /mnt/app/grafana/ -config /mnt/app/grafana/conf/grafana.ini -pidfile /mnt/data/grafana/grafana.pid &
  5. grafana login and configuration

    //Open in a browser to configure (default credentials admin:admin)
    http://101.200.45.206:3000/login

elasticsearch install

elasticsearch cluster install

  1. elasticsearch installation

    [root@10 ~]# cd /mnt/ops/app/
    [root@10 app]# tar xzf elasticsearch-2.3.3.tar.gz
    [root@10 app]# mv elasticsearch-2.3.3 /mnt/app/elasticsearch
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/elasticsearch

    [root@10 app]# mkdir -p /mnt/{data,log}/elasticsearch
    [root@10 app]# chown -R wisdom.wisdom /mnt/{data,log}/elasticsearch
  2. elasticsearch web plugin installation

    [root@10 app]# /mnt/app/elasticsearch/bin/plugin install mobz/elasticsearch-head
    [root@10 app]# chown -R wisdom.wisdom /mnt/app/elasticsearch
  3. elasticsearch cluster configuration

    [root@10 app]# cp /mnt/app/elasticsearch/config/{elasticsearch.yml,elasticsearch.yml.bak}

    //cluster-1:
    [root@10 app]# cat > /mnt/app/elasticsearch/config/elasticsearch.yml <<EOF
    > cluster.name: ssp
    > node.name: 10.0.3.40
    > path.data: /mnt/data/elasticsearch
    > path.repo: /mnt/ops/elasticsearch
    > path.logs: /mnt/log/elasticsearch
    > bootstrap.mlockall: true
    > network.host: 0.0.0.0
    > http.port: 9200
    > transport.tcp.port: 9300
    > transport.tcp.compress: true
    > gateway.recover_after_nodes: 2
    > node.max_local_storage_nodes: 1
    > action.destructive_requires_name: true
    > action.auto_create_index: true
    > action.disable_delete_all_indices: true
    > index.number_of_shards: 5
    > index.number_of_replicas: 1
    > discovery.zen.ping.multicast.enabled: false
    > discovery.zen.fd.ping_timeout: 100s
    > discovery.zen.ping.timeout: 100s
    > discovery.zen.minimum_master_nodes: 2
    > discovery.zen.ping.unicast.hosts: ["10.0.3.40:9300","10.0.3.41:9300","10.0.3.42:9300"]
    > EOF

    //cluster-2:
    [root@10 app]# cat > /mnt/app/elasticsearch/config/elasticsearch.yml <<EOF
    > cluster.name: ssp
    > node.name: 10.0.3.41
    > path.data: /mnt/data/elasticsearch
    > path.repo: /mnt/ops/elasticsearch
    > path.logs: /mnt/log/elasticsearch
    > bootstrap.mlockall: true
    > network.host: 0.0.0.0
    > http.port: 9200
    > transport.tcp.port: 9300
    > transport.tcp.compress: true
    > gateway.recover_after_nodes: 2
    > node.max_local_storage_nodes: 1
    > action.destructive_requires_name: true
    > action.auto_create_index: true
    > action.disable_delete_all_indices: true
    > index.number_of_shards: 5
    > index.number_of_replicas: 1
    > discovery.zen.ping.multicast.enabled: false
    > discovery.zen.fd.ping_timeout: 100s
    > discovery.zen.ping.timeout: 100s
    > discovery.zen.minimum_master_nodes: 2
    > discovery.zen.ping.unicast.hosts: ["10.0.3.40:9300","10.0.3.41:9300","10.0.3.42:9300"]
    > EOF

    //cluster-3:
    [root@10 app]# cat > /mnt/app/elasticsearch/config/elasticsearch.yml <<EOF
    > cluster.name: ssp
    > node.name: 10.0.3.42
    > path.data: /mnt/data/elasticsearch
    > path.repo: /mnt/ops/elasticsearch
    > path.logs: /mnt/log/elasticsearch
    > bootstrap.mlockall: true
    > network.host: 0.0.0.0
    > http.port: 9200
    > transport.tcp.port: 9300
    > transport.tcp.compress: true
    > gateway.recover_after_nodes: 2
    > node.max_local_storage_nodes: 1
    > action.destructive_requires_name: true
    > action.auto_create_index: true
    > action.disable_delete_all_indices: true
    > index.number_of_shards: 5
    > index.number_of_replicas: 1
    > discovery.zen.ping.multicast.enabled: false
    > discovery.zen.fd.ping_timeout: 100s
    > discovery.zen.ping.timeout: 100s
    > discovery.zen.minimum_master_nodes: 2
    > discovery.zen.ping.unicast.hosts: ["10.0.3.40:9300","10.0.3.41:9300","10.0.3.42:9300"]
    > EOF
  4. elasticsearch startup parameters

    [root@10 app]# vim /mnt/app/elasticsearch/bin/elasticsearch
    ES_HEAP_SIZE=4g

    Or:

    [root@10 app]# cat > /mnt/app/elasticsearch/config/jvm.options <<EOF
    > -Xms4g
    > -Xmx4g
    > -XX:+UseG1GC
    > -XX:+UseStringDeduplication
    > -XX:+DisableExplicitGC
    > -XX:+AlwaysPreTouch
    > -server
    > -Djava.awt.headless=true
    > -Dfile.encoding=UTF-8
    > -Djna.nosys=true
    > -Dio.netty.noUnsafe=true
    > -Dio.netty.noKeySetOptimization=true
    > -Dlog4j.shutdownHookEnabled=false
    > -Dlog4j2.disable.jmx=true
    > -Dlog4j.skipJansi=true
    > -XX:+HeapDumpOnOutOfMemoryError
    > EOF
  5. Start elasticsearch

    [root@10 app]# su - wisdom
    [wisdom@10 ~]$ /mnt/app/elasticsearch/bin/elasticsearch -d -p /mnt/data/elasticsearch/elasticsearch.pid
  6. Check the cluster

    //View the cluster in a browser
    http://{IP}:9200/_plugin/head

    //Or from the command line
    curl '10.0.3.40:9200/_cat/health?v'
    curl '10.0.3.40:9200/_cat/nodes?v'
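
    For a more detailed programmatic check, the standard cluster health API can be queried on any node:

    //"status" should be "green" once all shards are allocated
    curl '10.0.3.40:9200/_cluster/health?pretty'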