postgresql master slave info

PostgreSQL introduced master/slave streaming replication in 9.0. With streaming replication, the standby synchronizes data from the primary over a TCP stream.
In PostgreSQL terminology the master is called the primary and the slave is called the standby.
When setting up replication, make sure the PostgreSQL version, environment and so on are identical on both sides, otherwise strange problems can occur.

PG offers two standby modes:
1. hot standby: the standby can serve read-only queries
2. warm standby: the standby cannot serve read-only queries

PG offers two replication methods:
1. WAL archiving: archived WAL files are copied to the standby and replayed there
2. streaming: WAL is shipped to the standby and applied as soon as it is generated; streaming itself can be synchronous or asynchronous

PostgreSQL maintains WAL (write-ahead log) files in the pg_xlog subdirectory of the data directory. They record every change made to the database files, and this mechanism enables hot backups: back up the database files at the filesystem level and back up the corresponding WAL at the same time; even if the copied data blocks are inconsistent, replaying the WAL brings the backup back to a consistent state.
This is the basis of Point-in-Time Recovery (PITR). There are two ways to ship the WAL to another server:
1. WAL archiving (base-file shipping)
2. streaming replication
With the first method, a WAL segment is copied to the standby only after it has been completely written; in essence this is remote backup via cp, so the standby normally lags the primary by one WAL segment.
Streaming replication is the newer WAL-shipping method introduced in PostgreSQL 9.x: as soon as the primary generates WAL it is sent to the standby, so the replication lag is much lower than with the first method, which is why streaming replication is usually the preferred choice.
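A minimal configuration sketch for streaming replication (the host addresses, replication user and paths below are assumptions for illustration only; the parameter names follow the PostgreSQL 9.x series, where the standby still uses recovery.conf):

primary, postgresql.conf:
wal_level = hot_standby
max_wal_senders = 5

primary, pg_hba.conf (allow the standby host to connect for replication):
host replication repl 192.168.18.0/24 md5

standby, postgresql.conf:
hot_standby = on

standby, recovery.conf (placed in the standby's data directory):
standby_mode = 'on'
primary_conninfo = 'host=192.168.18.226 port=5432 user=repl password=xxx'

The standby's data directory is normally seeded from the primary first, e.g. with pg_basebackup, before it is started with this recovery.conf.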

redis error

1.install error
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required"
make[1]: *** [adlist.o] Error 1
make[1]: Leaving directory `/data0/src/redis-2.6.2/src'
make: *** [all] Error 2


Resolution:
make MALLOC=libc


Cause:
Allocator
---------
Selecting a non-default memory allocator when building Redis is done by setting
the `MALLOC` environment variable. Redis is compiled and linked against libc
malloc by default, with the exception of jemalloc being the default on Linux
systems. This default was picked because jemalloc has proven to have fewer
fragmentation problems than libc malloc.
To force compiling against libc malloc, use:
% make MALLOC=libc
To compile against jemalloc on Mac OS X systems, use:
% make MALLOC=jemalloc
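A note from experience (general Redis build behavior, not specific to this version): the jemalloc header error also shows up when a previous build was interrupted and the bundled deps/ tree is stale; cleaning and rebuilding is an alternative to forcing libc malloc:
make distclean    # also cleans the bundled dependencies under deps/
make              # jemalloc is rebuilt from deps/ on Linux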

redis install

redis standard install

[root@localhost app]# echo "net.core.somaxconn = 65535" | tee -a /etc/sysctl.conf
[root@localhost app]# echo "vm.overcommit_memory = 1" | tee -a /etc/sysctl.conf
[root@localhost app]# sysctl -p

[root@localhost app]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@localhost app]# cat >> /etc/rc.local <<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled;then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
EOF
[root@localhost app]# chmod +x /etc/rc.d/rc.local
[root@localhost app]# tar xzf redis-3.2.6.tar.gz
[root@localhost app]# cd redis-3.2.6/
[root@localhost redis-3.2.6]# make MALLOC=libc PREFIX=/mnt/app/redis install
[root@localhost redis-3.2.6]# echo 'export REDIS_HOME=/mnt/app/redis'|tee /etc/profile.d/redis.sh
[root@localhost redis-3.2.6]# echo 'export REDIS_BIN=$REDIS_HOME/bin'|tee -a /etc/profile.d/redis.sh
[root@localhost redis-3.2.6]# echo 'export PATH=$REDIS_BIN:$PATH'|tee -a /etc/profile.d/redis.sh
[root@localhost redis-3.2.6]# source /etc/profile
[root@localhost redis-3.2.6]# mkdir -p /mnt/app/redis/conf
[root@localhost redis-3.2.6]# mkdir -p /mnt/data/redis
[root@localhost redis-3.2.6]# mkdir -p /mnt/log/redis
[root@localhost redis-3.2.6]# cp redis.conf /mnt/app/redis/conf/redis.conf
[root@localhost redis-3.2.6]# chown -R wisdom.wisdom /mnt/app/redis
[root@localhost redis-3.2.6]# chown -R wisdom.wisdom /mnt/data/redis
[root@localhost redis-3.2.6]# su - wisdom
[wisdom@localhost ~]$ vim /mnt/app/redis/conf/redis.conf
[wisdom@localhost ~]$ /mnt/app/redis/bin/redis-server /mnt/app/redis/conf/redis.conf
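A quick check that the instance is up (assuming the default port 6379 was kept in redis.conf):
[wisdom@localhost ~]$ /mnt/app/redis/bin/redis-cli -p 6379 ping
PONG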

redis master slave install

[root@localhost app]# tar xzf redis-3.2.8.tar.gz
[root@localhost app]# cd redis-3.2.8
[root@localhost redis-3.2.8]# make MALLOC=libc PREFIX=/mnt/app/redis install
[root@localhost redis-3.2.8]# echo 'export REDIS_HOME=/mnt/app/redis' | tee /etc/profile.d/redis.sh
[root@localhost redis-3.2.8]# echo 'export REDIS_BIN=$REDIS_HOME/bin' | tee -a /etc/profile.d/redis.sh
[root@localhost redis-3.2.8]# echo 'export PATH=$REDIS_BIN:$PATH' | tee -a /etc/profile.d/redis.sh
[root@localhost redis-3.2.8]# source /etc/profile
[root@localhost redis-3.2.8]# mkdir -p /mnt/app/redis/conf/6379
[root@localhost redis-3.2.8]# cp redis.conf /mnt/app/redis/conf/6379/
redis-master:
[root@localhost redis-3.2.8]# mkdir -p /mnt/{data,log}/redis/6379
[root@localhost redis-3.2.8]# chown -R wisdom.wisdom /mnt/{data,log}/redis/6379

[root@localhost redis-3.2.8]# vim /mnt/app/redis/conf/6379/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 2048
unixsocket /mnt/data/redis/6379/redis.sock
unixsocketperm 700
timeout 120
tcp-keepalive 60
daemonize yes
supervised no
pidfile /mnt/data/redis/6379/redis.pid
loglevel notice
logfile "/mnt/log/redis/6379/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename 6379.rdb
dir /mnt/data/redis/6379
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-ping-slave-period 10
repl-timeout 60
repl-disable-tcp-nodelay no
repl-backlog-size 200mb
repl-backlog-ttl 3600
slave-priority 100
maxclients 10000
appendonly yes
appendfilename "6379.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 4096mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 1024
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

[root@localhost redis-3.2.8]# chown -R wisdom.wisdom /mnt/app/redis/conf

[root@localhost redis-3.2.8]# su - wisdom
[wisdom@localhost ~]$ /mnt/app/redis/bin/redis-server /mnt/app/redis/conf/6379/redis.conf
redis-slave:
[root@localhost redis-3.2.8]# mkdir -p /mnt/{data,log}/redis/6379
[root@localhost redis-3.2.8]# chown -R wisdom.wisdom /mnt/{data,log}/redis/6379

[root@localhost redis-3.2.8]# vim /mnt/app/redis/conf/6379/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 2048
unixsocket /mnt/data/redis/6379/redis.sock
unixsocketperm 700
timeout 120
tcp-keepalive 60
daemonize yes
supervised no
pidfile /mnt/data/redis/6379/redis.pid
loglevel notice
logfile "/mnt/log/redis/6379/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename 6379.rdb
dir /mnt/data/redis/6379
slaveof 192.168.18.226 6379
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-ping-slave-period 10
repl-timeout 60
repl-disable-tcp-nodelay no
repl-backlog-size 200mb
repl-backlog-ttl 3600
slave-priority 100
maxclients 10000
appendonly yes
appendfilename "6379.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 4096mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 1024
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

[root@localhost redis-3.2.8]# chown -R wisdom.wisdom /mnt/app/redis/conf

[root@localhost redis-3.2.8]# su - wisdom
[wisdom@localhost ~]$ /mnt/app/redis/bin/redis-server /mnt/app/redis/conf/6379/redis.conf
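With both instances running, replication can be verified from either side (192.168.18.226 is the master address configured above; substitute the slave's address for the second command):
[wisdom@localhost ~]$ /mnt/app/redis/bin/redis-cli -h 192.168.18.226 -p 6379 info replication
[wisdom@localhost ~]$ /mnt/app/redis/bin/redis-cli -h <slave-ip> -p 6379 info replication
The master should report role:master and connected_slaves:1; the slave should report role:slave with master_link_status:up.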

redis cluster install

[root@localhost app]# tar xzf redis-3.2.6.tar.gz
[root@localhost app]# cd redis-3.2.6/
[root@localhost redis-3.2.6]# make MALLOC=libc PREFIX=/mnt/app/redis install
[root@localhost redis-3.2.6]# echo 'export REDIS_HOME=/mnt/app/redis'|tee /etc/profile.d/redis.sh
[root@localhost redis-3.2.6]# echo 'export REDIS_BIN=$REDIS_HOME/bin'|tee -a /etc/profile.d/redis.sh
[root@localhost redis-3.2.6]# echo 'export PATH=$REDIS_BIN:$PATH'|tee -a /etc/profile.d/redis.sh
[root@localhost redis-3.2.6]# source /etc/profile
[root@localhost redis-3.2.6]# mkdir -p /mnt/app/redis/conf/{7001,7002,7003,7004,7005,7006}
[root@localhost redis-3.2.6]# mkdir -p /mnt/data/redis/{7001,7002,7003,7004,7005,7006}
[root@localhost redis-3.2.6]# mkdir -p /mnt/log/redis/{7001,7002,7003,7004,7005,7006}
[root@localhost redis-3.2.6]# for p in 7001 7002 7003 7004 7005 7006; do cp redis.conf /mnt/app/redis/conf/$p/redis.conf; done

[root@localhost redis-3.2.6]# chown -R wisdom.wisdom /mnt/app/redis/conf
[root@localhost redis-3.2.6]# chown -R wisdom.wisdom /mnt/data/redis
[root@localhost redis-3.2.6]# chown -R wisdom.wisdom /mnt/log/redis
[root@localhost redis-3.2.6]# vim /mnt/app/redis/conf/7001/redis.conf
port 7001
cluster-enabled yes
cluster-config-file /mnt/data/redis/7001/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /mnt/data/redis/7001
unixsocket /mnt/data/redis/7001/redis.sock
pidfile /mnt/data/redis/7001/redis.pid
logfile /mnt/log/redis/7001/redis.log

[root@localhost redis-3.2.6]# vim /mnt/app/redis/conf/7002/redis.conf
port 7002
cluster-enabled yes
cluster-config-file /mnt/data/redis/7002/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /mnt/data/redis/7002
unixsocket /mnt/data/redis/7002/redis.sock
pidfile /mnt/data/redis/7002/redis.pid
logfile /mnt/log/redis/7002/redis.log

[root@localhost redis-3.2.6]# vim /mnt/app/redis/conf/7003/redis.conf
port 7003
cluster-enabled yes
cluster-config-file /mnt/data/redis/7003/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /mnt/data/redis/7003
unixsocket /mnt/data/redis/7003/redis.sock
pidfile /mnt/data/redis/7003/redis.pid
logfile /mnt/log/redis/7003/redis.log

[root@localhost redis-3.2.6]# vim /mnt/app/redis/conf/7004/redis.conf
port 7004
cluster-enabled yes
cluster-config-file /mnt/data/redis/7004/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /mnt/data/redis/7004
unixsocket /mnt/data/redis/7004/redis.sock
pidfile /mnt/data/redis/7004/redis.pid
logfile /mnt/log/redis/7004/redis.log

[root@localhost redis-3.2.6]# vim /mnt/app/redis/conf/7005/redis.conf
port 7005
cluster-enabled yes
cluster-config-file /mnt/data/redis/7005/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /mnt/data/redis/7005
unixsocket /mnt/data/redis/7005/redis.sock
pidfile /mnt/data/redis/7005/redis.pid
logfile /mnt/log/redis/7005/redis.log

[root@localhost redis-3.2.6]# vim /mnt/app/redis/conf/7006/redis.conf
port 7006
cluster-enabled yes
cluster-config-file /mnt/data/redis/7006/nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /mnt/data/redis/7006
unixsocket /mnt/data/redis/7006/redis.sock
pidfile /mnt/data/redis/7006/redis.pid
logfile /mnt/log/redis/7006/redis.log
[root@localhost redis-3.2.6]# su - wisdom
[wisdom@localhost ~]$ for p in 7001 7002 7003 7004 7005 7006; do /mnt/app/redis/bin/redis-server /mnt/app/redis/conf/$p/redis.conf & done
[root@localhost app]# yum -y install ruby rubygem-redis
[root@localhost app]# tar xzf redis-3.2.6.tar.gz
[root@localhost app]# cd redis-3.2.6/src
[root@localhost src]# ./redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006

[root@localhost src]# ./redis-trib.rb check 127.0.0.1:7001
[root@localhost src]# ./redis-trib.rb info 127.0.0.1:7001
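A quick functional test of the new cluster; the -c flag makes redis-cli follow MOVED redirections between nodes:
[root@localhost src]# /mnt/app/redis/bin/redis-cli -c -p 7001 cluster info
[root@localhost src]# /mnt/app/redis/bin/redis-cli -c -p 7001 set foo bar
[root@localhost src]# /mnt/app/redis/bin/redis-cli -c -p 7001 get foo
cluster info should report cluster_state:ok, and the set/get pair may be redirected to whichever node owns the key's hash slot.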

zookeeper use

The ZooKeeper server persists two kinds of data: transactions and snapshots.

The log directory (dataLogDir) stores the transaction commands and the data directory (dataDir) stores the snapshots. Each contains a subdirectory named version-2, whose files are named log.<zxid> and snapshot.<lastProcessedZxid> respectively, and each directory may hold many such files. In a transaction log file name, the zxid is the smallest zxid of all commands in that file; in a snapshot name, lastProcessedZxid is the zxid of the last operation applied, which is normally the largest one.

The transaction log records the commands being executed together with their parameters and can be thought of as the dynamic part; a snapshot records ZooKeeper's static in-memory data structures: the ACLs, the znode tree, and the session/ephemeral-node mapping.

For example:
penn@ubuntu:/mnt/app/zookeeper.1/bin$ ls -l /mnt/data/zookeeper.1/version-2/
total 12
-rw-rw-r-- 1 penn penn 1 Nov 3 14:12 acceptedEpoch
-rw-rw-r-- 1 penn penn 1 Nov 3 14:12 currentEpoch
-rw-rw-r-- 1 penn penn 296 Nov 3 14:12 snapshot.0

penn@ubuntu:/mnt/app/zookeeper.1/bin$ ls -l /mnt/log/zookeeper.1/version-2/
total 8
-rw-rw-r-- 1 penn penn 67108880 Nov 3 16:49 log.100000001

Viewing log files:
The log files are stored in a binary format; their contents can be inspected with the following command:
penn@ubuntu:/mnt/app/zookeeper.1$ java -cp ./zookeeper-3.4.9.jar:./lib/log4j-1.2.16.jar:./lib/slf4j-log4j12-1.6.1.jar:./lib/slf4j-api-1.6.1.jar org.apache.zookeeper.server.LogFormatter /mnt/log/zookeeper.1/version-2/log.100000001

Snapshot files:
In memory ZooKeeper stores its data in a DataTree structure. At intervals ZooKeeper serializes the entire DataTree and writes it to disk; that is the snapshot file. Because snapshots are taken at intervals, the data in a snapshot is usually not fully up to date. The interval is configurable: the snapCount option sets how many transaction requests are processed before a new snapshot is written. Like the transaction log files, snapshot files use a ZXID as their suffix. The file is created in the save method of the FileTxnSnapLog class, which calls FileSnap to serialize the DataTree and write it into the snapshot file.

Viewing snapshot files:
penn@ubuntu:/mnt/app/zookeeper.1$ java -cp ./zookeeper-3.4.9.jar:./lib/log4j-1.2.16.jar:./lib/slf4j-log4j12-1.6.1.jar:./lib/slf4j-api-1.6.1.jar org.apache.zookeeper.server.SnapshotFormatter /mnt/data/zookeeper.1/version-2/snapshot.0

zookeeper basic operations

  1. Check the current ZK role

    penn@ubuntu:~$ cd /mnt/app/zookeeper.1/bin/
    penn@ubuntu:/mnt/app/zookeeper.1/bin$ ./zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /mnt/app/zookeeper.1/bin/../conf/zoo.cfg
    Mode: follower
  2. Log in to ZK

    penn@ubuntu:/mnt/app/zookeeper.1/bin$ ./zkCli.sh -server 10.0.2.15:2181
  3. View the built-in help

    help
    ZooKeeper -server host:port cmd args
    stat path [watch]
    set path data [version]
    ls path [watch]
    delquota [-n|-b] path
    ls2 path [watch]
    setAcl path acl
    setquota -n|-b val path
    history
    redo cmdno
    printwatches on|off
    delete path [version]
    sync path
    listquota path
    rmr path
    get path [watch]
    create [-s] [-e] path data acl
    addauth scheme auth
    quit
    getAcl path
    close
    connect host:port
  4. Query

    [zk: 10.0.2.15:2181(CONNECTED) 0] ls /
    [zookeeper]
  5. Create the znode "zk" with the associated string "MyData"

    [zk: 10.0.2.15:2181(CONNECTED) 1] create /zk "MyData"
    Created /zk
    [zk: 10.0.2.15:2181(CONNECTED) 2] ls /
    [zk, zookeeper]
    //Note: if no data string is given, a new znode is not created
    [zk: 10.0.2.15:2181(CONNECTED) 3] create /test
    [zk: 10.0.2.15:2181(CONNECTED) 4] ls /
    [zk, zookeeper]
  6. Check that the stored string is there

    [zk: 10.0.2.15:2181(CONNECTED) 5] get /zk
    MyData
    cZxid = 0x100000006
    ctime = Thu Nov 03 14:46:39 CST 2016
    mZxid = 0x100000006
    mtime = Thu Nov 03 14:46:39 CST 2016
    pZxid = 0x100000006
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0
  7. Update the associated string

    [zk: 10.0.2.15:2181(CONNECTED) 6] set /zk "zsl"
    cZxid = 0x100000006
    ctime = Thu Nov 03 14:46:39 CST 2016
    mZxid = 0x100000007
    mtime = Thu Nov 03 14:51:12 CST 2016
    pZxid = 0x100000006
    cversion = 0
    dataVersion = 1
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 3
    numChildren = 0

    [zk: 10.0.2.15:2181(CONNECTED) 8] get /zk
    zsl
    cZxid = 0x100000006
    ctime = Thu Nov 03 14:46:39 CST 2016
    mZxid = 0x100000007
    mtime = Thu Nov 03 14:51:12 CST 2016
    pZxid = 0x100000006
    cversion = 0
    dataVersion = 1
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 3
    numChildren = 0
  8. Delete the znode

    [zk: 10.0.2.15:2181(CONNECTED) 9] delete /zk
    [zk: 10.0.2.15:2181(CONNECTED) 10] ls /
    [zookeeper]

zookeeper advanced operations

[zk: 10.0.2.15:2181(CONNECTED) 13] create /zk "test"
Created /zk
[zk: 10.0.2.15:2181(CONNECTED) 14] ls /
[zk, zookeeper]

[zk: 10.0.2.15:2181(CONNECTED) 15] create /zk/n1 "n1"
Created /zk/n1
[zk: 10.0.2.15:2181(CONNECTED) 16] ls /zk
[n1]
[zk: 10.0.2.15:2181(CONNECTED) 17] ls /zk/n1
[]

//if the znode has child nodes, delete fails; use the rmr command instead
[zk: 10.0.2.15:2181(CONNECTED) 18] delete /zk
Node not empty: /zk
[zk: 10.0.2.15:2181(CONNECTED) 28] rmr /zk
[zk: 10.0.2.15:2181(CONNECTED) 29] ls /
[zookeeper]

zookeeper quota

The zookeeper quota mechanism supports limits on the number of znodes and on data size (bytes).

//check: there is no quota by default
[zk: 10.0.2.15:2181(CONNECTED) 6] create /test "test quota"
Created /test
[zk: 10.0.2.15:2181(CONNECTED) 7] listquota /test
absolute path is /zookeeper/quota/test/zookeeper_limits
quota for /test does not exist.
//set a quota
[zk: 10.0.2.15:2181(CONNECTED) 9] setquota -n 3 /test
Comment: the parts are option -n val 3 path /test
//view the quota
[zk: 10.0.2.15:2181(CONNECTED) 10] listquota /test
absolute path is /zookeeper/quota/test/zookeeper_limits
Output quota for /test count=3,bytes=-1
Output stat for /test count=1,bytes=10
//-n sets a znode count limit, here the number of znodes allowed under the path /test
//-b sets a limit on the size of the znode data in bytes
//test
[zk: 10.0.2.15:2181(CONNECTED) 14] create /test/0 "0"
Created /test/0
[zk: 10.0.2.15:2181(CONNECTED) 15] create /test/1 "1"
Created /test/1
[zk: 10.0.2.15:2181(CONNECTED) 16] create /test/2 "2"
Created /test/2
[zk: 10.0.2.15:2181(CONNECTED) 17] create /test/3 "3"
Created /test/3
[zk: 10.0.2.15:2181(CONNECTED) 18] ls /test
[0, 1, 2, 3]
//although we have already exceeded the limit of 3 znodes, creation still succeeds. zookeeper's quota mechanism is soft: when a quota is exceeded it only reports it in the log, it does not restrict the client, which can keep operating on znodes
//log output:
//2016-11-03 16:38:08,876 [myid:1] - WARN [CommitProcessor:1:DataTree@301] - Quota exceeded: /test count=4 limit=3
//2016-11-03 16:38:12,998 [myid:1] - WARN [CommitProcessor:1:DataTree@301] - Quota exceeded: /test count=5 limit=3
//inspect the quota nodes
[zk: 10.0.2.15:2181(CONNECTED) 25] get /zookeeper/quota/test/zookeeper_limits
count=3,bytes=-1
cZxid = 0x10000001f
ctime = Thu Nov 03 16:34:55 CST 2016
mZxid = 0x10000001f
mtime = Thu Nov 03 16:34:55 CST 2016
pZxid = 0x10000001f
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 16
numChildren = 0

[zk: 10.0.2.15:2181(CONNECTED) 26] get /zookeeper/quota/test/zookeeper_stats
count=5,bytes=14
cZxid = 0x100000020
ctime = Thu Nov 03 16:34:55 CST 2016
mZxid = 0x100000020
mtime = Thu Nov 03 16:34:55 CST 2016
pZxid = 0x100000020
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 16
numChildren = 0

zookeeper access control (ACL)

ZooKeeper's permission management, i.e. its ACL mechanism, is implemented cooperatively by the server and the client:
* Server side
A ZooKeeper node (znode) stores two things: data and state (the state includes the ACL information)
Creating a znode produces an ACL list; each ACL in the list consists of:
1. the authentication scheme (scheme)
2. the identity (Id) (when scheme="digest" the Id is the username and password digest, e.g. "root:J0sTy9BCUKubtK1y8pkbL7qoxSw=")
3. the permissions (perms)

Background:
ZooKeeper provides the following authentication schemes:
1. digest
The client authenticates with a username and password, e.g. "user:password"; the stored password is the base64-encoded SHA-1 digest
2. auth
Uses no id; it stands for any already-authenticated user
3. ip
The client is authenticated by IP address, e.g. 172.2.0.0/24
4. world
The fixed user anyone; grants access to every client
5. super
Under this scheme the corresponding id has super privileges and can do anything (cdrwa)
Note: the exists and getAcl operations are not subject to ACL checks, so any client can query a node's stat and its ACL

A node's permissions (perms) are mainly the following:
1. Create: allows Create on child nodes, c
2. Read: allows GetChildren and GetData on this node, r
3. Write: allows SetData on this node, w
4. Delete: allows Delete on child nodes, d
5. Admin: allows setAcl on this node, a
//view the ACL
[zk: 10.0.2.15:2181(CONNECTED) 3] create /test "test auth"
Created /test
[zk: 10.0.2.15:2181(CONNECTED) 6] getAcl /test
'world,'anyone
: cdrwa
2. Set an ip-scheme ACL
[zk: 10.0.2.15:2181(CONNECTED) 7] setAcl /test ip:10.0.2.15:crwda
cZxid = 0x100000013
ctime = Thu Nov 03 15:22:20 CST 2016
mZxid = 0x100000013
mtime = Thu Nov 03 15:22:20 CST 2016
pZxid = 0x100000013
cversion = 0
dataVersion = 0
aclVersion = 1
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0

[zk: 10.0.2.15:2181(CONNECTED) 8] getAcl /test
'ip,'10.0.2.15
: cdrwa
3. Set a digest-scheme ACL
//generate the digest
penn@ubuntu:~$ cd /mnt/app/zookeeper.1/
penn@ubuntu:/mnt/app/zookeeper.1$ java -cp ./zookeeper-3.4.9.jar:./lib/log4j-1.2.16.jar:./lib/slf4j-log4j12-1.6.1.jar:./lib/slf4j-api-1.6.1.jar org.apache.zookeeper.server.auth.DigestAuthenticationProvider test:test
test:test->test:V28q/NynI4JI3Rk54h0r8O5kMug=

//set the ACL
[zk: 10.0.2.15:2181(CONNECTED) 2] setAcl /test digest:test:V28q/NynI4JI3Rk54h0r8O5kMug=:crwda
cZxid = 0x100000013
ctime = Thu Nov 03 15:22:20 CST 2016
mZxid = 0x100000013
mtime = Thu Nov 03 15:22:20 CST 2016
pZxid = 0x100000013
cversion = 0
dataVersion = 0
aclVersion = 2
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0

//view the ACL
[zk: 10.0.2.15:2181(CONNECTED) 3] getAcl /test
'digest,'test:V28q/NynI4JI3Rk54h0r8O5kMug=
: cdrwa

//verify
[zk: 10.0.2.15:2181(CONNECTED) 4] ls /test
Authentication is not valid : /test
[zk: 10.0.2.15:2181(CONNECTED) 5] addauth digest test:test
[zk: 10.0.2.15:2181(CONNECTED) 6] ls /test
[]
4. Configure a super administrator
//generate the digest for the super user
penn@ubuntu:/mnt/app/zookeeper.1$ java -cp ./zookeeper-3.4.9.jar:./lib/log4j-1.2.16.jar:./lib/slf4j-log4j12-1.6.1.jar:./lib/slf4j-api-1.6.1.jar org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:super
super:super->super:gG7s8t3oDEtIqF6DM9LlI/R+9Ss=

//add the digest to zkServer.sh
penn@ubuntu:/mnt/app/zookeeper.1$ vim /mnt/app/zookeeper.1/bin/zkServer.sh
SUPER_ACL="-Dzookeeper.DigestAuthenticationProvider.superDigest=super:gG7s8t3oDEtIqF6DM9LlI/R+9Ss="

//in the start case of the script, add ${SUPER_ACL} to the launch command:
nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" "${SUPER_ACL}" -cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &

//restart zookeeper
penn@ubuntu:/mnt/app/zookeeper.1$ /mnt/app/zookeeper.1/bin/zkServer.sh restart
5. How an ACL check works
When a client operates on a znode, the ACL is checked as follows:
a. iterate over all ACLs of the znode:
i. for each ACL, first match the operation type against the permissions (perms)
ii. only if the permission matches is the session's auth information matched against the ACL's username and password
b. if both matches succeed the operation is allowed; otherwise a permission error is returned (rc=-102)

Note: if no ACL in a znode's ACL list grants the setAcl permission, then not even superDigest can change its ACL; and if that znode also denies delete, none of its children can ever be removed. The only way out is to manually delete the snapshot and log files, roll ZK back to an earlier state and restart, which of course also affects applications using other znodes.
6. ACL limitations
ACLs provide only access control, not a complete permission-management system, so using them for multi-cluster isolation has many limitations: there is no recursion, every znode must have its ACL set individually after creation and cannot inherit its parent's ACL; except for the ip scheme, digest and auth are not transparent to users, which adds considerable cost; and many open-source frameworks that depend on zookeeper, such as hbase and storm, have no ACL support.

zookeeper install

java install

[root@localhost app]# tar xzf jdk-8u73-linux-x64.tar.gz
[root@localhost app]# mv jdk1.8.0_73 /mnt/app/java
[root@localhost app]# chown -R root.root /mnt/app/java

[root@localhost app]# echo 'export JAVA_HOME=/mnt/app/java' | tee /etc/profile.d/java.sh
[root@localhost app]# echo 'export JRE_HOME=${JAVA_HOME}/jre' | tee -a /etc/profile.d/java.sh
[root@localhost app]# echo 'export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib' | tee -a /etc/profile.d/java.sh
[root@localhost app]# echo 'export PATH=${JAVA_HOME}/bin:$PATH' | tee -a /etc/profile.d/java.sh
[root@localhost app]# source /etc/profile

zookeeper standard install

[root@10 app]# tar xzf zookeeper-3.4.6.tar.gz
[root@10 app]# mv zookeeper-3.4.6 /mnt/app/zookeeper

[root@10 app]# chown -R wisdom.wisdom /mnt/app/zookeeper

[root@10 app]# mkdir -p /mnt/{data,log}/zookeeper
[root@10 app]# chown -R wisdom.wisdom /mnt/{data,log}/zookeeper
[root@10 app]# echo 'export ZK_HOME=/mnt/app/zookeeper' | tee /etc/profile.d/zookeeper.sh
[root@10 app]# echo 'export ZK_BIN=${ZK_HOME}/bin' | tee -a /etc/profile.d/zookeeper.sh
[root@10 app]# echo 'export PATH=${ZK_BIN}:$PATH' | tee -a /etc/profile.d/zookeeper.sh
[root@10 app]# source /etc/profile
[root@10 app]# cat > /mnt/app/zookeeper/conf/zoo.cfg <<EOF
> tickTime=2000
> initLimit=10
> syncLimit=5
> clientPort=2181
> clientPortAddress=10.0.2.113
> dataDir=/mnt/data/zookeeper
> dataLogDir=/mnt/log/zookeeper
> autopurge.snapRetainCount=3
> autopurge.purgeInterval=1
> EOF
[root@10 app]# cat > /mnt/app/zookeeper/conf/zookeeper-env.sh <<EOF
> ZOO_LOG_DIR=/mnt/log/zookeeper
> EOF
[root@10 ~]# cat > /mnt/app/zookeeper/conf/java.env <<EOF
> export JAVA_HOME=/mnt/app/java
> export JVMFLAGS="-Xms1024m -Xmx1024m $JVMFLAGS"
> EOF
[root@10 app]# su - wisdom
[wisdom@10 ~]$ /mnt/app/zookeeper/bin/zkServer.sh start
[wisdom@10 ~]$ /mnt/app/zookeeper/bin/zkServer.sh status

zookeeper cluster install

[root@localhost app]# tar xzf zookeeper-3.4.6.tar.gz
[root@localhost app]# mv zookeeper-3.4.6 /mnt/app/zookeeper

[root@localhost app]# chown -R root.root /mnt/app/zookeeper

[root@localhost app]# mkdir -p /mnt/{data,log}/zookeeper
[root@localhost app]# chown -R wisdom.wisdom /mnt/{data,log}/zookeeper
[root@localhost app]# echo 'export ZK_HOME=/mnt/app/zookeeper' | tee /etc/profile.d/zookeeper.sh
[root@localhost app]# echo 'export ZK_BIN=${ZK_HOME}/bin' | tee -a /etc/profile.d/zookeeper.sh
[root@localhost app]# echo 'export PATH=${ZK_BIN}:$PATH' | tee -a /etc/profile.d/zookeeper.sh
[root@localhost app]# source /etc/profile
ZK-cluster-1:
[root@localhost app]# cat > /mnt/app/zookeeper/conf/zookeeper-env.sh <<EOF
> ZOO_LOG_DIR=/mnt/log/zookeeper
> EOF

[root@localhost app]# cat > /mnt/data/zookeeper/myid <<EOF
> 228
> EOF

[root@localhost app]# cat > /mnt/app/zookeeper/conf/zoo.cfg <<EOF
> tickTime=2000
> initLimit=5
> syncLimit=2
> clientPort=2181
> clientPortAddress=192.168.18.228
> maxClientCnxns=2000
> autopurge.snapRetainCount=5
> autopurge.purgeInterval=3
> dataDir=/mnt/data/zookeeper
> dataLogDir=/mnt/log/zookeeper
> server.228=192.168.18.228:3181:4181
> server.229=192.168.18.229:3181:4181
> server.230=192.168.18.230:3181:4181
> EOF

[root@localhost app]# cat > /mnt/app/zookeeper/conf/java.env <<EOF
> export JAVA_HOME=/mnt/app/java
> export JVMFLAGS="-Xms1024m -Xmx1024m $JVMFLAGS"
> EOF

[root@localhost app]# chown -R wisdom.wisdom /mnt/app/zookeeper/conf
[root@localhost app]# su - wisdom
[wisdom@localhost ~]$ /mnt/app/zookeeper/bin/zkServer.sh start
ZK-cluster-2:
[root@localhost app]# cat > /mnt/app/zookeeper/conf/zookeeper-env.sh <<EOF
> ZOO_LOG_DIR=/mnt/log/zookeeper
> EOF

[root@localhost app]# cat > /mnt/data/zookeeper/myid <<EOF
> 229
> EOF

[root@localhost app]# cat > /mnt/app/zookeeper/conf/zoo.cfg <<EOF
> tickTime=2000
> initLimit=5
> syncLimit=2
> clientPort=2181
> clientPortAddress=192.168.18.229
> maxClientCnxns=2000
> autopurge.snapRetainCount=5
> autopurge.purgeInterval=3
> dataDir=/mnt/data/zookeeper
> dataLogDir=/mnt/log/zookeeper
> server.228=192.168.18.228:3181:4181
> server.229=192.168.18.229:3181:4181
> server.230=192.168.18.230:3181:4181
> EOF

[root@localhost app]# cat > /mnt/app/zookeeper/conf/java.env <<EOF
> export JAVA_HOME=/mnt/app/java
> export JVMFLAGS="-Xms1024m -Xmx1024m $JVMFLAGS"
> EOF

[root@localhost app]# chown -R wisdom.wisdom /mnt/app/zookeeper/conf
[root@localhost app]# su - wisdom
[wisdom@localhost ~]$ /mnt/app/zookeeper/bin/zkServer.sh start
ZK-cluster-3:
[root@localhost app]# cat > /mnt/app/zookeeper/conf/zookeeper-env.sh <<EOF
> ZOO_LOG_DIR=/mnt/log/zookeeper
> EOF

[root@localhost app]# cat > /mnt/data/zookeeper/myid <<EOF
> 230
> EOF

[root@localhost app]# cat > /mnt/app/zookeeper/conf/zoo.cfg <<EOF
> tickTime=2000
> initLimit=5
> syncLimit=2
> clientPort=2181
> clientPortAddress=192.168.18.230
> maxClientCnxns=2000
> autopurge.snapRetainCount=5
> autopurge.purgeInterval=3
> dataDir=/mnt/data/zookeeper
> dataLogDir=/mnt/log/zookeeper
> server.228=192.168.18.228:3181:4181
> server.229=192.168.18.229:3181:4181
> server.230=192.168.18.230:3181:4181
> EOF

[root@localhost app]# cat > /mnt/app/zookeeper/conf/java.env <<EOF
> export JAVA_HOME=/mnt/app/java
> export JVMFLAGS="-Xms1024m -Xmx1024m $JVMFLAGS"
> EOF

[root@localhost app]# chown -R wisdom.wisdom /mnt/app/zookeeper/conf
[root@localhost app]# su - wisdom
[wisdom@localhost ~]$ /mnt/app/zookeeper/bin/zkServer.sh start

zookeeper cluster info

[root@localhost app]# cat /mnt/app/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
clientPortAddress=192.168.18.230
maxClientCnxns=2000
autopurge.snapRetainCount=5
autopurge.purgeInterval=3
dataDir=/mnt/data/zookeeper
dataLogDir=/mnt/log/zookeeper
server.228=192.168.18.228:3181:4181
server.229=192.168.18.229:3181:4181
server.230=192.168.18.230:3181:4181

Cluster parameter notes:
server.A=B:C:D
* A is a number identifying which server this is
* B is the server's IP address
* C is the port this server uses to exchange data with the cluster Leader
* D is the port used for leader election; for example, when the cluster Leader dies a new election is required, and this is the port the servers use to talk to each other during that election
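
Once all three nodes are started, each node's role can be checked with zkServer.sh status or with the srvr four-letter command (requires nc; addresses match the zoo.cfg above):
[wisdom@localhost ~]$ /mnt/app/zookeeper/bin/zkServer.sh status
[wisdom@localhost ~]$ echo srvr | nc 192.168.18.228 2181 | grep Mode
[wisdom@localhost ~]$ echo srvr | nc 192.168.18.229 2181 | grep Mode
[wisdom@localhost ~]$ echo srvr | nc 192.168.18.230 2181 | grep Mode
One node should report Mode: leader and the other two Mode: follower.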

rsync install

rsync introduction

rsync is a data mirroring and backup tool for Unix-like systems; the name itself says it: remote sync.
Its features include:
1. it can mirror an entire directory tree or filesystem
2. it easily preserves the original file permissions, timestamps, soft and hard links, and so on
3. it can be installed without special privileges
4. its optimized transfer algorithm makes file transfers very efficient
5. it can transfer files via rcp, ssh and the like, or over a direct socket connection
6. it supports anonymous transfers, which is convenient for mirroring web sites
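
Before the daemon setup below, the two simplest ways to invoke rsync (paths and host are placeholders):
rsync -avz /opt/src/ /opt/dst/                      # mirror a local directory tree
rsync -avz -e ssh /opt/src/ user@remote:/opt/dst/   # mirror to a remote host over ssh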

rsync installation

  1. Install

    [root@localhost ~]# yum -y install rsync* inotify-tools*
  2. Create the rsyncd user

    [root@localhost ~]# useradd -M -s /sbin/nologin rsyncd
  3. Create the configuration directory

    [root@localhost ~]# mkdir -p /etc/rsyncd
  4. Create the configuration files

    [root@localhost ~]# touch /etc/rsyncd/rsyncd.conf
    [root@localhost ~]# touch /etc/rsyncd/rsyncd.secrets
    [root@localhost ~]# touch /etc/rsyncd/rsyncd.motd
    Notes:
    rsyncd.conf is the main configuration file
    rsyncd.secrets holds the usernames and passwords, one user per line, username and password separated by a colon
    rsyncd.motd is optional; it is the welcome message shown when a client connects to rsyncd

    [root@localhost ~]# chmod 600 /etc/rsyncd/rsyncd.secrets
    Note: rsyncd.secrets must have mode 600
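    For example, an entry for the testsync user referenced later in the module configuration (the password here is a placeholder):
    [root@localhost ~]# echo 'testsync:password123' >> /etc/rsyncd/rsyncd.secrets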
  5. Edit the configuration file

    [root@localhost ~]# vim /etc/rsyncd/rsyncd.conf
    # system user the rsyncd daemon runs as; set globally here, can also be set per module
    uid = root
    gid = root
    # IP address the daemon listens on
    address = 0.0.0.0
    # port the daemon listens on
    port = 873
    # daemon pid file
    pid file = /var/run/rsyncd.pid
    # daemon lock file
    lock file = /var/run/rsyncd.lock
    # enable chroot for better security: on connect, the client is chrooted into the module's path
    # chroot requires root privileges and cannot back up symlink targets outside the path
    use chroot = yes
    # read only
    read only = no
    # write only
    write only = no
    # IPs allowed to access the rsyncd service; ranges or single IPs separated by spaces
    hosts allow = 0.0.0.0/0
    # IPs denied access; * means all (except those listed in hosts allow; mind the evaluation order)
    hosts deny = *
    # maximum number of client connections
    max connections = 20
    # path to the welcome message file, optional
    #motd file = /etc/rsyncd/rsyncd.motd
    # whether to log file transfers
    transfer logging = yes
    # log line format
    log format = %t %a %m %f %b
    # send rsync log messages to this file instead of syslog; if unset, messages go to syslog
    log file = /var/log/rsyncd.log
    # syslog facility used when rsync sends messages to syslog
    syslog facility = local3
    # connection timeout
    timeout = 300

    # modules; a module name must be enclosed in []
    # e.g. to access data1 the address would be data1user@192.168.1.2::data1
    [test]
    # module root directory, required
    path = /home/vsftpduser/ftpdata/test
    # read only
    read only = yes
    # whether listing the module contents is allowed
    list=no
    # ignore errors (disabled here)
    # ignore errors
    # usernames allowed to authenticate to this module, separated by spaces or commas
    auth users = testsync
    # password file for module authentication; may also be set globally
    secrets file = /etc/rsyncd/rsyncd.secrets
    # comment
    comment = test ftp data
    # directories to exclude, multiple entries separated by spaces
    exclude = test/
    Supplement:
    # main format specifiers and their meanings:
    # %h remote host name
    # %a remote IP address
    # %l file length in characters
    # %p process id of this rsync session
    # %o operation type: "send" or "recv"
    # %f file name
    # %P module path
    # %m module name
    # %t current time
    # %u authenticated username (null when anonymous)
    # %b number of bytes actually transferred
    # %c when sending a file, records the file's checksum
    # the default log format is "%o %h [%a] %m (%u) %f %l"; normally "%t [%p] " is prepended to each line
    log format=%o %h [%a] %m (%u) %f %l
  6. Start the rsyncd service

    [root@localhost ~]# /usr/bin/rsync --daemon --config=/etc/rsyncd/rsyncd.conf
  7. Test from a client

    Method 1: sync over SSH
    1. Copy the local public key to the server user's home directory
    [ec2-user@god ~]$ ssh-keygen
    [ec2-user@god ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@1.1.1.1 -p22

    2. Run the sync from the client
    //sync the directory itself
    [ec2-user@god ~]$ rsync -avzHP --delete --progress --bwlimit=1000000 -e "ssh -p 22" root@1.1.1.1:/home/vsftpduser/ftpdata/test ./test
    //sync the files under the directory (not the directory itself)
    [ec2-user@god ~]$ rsync -avzHP --delete --progress --bwlimit=1000000 -e "ssh -p 22" root@1.1.1.1:/home/vsftpduser/ftpdata/test/ ./test
    Notes:
    a preserve permissions, owner, group and timestamps, and copy recursively, including symlinks and block devices
    v verbose transfer information
    P show transfer progress
    H preserve hard links
    z compress data during transfer
    --delete delete files in DST that do not exist in SRC
    --progress show backup progress
    --bwlimit limit I/O bandwidth (KB per second)
    Method 2: sync with an rsync account and password
    On the client:
    [ec2-user@god ~]$ vim rsync.pass //the file contains only the password
    qxn9fav
    [ec2-user@god ~]$ chmod 600 rsync.pass //the password file must have mode 600

    //sync the files under the module directory
    [ec2-user@god ~]$ rsync -avzHP --delete --progress --bwlimit=1000000 --port 52003 --password-file=rsync.pass sspsync@1.1.1.1::test ./test

inotify-tools installation

  1. After installation the binaries are located at:

    /usr/local/bin/inotifywait
    /usr/local/bin/inotifywatch
  2. Adjust the kernel parameters:

    vim /etc/sysctl.conf
    fs.inotify.max_queued_events=99999999
    fs.inotify.max_user_watches=99999999
    fs.inotify.max_user_instances=65536
    sysctl -p
    Notes:
    max_queued_events: maximum length of the inotify event queue; if it is too small you will see "** Event Queue Overflow **" errors and monitoring becomes unreliable
    max_user_watches: must be larger than the number of directories in the tree being synced, which can be counted with "find /home/www.osyunwei.com -type d | wc -l"
    max_user_instances: maximum number of inotify instances per user
  3. Write a script for real-time push

    rsync_inotify.sh
    #!/bin/bash
    src=/opt/test/
    /usr/local/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,delete,create,attrib $src | while read file;do
    /usr/bin/rsync -arzuq $src 192.168.0.2::www/
    echo " ${file} was rsynced" >>/opt/soft/log/rsync.log 2>&1
    done

    chmod +x rsync_inotify.sh

    ./rsync_inotify.sh &
  4. Option notes

    -m keep monitoring continuously
    -r watch directories recursively
    -q quiet mode, print only the events
    -e create,move,delete,modify,attrib watch for create, move, delete, write and attribute-change events
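    A quick one-shot test (the path is a placeholder; without -m the command exits after reporting the first matching event):
    [root@localhost ~]# /usr/local/bin/inotifywait -e create,close_write /opt/test/
    # in another terminal run: touch /opt/test/hello  -> inotifywait prints the event and exits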

python modules path

  • View the current python module search path

    >>> import sys
    >>> print(sys.path)
    ['', '/root/.pyenv/versions/2.7.13/lib/python27.zip', '/root/.pyenv/versions/2.7.13/lib/python2.7', '/root/.pyenv/versions/2.7.13/lib/python2.7/plat-linux2', '/root/.pyenv/versions/2.7.13/lib/python2.7/lib-tk', '/root/.pyenv/versions/2.7.13/lib/python2.7/lib-old', '/root/.pyenv/versions/2.7.13/lib/python2.7/lib-dynload', '/root/.pyenv/versions/2.7.13/lib/python2.7/site-packages']
  • Add a module path

    [root@localhost ~]# export PYTHONPATH=/path/to/pylib:$PYTHONPATH
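    The search path can also be extended from inside Python at runtime (the path is a placeholder):
    >>> import sys
    >>> sys.path.append('/path/to/pylib')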

python pyenv install

  • pyenv: install dependency packages

    [root@dev ~]# yum -y install readline readline-devel readline-static openssl openssl-devel openssl-static sqlite-devel bzip2-devel bzip2-libs
  • pyenv: install

    [root@localhost ~]# git clone https://github.com/yyuu/pyenv.git ~/.pyenv
    or:
    [root@localhost ~]# git clone https://github.com/yyuu/pyenv.git /mnt/app/pyenv
  • pyenv: environment variables

    [root@localhost ~]# echo 'export PYENV_ROOT="$HOME/.pyenv"' | tee /etc/profile.d/pyenv.sh
    or:
    [root@localhost ~]# echo 'export PYENV_ROOT="/mnt/app/pyenv"' | tee /etc/profile.d/pyenv.sh

    [root@localhost ~]# echo 'export PATH="$PYENV_ROOT/bin:$PATH"' | tee -a /etc/profile.d/pyenv.sh
    [root@localhost ~]# echo 'eval "$(pyenv init -)"' | tee -a /etc/profile.d/pyenv.sh
    [root@localhost ~]# source /etc/profile
    [root@localhost ~]# exec $SHELL
  • pyenv-virtualenv: install

    [root@localhost ~]# git clone https://github.com/yyuu/pyenv-virtualenv.git $(pyenv root)/plugins/pyenv-virtualenv
  • pyenv-virtualenv: environment variables

    [root@localhost ~]# echo 'eval "$(pyenv virtualenv-init -)"' | tee -a /etc/profile.d/pyenv.sh
    [root@localhost ~]# source /etc/profile
    [root@localhost ~]# exec $SHELL

  • pyenv: update

    [root@localhost ~]# cd $(pyenv root)
    [root@localhost .pyenv]# git pull
  • pyenv: remove

    [root@localhost ~]# rm -rf $(pyenv root)
    [root@localhost ~]# rm -rf /etc/profile.d/pyenv.sh
    [root@localhost ~]# source /etc/profile
    [root@localhost ~]# exec $SHELL
  • pyenv: show the current version (system version)

    [root@localhost ~]# pyenv versions
    * system (set by /root/.pyenv/version)
  • pyenv: list installable versions

    [root@localhost ~]# pyenv install --list
  • pyenv: install a python version

    [root@localhost ~]# pyenv install 2.7.13
    [root@localhost ~]# pyenv rehash

    or:
    [root@localhost ~]# mkdir -p ${PYENV_ROOT}/cache
    [root@localhost ~]# v=2.7.13;wget -P ${PYENV_ROOT}/cache/ http://mirrors.sohu.com/python/$v/Python-$v.tar.xz;pyenv install $v
  • pyenv: set the global version

    [root@localhost ~]# pyenv global 2.7.13
    [root@localhost ~]# pyenv versions
    system
    * 2.7.13 (set by /root/.pyenv/version)
    [root@localhost ~]# python -V
    Python 2.7.13
  • pyenv: set the local version (takes precedence over global)

    [root@localhost ~]# pyenv local 2.7.13
  • pyenv: set the shell version (takes precedence over local and global)

    [root@localhost ~]# pyenv shell 2.7.13
    [root@localhost ~]# pyenv shell --unset 2.7.13
  • pyenv: uninstall a python version

    [root@localhost ~]# pyenv uninstall 2.7.13
    [root@localhost ~]# pyenv rehash

  • Create a virtualenv

    [root@localhost ~]# pyenv virtualenv 2.7.13 myenv
  • List the current virtualenvs

    [root@localhost ~]# pyenv virtualenvs
    2.7.13/envs/myenv (created from /root/.pyenv/versions/2.7.13)
    myenv (created from /root/.pyenv/versions/2.7.13)
  • Delete a virtualenv

    [root@localhost ~]# pyenv uninstall myenv
  • Activate a virtualenv

    [root@localhost ~]# pyenv activate myenv
    pyenv-virtualenv: prompt changing will be removed from future release. configure `export PYENV_VIRTUALENV_DISABLE_PROMPT=1' to simulate the behavior.
    (myenv) [root@localhost ~]#

    Note:
    export PYENV_VIRTUALENV_DISABLE_PROMPT=1 disables the (myenv) prompt prefix
    export PYENV_VIRTUALENV_DISABLE_PROMPT=$1 re-enables the (myenv) prompt prefix
  • Deactivate the virtualenv

    (myenv) [root@localhost ~]# source deactivate
    pyenv-virtualenv: deactivate 2.7.13/envs/myenv