Redis Cluster was designed to be decentralized, with no middleware: every node
in the cluster is a peer, and each node holds both its own share of the data and
the state of the whole cluster. Every node keeps an active connection to every
other node, so connecting to any single node is enough to reach data held
anywhere in the cluster.
Redis Cluster does not use traditional consistent hashing to distribute data;
instead it uses hash slots. The cluster defines 16384 slots, each assigned to a
node. When we SET a key, its CRC16 is taken modulo 16384 to find the slot it
belongs to — slot = CRC16(key) % 16384 — and the key goes to whichever node owns
that slot. That is why, during testing, SET and GET requests were redirected
straight to the node on port 7000.
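The slot computation is easy to reproduce. Below is my own small Python sketch
(Redis itself implements this in C) of the CRC16 variant Redis Cluster uses
(CCITT/XMODEM: polynomial 0x1021, initial value 0), including the hash-tag rule,
where a key like {user1000}.following hashes only the part between the braces:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: bytes) -> int:
    """slot = CRC16(key) % 16384, honoring {hash tags}: if the key contains
    a non-empty {...} section, only that section is hashed."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # non-empty tag between the braces
            key = key[start + 1:end]
    return crc16(key) % 16384

print(key_hash_slot(b"foo"))  # same value CLUSTER KEYSLOT foo reports
```

Keys sharing a hash tag always land in the same slot, which is what makes
multi-key operations on related keys possible inside a cluster.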
Redis Cluster stores each key on a single master node and replicates the data
from that master to its slaves. Reads are likewise routed by hash slot to the
master that owns the key. Only when a master goes down is one of its slaves
promoted to take over as the new master.
Note: a cluster needs at least 3 master nodes, otherwise cluster creation
fails; and if the surviving masters drop to half or fewer of the total masters,
the whole cluster stops serving requests.
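As a toy illustration of that majority rule (my own sketch, not Redis code):
the cluster can keep detecting failures and electing replacements only while a
strict majority of the masters is still reachable:

```python
def cluster_has_quorum(alive_masters: int, total_masters: int) -> bool:
    """A cluster of N masters stays available only while a strict
    majority (more than N // 2) of them is still reachable."""
    return alive_masters > total_masters // 2

# With the minimum 3 masters, losing one is survivable, losing two is not.
print(cluster_has_quorum(2, 3), cluster_has_quorum(1, 3))  # → True False
```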
See the official tutorial: https://redis.io/topics/cluster-tutorial
Download redis-4.0.9.tar.gz
Run make
This produces the executables.
Create a redis user per node, e.g. redis6379
In that user's home directory, write the Redis config by hand, saved as 6379.conf:
bind xx.xx.xx.xx
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /home/redis6379/redis_6379.pid
loglevel notice
logfile "/data/logs/6379/redis_6379.log"
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data/redisdata6379
masterauth asdfasdfasdf
requirepass asdfasdfasdf
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble no
lua-time-limit 5000
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
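Writing six nearly identical config files by hand is error-prone, so a
throwaway generator helps. This is my own sketch — ports 6379-6384 and the
templated settings merely mirror the port-dependent lines of the config above;
they are not part of the original setup:

```python
# Sketch: render the port-dependent settings for each of the six nodes.
CONF_TEMPLATE = """\
port {port}
daemonize yes
pidfile /home/redis{port}/redis_{port}.pid
logfile "/data/logs/{port}/redis_{port}.log"
dir /data/redisdata{port}
appendonly yes
cluster-enabled yes
cluster-config-file nodes-{port}.conf
cluster-node-timeout 5000
"""

def render_confs(ports):
    """Map each node's config filename (e.g. 6379.conf, the name the
    init script expects) to its rendered contents."""
    return {"%d.conf" % port: CONF_TEMPLATE.format(port=port) for port in ports}

confs = render_confs(range(6379, 6385))
```

The remaining (identical) settings would be appended to each file unchanged.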
Likewise, copy a start/stop script for Redis, redis_6379, into the redis user's directory:
#!/bin/sh
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.
REDISPORT=6379
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli
IPIP=`/sbin/ifconfig eth0 | sed -nr 's/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'`
PIDFILE=$HOME/redis_${REDISPORT}.pid
CONF="$HOME/${REDISPORT}.conf"
case "$1" in
start)
if [ -f $PIDFILE ]
then
echo "$PIDFILE exists, process is already running or crashed"
else
echo "Starting Redis server..."
$EXEC $CONF
fi
;;
stop)
if [ ! -f $PIDFILE ]
then
echo "$PIDFILE does not exist, process is not running"
else
PID=$(cat $PIDFILE)
echo "Stopping ..."
# The next line must supply the password to shut the server down
$CLIEXEC -h $IPIP -p $REDISPORT -a asdfasdfasdf shutdown
while [ -x /proc/${PID} ]
do
echo "Waiting for Redis to shutdown ..."
sleep 1
done
echo "Redis stopped"
fi
;;
*)
echo "Please use start or stop as first argument"
;;
esac
Start Redis: ./redis_6379 start
Start the other five nodes the same way, then create the cluster:
redis-trib.rb create --replicas 1 192.168.137.131:6379 192.168.137.132:6379 192.168.137.133:6379 192.168.137.134:6379 192.168.137.135:6379 192.168.137.136:6379
Stress test:
./redis-benchmark -h 192.168.137.131 -p 6379 -a asdfasdfasdf
Alias the command as redis by adding to .bashrc:
alias redis='redis-cli -h 192.168.137.131 -p 6379 -c -a asdfasdfasdf'
Start:
redis_6379 start
Stop:
redis_6379 stop
Note: the shutdown password asdfasdfasdf is hardcoded in the init script.
========== Creating the cluster ==========
redis-trib.rb create
If you hit the error: [ERR] Sorry, can't connect to node 192.168.137.131:6379
edit, as root:
/var/lib/gems/2.3.0/gems/redis-4.0.1/lib/redis/client.rb
setting the password there to the port-6379 password asdfasdfasdf,
then retry.
========== Checking the cluster ==========
Check the cluster:
redis-trib.rb check 192.168.137.131:6379
Compact the AOF (an automatic rewrite is also configured above at 64mb):
192.168.137.131:6379> bgrewriteaof
Background append only file rewriting started