Partitioning: how to split your data among multiple Redis instances.

Partitioning is the process of splitting your data into multiple Redis instances, so that every instance will only contain a subset of your keys. The first part of this document will introduce you to the concept of partitioning, while the second part will show you the alternatives for Redis partitioning.

Why partitioning is useful

Partitioning in Redis serves two main goals:

  • It allows for much larger databases, using the sum of the memory of many computers. Without partitioning you are limited to the amount of memory a single computer can support.
  • It allows scaling the computational power to multiple cores and multiple computers, and the network bandwidth to multiple computers and network adapters.

Partitioning basics

There are different partitioning criteria. Imagine we have four Redis instances R0, R1, R2, R3, and many keys representing users like user:1, user:2, ... and so forth. We can find different ways to select in which instance a given key is stored; in other words, there are different systems to map a given key to a given Redis server.

One of the simplest ways to perform partitioning is range partitioning, accomplished by mapping ranges of objects into specific Redis instances. For example, I could say users from ID 0 to ID 10000 will go into instance R0, while users from ID 10001 to ID 20000 will go into instance R1, and so forth.

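As a rough illustration, using the hypothetical instance names R0..R3 from above, the range table and its lookup could look like this (a minimal Python sketch, not how any particular client implements it):

    # Hypothetical table mapping user ID ranges to instance names.
    RANGES = [
        (0, 10000, "R0"),
        (10001, 20000, "R1"),
        (20001, 30000, "R2"),
        (30001, 40000, "R3"),
    ]

    def instance_for_user(user_id):
        """Return the name of the instance that holds this user."""
        for low, high, instance in RANGES:
            if low <= user_id <= high:
                return instance
        raise KeyError("no instance configured for user id %d" % user_id)

    print(instance_for_user(42))     # -> R0
    print(instance_for_user(10500))  # -> R1
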
This system works and is actually used in practice, however it has the disadvantage of requiring a table that maps ranges to instances. The table needs to be managed, and a table is needed for every kind of object, so range partitioning is usually not a good approach for Redis.

An alternative to range partitioning is hash partitioning. This scheme works with any key, without requiring a key in the form object_name:<id>, and is as simple as this:

  • Take the key name and use a hash function to turn it into a number. For example, I could use the crc32 hash function, so if the key is foobar, crc32(foobar) will output something like 93024922.
  • Use a modulo operation with this number to turn it into a number between 0 and 3, so that the number can be mapped to one of my four Redis instances. Since 93024922 modulo 4 equals 2, I know my key foobar should be stored into the R2 instance (see the sketch after this list). Note: the modulo operation returns the remainder of a division, and is implemented with the % operator in many programming languages.

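A minimal Python sketch of the two steps above, using the crc32 function from the standard library (the exact hash value, and therefore the selected instance, depends on the crc32 variant, so it may not literally match the 93024922 example):

    import zlib

    INSTANCES = ["R0", "R1", "R2", "R3"]

    def instance_for(key):
        # 1) Hash the key name into a number.
        h = zlib.crc32(key.encode())
        # 2) Take the modulo with the number of instances to get an
        #    index between 0 and 3, and use it to pick an instance.
        return INSTANCES[h % len(INSTANCES)]

    print(instance_for("foobar"))
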
There are many other ways to perform partitioning, but with these two examples you should get the idea. One advanced form of hash partitioning is called consistent hashing and is implemented by a few Redis clients and proxies.

Different implementations of partitioning

Partitioning can be the responsibility of different parts of a software stack.

  • Client side partitioning means that the clients directly select the right node where to write or read a given key. Many Redis clients implement client side partitioning (a minimal sketch follows this list).
  • Proxy assisted partitioning means that our clients send requests to a proxy that is able to speak the Redis protocol, instead of sending requests directly to the right Redis instance. The proxy will make sure to forward our request to the right Redis instance according to the configured partitioning schema, and will send the replies back to the client. The Redis and Memcached proxy Twemproxy implements proxy assisted partitioning.
  • Query routing means that you can send your query to a random instance, and the instance will make sure to forward your query to the right node. Redis Cluster implements a hybrid form of query routing, with the help of the client (the request is not directly forwarded from a Redis instance to another, but the client gets redirected to the right node).

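Client side partitioning can be as simple as a thin wrapper that hashes the key, like in the sketch above, and picks one of a fixed set of connections. The following sketch uses the redis-py client and hypothetical local addresses; real clients usually use consistent hashing instead of a plain modulo:

    import zlib
    import redis  # redis-py

    # Hypothetical addresses for the four instances R0..R3.
    NODES = [
        redis.Redis(host="127.0.0.1", port=6379),  # R0
        redis.Redis(host="127.0.0.1", port=6380),  # R1
        redis.Redis(host="127.0.0.1", port=6381),  # R2
        redis.Redis(host="127.0.0.1", port=6382),  # R3
    ]

    def node_for(key):
        """The client itself selects the right instance for a key."""
        return NODES[zlib.crc32(key.encode()) % len(NODES)]

    node_for("user:1").set("user:1", "some user data")
    print(node_for("user:1").get("user:1"))
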
Disadvantages of partitioning

Some features of Redis don't play very well with partitioning:

  • Operations involving multiple keys are usually not supported. For instance you can't perform the intersection between two sets if they are stored in keys that are mapped to different Redis instances (actually there are ways to do this, but not directly; a client side workaround is sketched after this list).
  • Redis transactions involving multiple keys can not be used.
  • The partitioning granularity is the key, so it is not possible to shard a dataset with a single huge key like a very big sorted set.
  • When partitioning is used, data handling is more complex, for instance you have to handle multiple RDB / AOF files, and to make a backup of your data you need to aggregate the persistence files from multiple instances and hosts.
  • Adding and removing capacity can be complex. For instance Redis Cluster supports mostly transparent rebalancing of data with the ability to add and remove nodes at runtime, but other systems like client side partitioning and proxies don't support this feature. However a technique called Presharding helps in this regard.

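As an example of the first point, the intersection can still be computed, but only client side: fetch both sets and intersect them in the application. A sketch with redis-py, assuming hypothetical keys and addresses:

    import redis

    # Hypothetical: the two sets ended up on different instances.
    r0 = redis.Redis(host="127.0.0.1", port=6379)
    r1 = redis.Redis(host="127.0.0.1", port=6380)

    # SINTER cannot run server side across instances, so we pull both
    # sets to the client and intersect them there (this moves all the
    # members over the network, which is why it is "not direct").
    members_a = r0.smembers("tags:redis")
    members_b = r1.smembers("tags:python")
    print(members_a & members_b)
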
Data store or cache?

Partitioning when using Redis as a data store or as a cache is conceptually the same, however there is a huge difference. When Redis is used as a data store, you need to be sure that a given key always maps to the same instance. When Redis is used as a cache, it is not a big problem if a given node is unavailable and we start using a different node, altering the key-instance map as we wish, since doing so improves the availability of the system (that is, the ability of the system to reply to our queries).

Consistent hashing implementations are often able to switch to other nodes if the preferred node for a given key is not available. Similarly if you add a new node, part of the new keys will start to be stored on the new node.

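To make this behavior concrete, here is a minimal consistent hashing ring in Python (a hypothetical sketch, not the implementation of any specific client): each node is placed at many points on a ring, a key is served by the first node found walking clockwise from the key's hash, and removing a node only remaps the keys that were on that node.

    import bisect
    import zlib

    class HashRing:
        def __init__(self, nodes, points_per_node=100):
            self.points_per_node = points_per_node
            self.ring = []  # sorted list of (hash, node) points
            for node in nodes:
                self.add_node(node)

        def _hash(self, value):
            return zlib.crc32(value.encode())

        def add_node(self, node):
            # Placing each node at many points spreads keys evenly and
            # means only ~1/N of the keys move when a node is added.
            for i in range(self.points_per_node):
                bisect.insort(self.ring, (self._hash("%s#%d" % (node, i)), node))

        def remove_node(self, node):
            self.ring = [p for p in self.ring if p[1] != node]

        def get_node(self, key):
            # First node point clockwise from the key's hash.
            i = bisect.bisect(self.ring, (self._hash(key), ""))
            return self.ring[i % len(self.ring)][1]

    ring = HashRing(["R0", "R1", "R2", "R3"])
    print(ring.get_node("user:1000"))
    # If the preferred node disappears, only its keys move to other nodes;
    # keys that were on the remaining nodes keep mapping to the same place.
    ring.remove_node(ring.get_node("user:1000"))
    print(ring.get_node("user:1000"))
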
The main concept here is the following:

  • If Redis is used as a cache, scaling up and down using consistent hashing is easy.
  • If Redis is used as a store, we need a fixed keys-to-nodes map and a fixed number of nodes. Otherwise we need a system that is able to rebalance keys between nodes when nodes are added or removed, and currently only Redis Cluster is able to do this, but Redis Cluster is currently in beta and not yet considered production ready.

Presharding

We learned that a problem with partitioning is that, unless we are using Redis as a cache, adding and removing nodes can be tricky, and it is much simpler to use a fixed keys-instances map.

However, data storage needs may vary over time. Today I can live with 10 Redis nodes (instances), but tomorrow I may need 50 nodes.

Since Redis has an extremely small footprint and is lightweight (a spare instance uses 1 MB of memory), a simple approach to this problem is to start with a lot of instances from the start. Even if you start with just one server, you can decide to live in a distributed world from day one, and run multiple Redis instances on your single server, using partitioning.

And you can select this number of instances to be quite big from the start. For example, 32 or 64 instances could do the trick for most users, and will provide enough room for growth.

In this way, as your data storage needs increase and you need more Redis servers, all you need to do is simply move instances from one server to another. Once you add the first additional server, you will need to move half of the Redis instances from the first server to the second, and so forth.

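Illustrating the idea with a short sketch (hypothetical addresses): the key-to-shard mapping is computed against a fixed list of, say, 64 instances, so moving an instance to another server only changes its address in the list, never which keys belong to it.

    import zlib

    # 64 preshared instances; initially they all run on the same server,
    # on consecutive ports (hypothetical layout).
    SHARDS = [("10.0.0.1", 6379 + i) for i in range(64)]

    def shard_for(key):
        # The shard index only depends on the key and on the fixed
        # number of shards, so it never changes when instances move.
        return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

    # Later, to add capacity, instance number 7 is moved to a second
    # server: only its address changes, the key mapping stays the same.
    SHARDS[7] = ("10.0.0.2", 6386)
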
Using Redis replication you will likely be able to do the move with minimal or no downtime for your users (a rough code sketch of these steps follows the list):

  • Start empty instances in your new server.
  • Move data configuring these new instances as slaves for your source instances.
  • Stop your clients.
  • Update the configuration of the moved instances with the new server IP address.
  • Send the SLAVEOF NO ONE command to the slaves in the new server.
  • Restart your clients with the new updated configuration.
  • Finally shut down the no longer used instances in the old server.

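A rough sketch of these steps with redis-py (hypothetical addresses; error handling and the client stop/restart are omitted):

    import time
    import redis

    OLD = ("10.0.0.1", 6379)   # instance being moved
    NEW = ("10.0.0.2", 6379)   # empty instance started on the new server

    target = redis.Redis(host=NEW[0], port=NEW[1])

    # Configure the new, empty instance as a slave of the source instance.
    target.slaveof(OLD[0], OLD[1])

    # Wait for the initial synchronization to complete.
    while target.info("replication").get("master_link_status") != "up":
        time.sleep(1)

    # ... stop the clients and point them at the new address ...

    # Promote the new instance to a master (sends SLAVEOF NO ONE).
    target.slaveof()
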
Implementations of Redis partitioning

So far we covered Redis partitioning in theory, but what about practice? What system should you use?

Redis Cluster

Redis Cluster is the preferred way to get automatic sharding and high availability. It is currently not production ready, but finally entered beta stage, so we recommend you to start experimenting with it. You can get more information about Redis Cluster in the Cluster tutorial.

Once Redis Cluster becomes available, and once a Redis Cluster compliant client is available for your language, Redis Cluster will be the de facto standard for Redis partitioning.

Redis Cluster is a mix between query routing and client side partitioning.

Twemproxy

Twemproxy is a proxy developed at Twitter for the Memcached ASCII and the Redis protocol. It is single threaded, it is written in C, and is extremely fast. It is open source software released under the terms of the Apache 2.0 license.

Twemproxy supports automatic partitioning among multiple Redis instances, with optional node ejection if a node is not available (this will change the keys-instances map, so you should use this feature only if you are using Redis as a cache).

It is not a single point of failure since you can start multiple proxies and instruct your clients to connect to the first that accepts the connection.

Basically Twemproxy is an intermediate layer between clients and Redis instances, that will reliably handle partitioning for us with minimal additional complexities. Currently it is the suggested way to handle partitioning with Redis.

You can read more about Twemproxy in this antirez blog post.

Clients supporting consistent hashing

An alternative to Twemproxy is to use a client that implements client side partitioning via consistent hashing or other similar algorithms. There are multiple Redis clients with support for consistent hashing, notably Redis-rb and Predis.

Please check the full list of Redis clients to see if there is a mature client with a consistent hashing implementation for your language.
