Load Balancing Strategies
Dubbo currently ships with the following built-in load balancing algorithms, which users can configure directly:
| Algorithm | Strategy | Notes |
|---|---|---|
| RandomLoadBalance | Weighted random | Default algorithm; weights are equal by default |
| RoundRobinLoadBalance | Weighted round robin | Borrows Nginx's smooth weighted round-robin algorithm; weights are equal by default |
| LeastActiveLoadBalance | Least active first + weighted random | The idea: the more capable a node, the more work it gets |
| ShortestResponseLoadBalance | Shortest response first + weighted random | Focuses on response speed |
| ConsistentHashLoadBalance | Consistent hash | The same arguments always reach the same provider; suited to stateful requests |
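Any of the built-in algorithms is selected by its name through the `loadbalance` attribute, as the annotations later in this article do. A sketch only; `DemoService` is simply the example interface used below:

```java
// Consumer side: choose the algorithm per reference.
@DubboReference(loadbalance = "roundrobin")
private DemoService demoService;

// Provider side: choose the algorithm (and a weight) per service.
@DubboService(loadbalance = "roundrobin", weight = 200)
public class DemoServiceImpl implements DemoService { /* ... */ }
```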
Source Code Version
The Dubbo source examined here is version 2.7.7; the Maven dependency is as follows:
```xml
<dependency>
    <groupId>org.apache.dubbo</groupId>
    <artifactId>dubbo-spring-boot-starter</artifactId>
    <version>${dubbo-starter.version}</version>
</dependency>
```
The Abstract Load Balance Parent Class
```java
public abstract class AbstractLoadBalance implements LoadBalance {

    static int calculateWarmupWeight(int uptime, int warmup, int weight) {
        // uptime: how long the invoker has been up; warmup: the invoker's warm-up
        // period; weight: the invoker's configured weight
        // weight = uptime / (warmup / weight)
        int ww = (int) (uptime / ((float) warmup / weight));
        // If the computed weight is less than 1, return 1; otherwise return the
        // smaller of the computed weight and the configured weight
        return ww < 1 ? 1 : (Math.min(ww, weight));
    }

    @Override
    public <T> Invoker<T> select(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        if (CollectionUtils.isEmpty(invokers)) {
            return null;
        }
        if (invokers.size() == 1) {
            return invokers.get(0);
        }
        return doSelect(invokers, url, invocation);
    }

    // Abstract method, implemented by each load balancing subclass
    protected abstract <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation);

    int getWeight(Invoker<?> invoker, Invocation invocation) {
        int weight;
        URL url = invoker.getUrl();
        // Multiple registry scenario, load balance among multiple registries
        if (REGISTRY_SERVICE_REFERENCE_PATH.equals(url.getServiceInterface())) {
            weight = url.getParameter(REGISTRY_KEY + "." + WEIGHT_KEY, DEFAULT_WEIGHT);
        } else {
            // Get the invoker's configured weight; the default is DEFAULT_WEIGHT = 100
            weight = url.getMethodParameter(invocation.getMethodName(), WEIGHT_KEY, DEFAULT_WEIGHT);
            if (weight > 0) {
                // Get the invoker's startup timestamp
                long timestamp = invoker.getUrl().getParameter(TIMESTAMP_KEY, 0L);
                if (timestamp > 0L) {
                    // How long the invoker has been up
                    long uptime = System.currentTimeMillis() - timestamp;
                    if (uptime < 0) {
                        return 1;
                    }
                    // The invoker's warm-up period, 10 minutes by default
                    // (int DEFAULT_WARMUP = 10 * 60 * 1000;)
                    int warmup = invoker.getUrl().getParameter(WARMUP_KEY, DEFAULT_WARMUP);
                    // If the uptime is shorter than the warm-up period, recompute the weight
                    if (uptime > 0 && uptime < warmup) {
                        weight = calculateWarmupWeight((int) uptime, warmup, weight);
                    }
                }
            }
        }
        return Math.max(weight, 0);
    }
}
```
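The warm-up formula is easy to sanity-check in isolation. Below is a standalone sketch of `calculateWarmupWeight` with Dubbo's default warm-up (10 minutes) and default weight (100); the class name is made up for illustration:

```java
// Standalone sketch of Dubbo's warm-up weight calculation shown above.
public class WarmupWeightDemo {

    static final int DEFAULT_WARMUP = 10 * 60 * 1000; // 10 minutes
    static final int DEFAULT_WEIGHT = 100;

    // Same formula as AbstractLoadBalance#calculateWarmupWeight: the weight grows
    // linearly with uptime, floored at 1 and capped at the configured weight.
    static int calculateWarmupWeight(int uptime, int warmup, int weight) {
        int ww = (int) (uptime / ((float) warmup / weight));
        return ww < 1 ? 1 : Math.min(ww, weight);
    }

    public static void main(String[] args) {
        // After 1 minute of a 10-minute warm-up, a weight-100 provider serves at 10.
        System.out.println(calculateWarmupWeight(60_000, DEFAULT_WARMUP, DEFAULT_WEIGHT));  // 10
        // After 5 minutes it is at half weight.
        System.out.println(calculateWarmupWeight(300_000, DEFAULT_WARMUP, DEFAULT_WEIGHT)); // 50
        // Immediately after start, the floor of 1 applies.
        System.out.println(calculateWarmupWeight(100, DEFAULT_WARMUP, DEFAULT_WEIGHT));     // 1
    }
}
```

This is why a freshly started provider receives only a trickle of traffic that ramps up to its full share over the warm-up window.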
Weighted Random (Default)
Source
```java
public class RandomLoadBalance extends AbstractLoadBalance {

    public static final String NAME = "random";

    /**
     * Select one invoker from a list using a random criterion
     *
     * @param invokers List of possible invokers
     * @param url URL
     * @param invocation Invocation
     * @param <T>
     * @return The selected invoker
     */
    @Override
    protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        // Number of invokers
        int length = invokers.size();
        // Assume, for now, that every invoker has the same weight
        boolean sameWeight = true;
        // weights records the weight of each invoker in the list
        int[] weights = new int[length];
        // Get the first invoker's weight
        int firstWeight = getWeight(invokers.get(0), invocation);
        weights[0] = firstWeight;
        // Total weight across all invokers
        int totalWeight = firstWeight;
        // Loop to fill weights, accumulate totalWeight, and update sameWeight
        for (int i = 1; i < length; i++) {
            int weight = getWeight(invokers.get(i), invocation);
            // save for later use
            weights[i] = weight;
            // Sum
            totalWeight += weight;
            if (sameWeight && weight != firstWeight) {
                sameWeight = false;
            }
        }
        // Weights differ between invokers and the total weight is greater than 0
        if (totalWeight > 0 && !sameWeight) {
            // Draw a random offset in [0, totalWeight)
            int offset = ThreadLocalRandom.current().nextInt(totalWeight);
            // Walk the weights array to find the invoker the offset falls on
            for (int i = 0; i < length; i++) {
                // Subtract this invoker's weight from the offset
                offset -= weights[i];
                if (offset < 0) {
                    // Once the offset drops below 0, return the invoker at this index
                    return invokers.get(i);
                }
            }
        }
        // If the total weight is 0, or all weights are equal, return one at random
        return invokers.get(ThreadLocalRandom.current().nextInt(length));
    }
}
```
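The offset walk at the heart of `doSelect` can be isolated into a small deterministic sketch. The helper below factors out the random draw so the offset-to-index mapping is checkable; the class and method names are mine, and the weights [5, 10, 15] are the example values this article uses later:

```java
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of the offset walk in RandomLoadBalance#doSelect above.
public class WeightedRandomDemo {

    // Given an offset in [0, totalWeight), subtract each weight in turn;
    // the invoker whose weight drives the offset below zero is the one selected.
    static int selectByOffset(int[] weights, int offset) {
        for (int i = 0; i < weights.length; i++) {
            offset -= weights[i];
            if (offset < 0) {
                return i;
            }
        }
        return weights.length - 1; // unreachable when offset < totalWeight
    }

    public static void main(String[] args) {
        int[] weights = {5, 10, 15};
        // Offsets 0..4 pick index 0, 5..14 pick index 1, 15..29 pick index 2,
        // so selection probability is proportional to weight: 5/30, 10/30, 15/30.
        System.out.println(selectByOffset(weights, 4));  // 0
        System.out.println(selectByOffset(weights, 5));  // 1
        System.out.println(selectByOffset(weights, 29)); // 2

        // Real usage draws the offset at random, as Dubbo does:
        int total = 5 + 10 + 15;
        int picked = selectByOffset(weights, ThreadLocalRandom.current().nextInt(total));
        System.out.println("picked index " + picked);
    }
}
```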
The final selection step of the shortest-response-first + weighted random and least-active-first + weighted random strategies is essentially identical to this weighted random logic.
Weighted Round Robin
Official Description
- Weighted round robin: requests rotate across nodes in proportion to their (reduced) weights.
- Drawback: a slow provider still accumulates requests over time.
With plain weighted round robin, a node with a large weight receives a burst of consecutive calls in a short window.
For example, suppose nodes A, B, and C have the weights {A: 3, B: 2, C: 1}.
With the most naive round-robin algorithm, the call sequence becomes: A A A B B C.
To avoid this, Dubbo adopts Nginx's smooth weighted round-robin algorithm. The process can be summarized in the table below:
| Weights before round (after adding) | Winner | Total weight | Weights after round (winner minus total) |
|---|---|---|---|
| (start) | \ | \ | A(0), B(0), C(0) |
| A(3), B(2), C(1) | A | 6 | A(-3), B(2), C(1) |
| A(0), B(4), C(2) | B | 6 | A(0), B(-2), C(2) |
| A(3), B(0), C(3) | A | 6 | A(-3), B(0), C(3) |
| A(0), B(2), C(4) | C | 6 | A(0), B(2), C(-2) |
| A(3), B(4), C(-1) | B | 6 | A(3), B(-2), C(-1) |
| A(6), B(0), C(0) | A | 6 | A(0), B(0), C(0) |
Notice that after total-weight (3 + 2 + 1 = 6) rounds, the state returns to its starting point. Traffic across nodes is smooth over the whole cycle, and even within a short window the selection probabilities match the expected distribution.
If you need weighted round robin, this algorithm is safe to use.
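The table above can be reproduced in a few lines. This is a simplified, single-threaded sketch of the smooth weighted round-robin idea, not Dubbo's thread-safe implementation; the class and method names are made up:

```java
// Runnable sketch of the smooth weighted round-robin walk in the table above,
// using the same weights {A: 3, B: 2, C: 1}.
public class SmoothWrrDemo {

    static String simulate(String[] nodes, int[] weights, int rounds) {
        int total = 0;
        for (int w : weights) total += w;
        long[] current = new long[nodes.length];
        StringBuilder sequence = new StringBuilder();
        for (int r = 0; r < rounds; r++) {
            // Each round: add every node's weight to its current value...
            int winner = 0;
            for (int i = 0; i < nodes.length; i++) {
                current[i] += weights[i];
                if (current[i] > current[winner]) winner = i;
            }
            // ...pick the node with the largest current value, then subtract the total.
            current[winner] -= total;
            sequence.append(nodes[winner]);
        }
        return sequence.toString();
    }

    public static void main(String[] args) {
        // After total-weight (6) rounds the sequence repeats and traffic is smooth.
        System.out.println(simulate(new String[]{"A", "B", "C"}, new int[]{3, 2, 1}, 6)); // ABACBA
    }
}
```

Compare the output "ABACBA" with the naive "AAABBC": the same per-node call counts, but interleaved.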
Source
```java
public class RoundRobinLoadBalance extends AbstractLoadBalance {

    public static final String NAME = "roundrobin";

    private static final int RECYCLE_PERIOD = 60000;

    protected static class WeightedRoundRobin {
        // The provider's configured weight
        private int weight;
        // Current weight
        private AtomicLong current = new AtomicLong(0);
        // Time of the last update
        private long lastUpdate;

        public int getWeight() {
            return weight;
        }

        public void setWeight(int weight) {
            this.weight = weight;
            // Initial value is 0
            current.set(0);
        }

        public long increaseCurrent() {
            // Atomic operation: current = current + weight
            return current.addAndGet(weight);
        }

        public void sel(int total) {
            // Atomic operation: current = current - total
            current.addAndGet(-1 * total);
        }

        public long getLastUpdate() {
            return lastUpdate;
        }

        public void setLastUpdate(long lastUpdate) {
            this.lastUpdate = lastUpdate;
        }
    }

    private ConcurrentMap<String, ConcurrentMap<String, WeightedRoundRobin>> methodWeightMap = new ConcurrentHashMap<String, ConcurrentMap<String, WeightedRoundRobin>>();

    /**
     * get invoker addr list cached for specified invocation
     * <p>
     * <b>for unit test only</b>
     *
     * @param invokers
     * @param invocation
     * @return
     */
    protected <T> Collection<String> getInvokerAddrList(List<Invoker<T>> invokers, Invocation invocation) {
        String key = invokers.get(0).getUrl().getServiceKey() + "." + invocation.getMethodName();
        Map<String, WeightedRoundRobin> map = methodWeightMap.get(key);
        if (map != null) {
            return map.keySet();
        }
        return null;
    }

    @Override
    protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        // Get the url-to-WeightedRoundRobin map for this method; create one if absent
        String key = invokers.get(0).getUrl().getServiceKey() + "." + invocation.getMethodName();
        ConcurrentMap<String, WeightedRoundRobin> map = methodWeightMap.computeIfAbsent(key, k -> new ConcurrentHashMap<>());
        int totalWeight = 0;
        long maxCurrent = Long.MIN_VALUE;
        long now = System.currentTimeMillis();
        Invoker<T> selectedInvoker = null;
        WeightedRoundRobin selectedWRR = null;
        for (Invoker<T> invoker : invokers) {
            String identifyString = invoker.getUrl().toIdentityString();
            int weight = getWeight(invoker, invocation);
            // Check whether this invoker already has a WeightedRoundRobin; create one if not
            WeightedRoundRobin weightedRoundRobin = map.computeIfAbsent(identifyString, k -> {
                WeightedRoundRobin wrr = new WeightedRoundRobin();
                // Set the invoker's weight
                wrr.setWeight(weight);
                return wrr;
            });
            // If the invoker's weight differs from the one saved in weightedRoundRobin,
            // the weight has changed, so update it
            if (weight != weightedRoundRobin.getWeight()) {
                // weight changed
                weightedRoundRobin.setWeight(weight);
            }
            // Atomic operation: cur = current + weight
            long cur = weightedRoundRobin.increaseCurrent();
            // Set lastUpdate to mark it as recently updated
            weightedRoundRobin.setLastUpdate(now);
            if (cur > maxCurrent) {
                maxCurrent = cur;
                // Assign the invoker with the largest current weight to selectedInvoker
                selectedInvoker = invoker;
                // Keep its WeightedRoundRobin in selectedWRR for use below
                selectedWRR = weightedRoundRobin;
            }
            // Accumulate the total weight
            totalWeight += weight;
        }
        // If the number of invokers differs from the number of keys in the map
        if (invokers.size() != map.size()) {
            // Remove nodes that have not been updated for a long time (60s)
            map.entrySet().removeIf(item -> now - item.getValue().getLastUpdate() > RECYCLE_PERIOD);
        }
        if (selectedInvoker != null) {
            // Atomic operation: subtract the total weight from the selected invoker's current
            selectedWRR.sel(totalWeight);
            // Return the invoker with the largest current
            return selectedInvoker;
        }
        // should not happen here
        return invokers.get(0);
    }
}
```
Shortest Response First + Weighted Random
org.apache.dubbo.rpc.cluster.loadbalance.ShortestResponseLoadBalance
The main flow is as follows:
- From all providers, select those whose calls succeed with the shortest estimated response time. More than one provider may satisfy this condition, so when several are selected their weights must be taken into account.
- If exactly one provider is selected, use it directly.
- If there are several and their weights differ, fall back to the weighted random logic.
- If there are several with equal weights, pick one at random.
The load balancing implementation is as follows:
```java
protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
    // Number of providers
    int length = invokers.size();
    // Estimated shortest response time across all providers (a temporary variable
    // that tracks the current shortest response time during the loop)
    long shortestResponse = Long.MAX_VALUE;
    // Number of providers sharing the shortest response time, initialized to 0
    int shortestCount = 0;
    // Indexes of the providers sharing the shortest response time
    int[] shortestIndexes = new int[length];
    // Weight of every provider
    int[] weights = new int[length];
    // Total weight of the providers sharing the shortest response time
    int totalWeight = 0;
    // Weight of the first provider with the shortest response time
    int firstWeight = 0;
    // Whether all qualifying providers have the same weight
    boolean sameWeight = true;

    // Filter out all the shortest response invokers
    for (int i = 0; i < length; i++) {
        Invoker<T> invoker = invokers.get(i);
        RpcStatus rpcStatus = RpcStatus.getStatus(invoker.getUrl(), invocation.getMethodName());
        // Average elapsed time of successful calls
        // getSucceededAverageElapsed(): total elapsed time of successful calls / number of successful calls
        long succeededAverageElapsed = rpcStatus.getSucceededAverageElapsed();
        int active = rpcStatus.getActive();
        long estimateResponse = succeededAverageElapsed * active;
        // Weight after warm-up
        int afterWarmup = getWeight(invoker, invocation);
        weights[i] = afterWarmup;
        // Same as LeastActiveLoadBalance
        if (estimateResponse < shortestResponse) {
            // A provider with a shorter response time appeared: record the shorter
            // time and this provider's index
            shortestResponse = estimateResponse;
            shortestCount = 1;
            shortestIndexes[0] = i;
            totalWeight = afterWarmup;
            firstWeight = afterWarmup;
            sameWeight = true;
        } else if (estimateResponse == shortestResponse) {
            // A provider with the same response time appeared: record its index in
            // shortestIndexes, accumulate the total weight, and check the weights
            shortestIndexes[shortestCount++] = i;
            totalWeight += afterWarmup;
            if (sameWeight && i > 0
                    && afterWarmup != firstWeight) {
                sameWeight = false;
            }
        }
    }
    if (shortestCount == 1) {
        // Only one provider remains after filtering: return it directly
        return invokers.get(shortestIndexes[0]);
    }
    // Weighted random load balancing
    if (!sameWeight && totalWeight > 0) {
        int offsetWeight = ThreadLocalRandom.current().nextInt(totalWeight);
        for (int i = 0; i < shortestCount; i++) {
            int shortestIndex = shortestIndexes[i];
            offsetWeight -= weights[shortestIndex];
            if (offsetWeight < 0) {
                return invokers.get(shortestIndex);
            }
        }
    }
    // Several qualifying providers with equal weights: pick one at random
    return invokers.get(shortestIndexes[ThreadLocalRandom.current().nextInt(shortestCount)]);
}
```
Least Active First + Weighted Random
org.apache.dubbo.rpc.cluster.loadbalance.LeastActiveLoadBalance
The code is as follows:
```java
public class LeastActiveLoadBalance extends AbstractLoadBalance {

    public static final String NAME = "leastactive";

    @Override
    protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        // Number of providers (invokers); in our setup there are 3
        int length = invokers.size();
        // Initialize the least active count (leastActive)
        // An active count is always >= 0, so starting at -1 guarantees leastActive
        // is replaced on the first comparison
        int leastActive = -1;
        // Number of invokers sharing the least active count
        int leastCount = 0;
        // leastIndexes records the indexes of the invokers sharing the least active count
        int[] leastIndexes = new int[length];
        // weights records each invoker's weight; in our setup they are [5, 10, 15]
        int[] weights = new int[length];
        // Sum of the qualifying invokers' weights; here 5 + 10 + 15
        int totalWeight = 0;
        // Weight of the first invoker with the least active count.
        // In the loop below it is compared against the other invokers that share
        // the same least active count, to decide whether all of them have the same weight.
        // Example: servers A and B are both configured with weight 200 and have equal
        // active counts, so one of them is picked at random
        int firstWeight = 0;
        // Assume, for now, that every invoker has the same weight
        boolean sameWeight = true;

        // Filter out all the least active invokers
        for (int i = 0; i < length; i++) {
            // Iterate over each invoker
            Invoker<T> invoker = invokers.get(i);
            // Read this invoker's active count
            int active = RpcStatus.getStatus(invoker.getUrl(), invocation.getMethodName()).getActive();
            // Get this invoker's weight
            int afterWarmup = getWeight(invoker, invocation);
            // Store the weight at the matching index in weights
            weights[i] = afterWarmup;
            // On the first iteration leastActive goes from -1 to this invoker's
            // active count; on later iterations, a smaller active count replaces it
            // and leastCount, leastIndexes, totalWeight, firstWeight, and sameWeight
            // all start over
            if (leastActive == -1 || active < leastActive) {
                // Record the smaller active count
                leastActive = active;
                // Reset the count of least-active invokers to 1
                leastCount = 1;
                // Record this invoker's index in leastIndexes
                leastIndexes[0] = i;
                // Reset totalWeight
                totalWeight = afterWarmup;
                // Record the weight of the first least active invoker
                firstWeight = afterWarmup;
                // Each invoker has the same weight (only one invoker here)
                sameWeight = true;
                // If current invoker's active value equals leastActive, then accumulate
            } else if (active == leastActive) {
                // This invoker's active count equals leastActive: record its index
                // in leastIndexes and increment leastCount
                leastIndexes[leastCount++] = i;
                // Update the total weight
                totalWeight += afterWarmup;
                // Check whether this invoker's weight equals the previously recorded one
                if (sameWeight && afterWarmup != firstWeight) {
                    sameWeight = false;
                }
            }
        }
        // Exactly one least-active invoker: return the first entry of leastIndexes
        if (leastCount == 1) {
            // If we got exactly one invoker having the least active value, return this invoker directly.
            return invokers.get(leastIndexes[0]);
        }
        // Reaching here means leastCount > 1
        // Weighted random load balancing
        if (!sameWeight && totalWeight > 0) {
            int offsetWeight = ThreadLocalRandom.current().nextInt(totalWeight);
            for (int i = 0; i < leastCount; i++) {
                int leastIndex = leastIndexes[i];
                offsetWeight -= weights[leastIndex];
                if (offsetWeight < 0) {
                    return invokers.get(leastIndex);
                }
            }
        }
        // Two or more invokers with equal weights, or a total weight of 0: return one at random
        return invokers.get(leastIndexes[ThreadLocalRandom.current().nextInt(leastCount)]);
    }
}
```
To simulate the least active count, first give the three providers different weights: provider01, provider02, and provider03 are set to 5, 10, and 15 respectively. To make sure each provider accumulates some in-flight calls, every provider interface sleeps for 10 minutes before responding, and the interface timeout is raised accordingly. provider03's code is shown as an example:
```java
/**
 * Description:
 *
 * @param:
 * @return:
 * @auther: Zywoo Lee
 * @date: 2021/11/20 3:02 PM
 */
@DubboService(weight = 15, timeout = 700000, loadbalance = "leastactive")
public class DemoServiceImpl implements DemoService {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Value("${dubbo.application.name}")
    private String serviceName;

    @Override
    public String sayHello(String name) {
        System.out.println("The weight-15 server received a request: " + name);
        try {
            TimeUnit.MINUTES.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return String.format("[%s] : Hello, %s", serviceName, name);
    }
}
```
Then the consumer is reworked to send 21 requests in total: the first 20 are spread across the three providers, and the 21st is stepped through in the debugger. The code is as follows:
```java
/**
 * @Auther: Zywoo Lee
 * @Date: 2021/11/20 15:35
 * @Description:
 */
@RestController
@RequestMapping("/test")
@Api(tags = "Dubbo service test controller")
public class DemoController {

    @DubboReference(loadbalance = "leastactive", filter = "activelimit")
    private DemoService demoService;

    @ApiOperation(value = "Service invocation test", notes = "")
    @GetMapping("sayHello")
    public void sayHello() {
        for (int i = 0; i < 20; i++) {
            int finalI = i;
            new Thread(() -> {
                demoService.sayHello("ZywooLee-" + finalI);
            }).start();
        }
        try {
            TimeUnit.SECONDS.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        demoService.sayHello("ZywooLee-debug");
    }
}
```
The case where active is 0
Note: @DubboReference must be configured with filter = "activelimit"; otherwise active will always be 0 inside the load balancer's doSelect() method, because without ActiveLimitFilter the client never increments the counter read here:
```java
int active = RpcStatus.getStatus(invoker.getUrl(), invocation.getMethodName()).getActive();
```
Test Results



Then set a breakpoint and step into the 21st request. We can see the request goes to the provider with the fewest active calls.

Consistent Hash Load Balancing
org.apache.dubbo.rpc.cluster.loadbalance.ConsistentHashLoadBalance
Because it is inconvenient to inspect the hash ring while debugging the original code, the source is copied here with some print statements added. The copy extends AbstractLoadBalance and is then registered as a custom load balancer.
```java
package com.dubbo.loadbalance;

import org.apache.dubbo.common.URL;
import org.apache.dubbo.rpc.Invocation;
import org.apache.dubbo.rpc.Invoker;
import org.apache.dubbo.rpc.cluster.loadbalance.AbstractLoadBalance;
import org.apache.dubbo.rpc.support.RpcUtils;

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import static org.apache.dubbo.common.constants.CommonConstants.COMMA_SPLIT_PATTERN;

/**
 * @Auther: Zywoo Lee
 * @Date: 2021/11/20 18:00
 * @Description:
 */
public class ZywooLeeConsistentHashLoadBalance extends AbstractLoadBalance {

    public static final String NAME = "consistenthash";

    /**
     * Hash nodes name
     */
    public static final String HASH_NODES = "hash.nodes";

    /**
     * Hash arguments name
     */
    public static final String HASH_ARGUMENTS = "hash.arguments";

    private final ConcurrentMap<String, ConsistentHashSelector<?>> selectors = new ConcurrentHashMap<String, ConsistentHashSelector<?>>();

    @SuppressWarnings("unchecked")
    @Override
    protected <T> Invoker<T> doSelect(List<Invoker<T>> invokers, URL url, Invocation invocation) {
        String methodName = RpcUtils.getMethodName(invocation);
        String key = invokers.get(0).getUrl().getServiceKey() + "." + methodName;
        System.out.println("Key used to look up the selector: key=" + key);
        // Get the hashCode of the invokers list
        int invokersHashCode = invokers.hashCode();
        ConsistentHashSelector<T> selector = (ConsistentHashSelector<T>) selectors.get(key);
        // If invokers is a new List object, the number of providers has changed
        // (either grew or shrank), and selector.identityHashCode != invokersHashCode holds.
        // On the very first call, selector == null holds
        if (selector == null || selector.identityHashCode != invokersHashCode) {
            System.out.println("New invokers: " + invokersHashCode + ", previous: " + (selector == null ? "null" : selector.identityHashCode));
            selectors.put(key, new ConsistentHashSelector<T>(invokers, methodName, invokersHashCode));
            selector = (ConsistentHashSelector<T>) selectors.get(key);
            System.out.println("-- hash ring built --");
            for (Map.Entry<Long, Invoker<T>> entry : selector.virtualInvokers.entrySet()) {
                System.out.println("key (hash) = " + entry.getKey() + ", value (virtual node) = " + entry.getValue());
            }
        }
        System.out.println("--- running select to choose an invoker ---");
        // Use the ConsistentHashSelector's select method to choose an invoker
        return selector.select(invocation);
    }

    private static final class ConsistentHashSelector<T> {

        // Use a TreeMap to store the invokers' virtual nodes
        private final TreeMap<Long, Invoker<T>> virtualInvokers;

        // Number of virtual nodes
        private final int replicaNumber;

        // hashCode
        private final int identityHashCode;

        // Indexes of the request arguments that take part in the hash calculation
        private final int[] argumentIndex;

        ConsistentHashSelector(List<Invoker<T>> invokers, String methodName, int identityHashCode) {
            this.virtualInvokers = new TreeMap<Long, Invoker<T>>();
            this.identityHashCode = identityHashCode;
            URL url = invokers.get(0).getUrl();
            // Even with multiple invokers, the virtual-node count configured on each
            // invoker's URL is the same. The default is 160; it can be configured on
            // the provider via parameters -> hash.nodes. I set it to 4 to ease debugging
            this.replicaNumber = url.getMethodParameter(methodName, HASH_NODES, 160);
            // Get the indexes of the arguments used in the hash calculation;
            // by default only the first argument is hashed
            String[] index = COMMA_SPLIT_PATTERN.split(url.getMethodParameter(methodName, HASH_ARGUMENTS, "0"));
            argumentIndex = new int[index.length];
            // Fill argumentIndex
            for (int i = 0; i < index.length; i++) {
                argumentIndex[i] = Integer.parseInt(index[i]);
            }
            // Iterate over the providers
            for (Invoker<T> invoker : invokers) {
                // Get each invoker's address
                String address = invoker.getUrl().getAddress();
                for (int i = 0; i < replicaNumber / 4; i++) {
                    // MD5 of address + i yields a 16-byte array
                    byte[] digest = md5(address + i);
                    // Hash parts of the digest 4 times to get 4 different positive longs
                    for (int h = 0; h < 4; h++) {
                        // h == 0: bit operations on digest bytes 0-3
                        // h == 1: bit operations on digest bytes 4-7
                        // h == 2: bit operations on digest bytes 8-11
                        // h == 3: bit operations on digest bytes 12-15
                        long m = hash(digest, h);
                        // Store the hash-to-invoker mapping in virtualInvokers;
                        // virtualInvokers needs efficient lookups, hence the TreeMap
                        virtualInvokers.put(m, invoker);
                    }
                }
            }
        }

        public Invoker<T> select(Invocation invocation) {
            String key = toKey(invocation.getArguments());
            byte[] digest = md5(key);
            // Hash the first four bytes of the digest, then pass the hash to
            // selectForKey() to find a suitable invoker
            long hash = hash(digest, 0);
            System.out.println("Key taking part in the hash calculation: " + key + ", computed hash=" + hash);
            return selectForKey(hash);
        }

        // Turn the arguments selected by argumentIndex into a key
        private String toKey(Object[] args) {
            StringBuilder buf = new StringBuilder();
            for (int i : argumentIndex) {
                if (i >= 0 && i < args.length) {
                    buf.append(args[i]);
                }
            }
            return buf.toString();
        }

        private Invoker<T> selectForKey(long hash) {
            // Find the first node in the TreeMap whose key is greater than or equal to the hash
            Map.Entry<Long, Invoker<T>> entry = virtualInvokers.ceilingEntry(hash);
            // If the hash is past the largest position on the ring, entry is null
            if (entry == null) {
                // Use the TreeMap's first entry instead
                entry = virtualInvokers.firstEntry();
            }
            System.out.println("Computed hash=" + hash + ", selected invoker: " + entry.getValue());
            return entry.getValue();
        }

        private long hash(byte[] digest, int number) {
            return (((long) (digest[3 + number * 4] & 0xFF) << 24)
                    | ((long) (digest[2 + number * 4] & 0xFF) << 16)
                    | ((long) (digest[1 + number * 4] & 0xFF) << 8)
                    | (digest[number * 4] & 0xFF))
                    & 0xFFFFFFFFL;
        }

        private byte[] md5(String value) {
            MessageDigest md5;
            try {
                md5 = MessageDigest.getInstance("MD5");
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e.getMessage(), e);
            }
            md5.reset();
            byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
            md5.update(bytes);
            return md5.digest();
        }
    }
}
```

Note: the original copy imported `com.alibaba.dubbo.rpc.support.RpcUtils`, which does not compile against the `org.apache.dubbo` types used everywhere else; the import is corrected to `org.apache.dubbo.rpc.support.RpcUtils` above.
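The core ring lookup in selectForKey boils down to TreeMap.ceilingEntry plus a wrap-around to firstEntry. A minimal sketch, with the hash positions and provider names invented purely for illustration:

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch of the hash-ring lookup in ConsistentHashSelector#selectForKey.
public class HashRingDemo {

    static String selectForKey(TreeMap<Long, String> ring, long hash) {
        // Find the first virtual node at or after the key's hash position...
        Map.Entry<Long, String> entry = ring.ceilingEntry(hash);
        if (entry == null) {
            // ...wrapping around to the start of the ring if we passed the end.
            entry = ring.firstEntry();
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        // A tiny ring: each provider owns some virtual-node positions.
        TreeMap<Long, String> ring = new TreeMap<>();
        ring.put(100L, "provider01");
        ring.put(200L, "provider02");
        ring.put(300L, "provider01");
        ring.put(400L, "provider03");

        System.out.println(selectForKey(ring, 150L)); // provider02
        System.out.println(selectForKey(ring, 400L)); // provider03 (exact match)
        System.out.println(selectForKey(ring, 999L)); // provider01 (wrap-around)
    }
}
```

Because the ring walk only depends on the key's hash, the same argument keeps landing on the same provider as long as the ring is unchanged, which is exactly the property the table at the top attributes to ConsistentHashLoadBalance.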
Test Results
The provider code is as follows:
If parameters is not specified, the default is 160 virtual nodes, as seen in the source above.
```java
@DubboService(timeout = 700000, loadbalance = "zywooLeeConsistentHash", parameters = {"hash.nodes", "4"})
public class DemoServiceImpl implements DemoService {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Value("${dubbo.application.name}")
    private String serviceName;

    @Override
    public String sayHello(String name) {
        return String.format("[%s] : Hello, %s", serviceName, name);
    }
}
```
The consumer code is as follows:
```java
/**
 * @Auther: Zywoo Lee
 * @Date: 2021/11/20 15:35
 * @Description:
 */
@RestController
@RequestMapping("/test")
@Api(tags = "Dubbo service test controller")
public class DemoController {

    @DubboReference(loadbalance = "zywooLeeConsistentHash", filter = "activelimit")
    private DemoService demoService;

    @ApiOperation(value = "Service invocation test", notes = "")
    @GetMapping("sayHello")
    public void sayHello() {
        String name = demoService.sayHello("ZywooLee-debug");
        System.out.println(name);
    }
}
```
First start two providers; calling the method produces the following console output:

With the number of providers unchanged, on the second call selector is no longer null and invokers has not changed, so selector.identityHashCode != invokersHashCode is false and execution goes straight into selector.select(invocation).

The second call's console output is as follows:

Then start a third provider. invokers has now changed, so on the next service request the console prints the following:

