- Topic: Clustered Federated Learning
- Paper: An Efficient Framework for Clustered Federated Learning
- Link: https://arxiv.org/pdf/2006.04088v2.pdf
- Presenter: 唐共勇

1. Summary

[Required] Grammarly is recommended for checking grammar; try to follow the writing style of the paper's introduction. The summary should answer:

  1. What problem does this paper solve?
  2. What method (at a high level) do the authors use to solve it?
  3. Which concepts do you need to study further to deepen your understanding of this paper?

This paper adopts iterative clustering with parameter sharing within each cluster. The framework requires the number of clusters $k$ to be specified in advance. The nodes in each cluster share a set of averaged parameters; in every round, each node re-estimates the cluster it belongs to by picking the cluster whose averaged parameters minimize its local loss. Every round also recomputes the averaged parameters of each cluster, and the $j$-th cluster's parameters are broadcast to all of its nodes. Each cluster thus constitutes a learning task that is responsible for learning the personalized model of that cluster.

2. Your thoughts on the paper

Write down your own thoughts on the paper, e.g., its strengths and weaknesses, and your takeaways.

(1) The $i$-th client executes:

  • Receive the cluster parameters $\theta_1^{(t)}, \dots, \theta_k^{(t)}$ from the server.
  • Estimate the cluster it belongs to: $\hat{j} = \arg\min_{j \in [k]} F_i(\theta_j^{(t)})$, where $F_i$ is the client's local empirical loss.

  • Run $\tau$ local epochs of SGD on the chosen cluster's parameters:

$\theta \leftarrow \theta - \gamma \nabla F_i(\theta; Z')$
(here the local data $Z_i$ is split into mini-batches $Z'$)

  • Take the resulting parameters as $\tilde{\theta}_i$ and send them, together with the client's cluster estimate $\hat{j}$, to the server.
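The client-side steps above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`client_update`, `loss_fn`, `grad_fn`) and the hyperparameter defaults are all assumptions made for the example.

```python
import numpy as np

def client_update(cluster_params, local_X, local_y, loss_fn, grad_fn,
                  lr=0.05, epochs=1, batch_size=32, rng=None):
    """One IFCA-style client round: estimate the cluster, then run local SGD."""
    rng = rng or np.random.default_rng(0)
    # Step 1: pick the cluster whose parameters minimize the local empirical loss.
    j_hat = int(np.argmin([loss_fn(theta, local_X, local_y)
                           for theta in cluster_params]))
    theta = cluster_params[j_hat].copy()
    n = len(local_X)
    # Step 2: run `epochs` local passes of mini-batch SGD on that cluster's parameters.
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            theta -= lr * grad_fn(theta, local_X[idx], local_y[idx])
    # Step 3: return the updated parameters together with the cluster estimate.
    return j_hat, theta
```

With a linear-regression loss, a client whose data was generated near one cluster's parameters will select that cluster and then reduce its local loss further via SGD.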

(2) The server executes:

  • Receive $\tilde{\theta}_i$ and the cluster estimates from the $m$ participating clients.
  • Update each cluster's parameters with the average of its nodes' parameters:

$\theta_j^{(t+1)} = \frac{1}{|S_j|} \sum_{i \in S_j} \tilde{\theta}_i$, where $S_j$ is the set of clients whose estimate is $\hat{j} = j$.
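The server-side averaging step can be sketched as below; a minimal NumPy version, with the function name and the convention for empty clusters chosen for illustration (the paper's implementation may handle empty clusters differently).

```python
import numpy as np

def server_update(cluster_params, client_updates):
    """Average the returned parameters per cluster.

    cluster_params: list of current parameter vectors, one per cluster.
    client_updates: list of (j_hat, theta) pairs sent back by the clients.
    """
    new_params = []
    for j, theta_j in enumerate(cluster_params):
        members = [theta for j_hat, theta in client_updates if j_hat == j]
        # A cluster that received no clients this round keeps its previous parameters.
        new_params.append(np.mean(members, axis=0) if members else theta_j)
    return new_params
```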

3. Other

[Optional]

In its concrete implementation, the algorithm borrows the weight-sharing mechanism from multi-task learning and allows different clusters (tasks) to share part of the parameters. Specifically, when training a neural network model, a shared representation is first learned from all clients' training data, and the clustering algorithm is then run to learn the network's last layer (i.e., the multi-task layer) separately for each cluster.
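The weight-sharing structure described above can be sketched as a model with one shared representation layer and one cluster-specific head per cluster. All class, attribute, and dimension names here are illustrative assumptions, not the paper's code.

```python
import numpy as np

class SharedRepModel:
    """Shared representation layer + one last (multi-task) layer per cluster."""

    def __init__(self, in_dim, hidden_dim, num_clusters, seed=0):
        rng = np.random.default_rng(seed)
        # Shared across all clusters: trained on every client's data.
        self.W_shared = rng.normal(scale=0.1, size=(in_dim, hidden_dim))
        # Cluster-specific: one output head learned per cluster by the clustering step.
        self.heads = [rng.normal(scale=0.1, size=(hidden_dim,))
                      for _ in range(num_clusters)]

    def features(self, X):
        # The shared representation (a single ReLU layer in this sketch).
        return np.maximum(X @ self.W_shared, 0.0)

    def predict(self, X, cluster_id):
        # Only the last layer differs between clusters.
        return self.features(X) @ self.heads[cluster_id]
```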