
一、Traditional transfer learning methods

1.1 SVM (baseline)

  function [acc,y_pred,time_pass] = SVM(Xs,Ys,Xt,Yt)
      % Baseline: train a linear SVM on the source domain (Xs,Ys) and test on the target domain (Xt,Yt).
      Xs = double(Xs);
      Xt = double(Xt);   % fixed: the original read double(Yt)
      time_start = clock();
      [acc,y_pred] = LinAccuracy(Xs,Xt,Ys,Yt);
      time_end = clock();
      time_pass = etime(time_end,time_start);
  end

  function [acc,predicted_label] = LinAccuracy(trainset,testset,trainlbl,testlbl)
      % svmpredict comes from LIBSVM's MATLAB interface
      model = trainSVM_Model(trainset,trainlbl);
      [predicted_label, accuracy, decision_values] = svmpredict(testlbl, testset, model);
      acc = accuracy(1,1);   % classification accuracy in percent
  end

  function svmmodel = trainSVM_Model(trainset,trainlbl)
      % Pick C by 2-fold cross-validation on the training (source) data, then retrain.
      C = [0.001 0.01 0.1 1.0 10 100];
      parfor i = 1:length(C)
          % with '-v 2', libsvmtrain returns the cross-validation accuracy
          cv_acc(i) = libsvmtrain(double(trainlbl), sparse(double(trainset)), sprintf('-c %g -q -v 2', C(i)));
      end
      [~, indx] = max(cv_acc);
      svmmodel = libsvmtrain(double(trainlbl), sparse(double(trainset)), sprintf('-c %g -q', C(indx)));
  end
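
For comparison, the same source-train / target-test baseline fits in a few lines of Python; a minimal scikit-learn sketch (an analogue for illustration, not part of the original toolbox):

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import LinearSVC

    def svm_baseline(Xs, Ys, Xt, Yt):
        # cross-validate C on the source domain (as '-v 2' does above),
        # then report accuracy on the target domain
        grid = GridSearchCV(LinearSVC(), {'C': [0.001, 0.01, 0.1, 1.0, 10, 100]}, cv=2)
        grid.fit(Xs, np.ravel(Ys))
        return grid.score(Xt, np.ravel(Yt))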

1.2 TCA (Transfer Component Analysis, TNN-11) [1]

This part covers an implementation of TCA in both Python and MATLAB. Note: the core of TCA is a generalized eigendecomposition problem. In MATLAB it can be solved with the eigs() function; in Python, calling scipy.linalg.eig() also works. The two differ somewhat, so the final results may differ slightly as well. The Python file below can be used directly, but the MATLAB file contains only TCA's core function; to use the MATLAB code, study BDA first and set the parameters accordingly.
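
For intuition, here is a minimal NumPy/SciPy sketch of that generalized eigenproblem, assuming a linear kernel (a rough illustration, not the TCA.py referenced below):

    import numpy as np
    import scipy.linalg

    def tca_sketch(Xs, Xt, dim=30, lamb=1.0):
        # Xs: (ns, d) source features, Xt: (nt, d) target features
        X = np.vstack((Xs, Xt)).T                    # d x n, one column per sample
        X = X / np.linalg.norm(X, axis=0)            # unit-normalize each sample
        n = X.shape[1]
        ns, nt = len(Xs), len(Xt)
        e = np.vstack((np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt))
        M = e @ e.T                                  # MMD coefficient matrix
        M = M / np.linalg.norm(M, 'fro')
        H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        K = X.T @ X                                  # linear kernel
        # generalized eigenproblem: (K M K + lamb*I) a = mu (K H K) a
        w, V = scipy.linalg.eig(K @ M @ K + lamb * np.eye(n), K @ H @ K)
        ind = np.argsort(w.real)                     # keep the smallest eigenvalues
        A = V[:, ind[:dim]].real
        Z = (A.T @ K).T                              # (n, dim) embedded features
        return Z[:ns], Z[ns:]                        # transformed source / target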

1.2.1 MATLAB

TCA.m

1.2.2 Python

TCA.py

1.2.3 Reference

Pan S J, Tsang I W, Kwok J T, et al. Domain adaptation via transfer component analysis[J]. IEEE Transactions on Neural Networks, 2011, 22(2): 199-210.

1.3 KMM (Kernel Mean Matching, NIPS-06) [67]

Python version: KMM.py
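
KMM estimates importance weights for the source samples by matching the weighted source mean to the target mean in an RKHS, which reduces to a quadratic program. A minimal sketch using cvxopt and an RBF kernel (the solver choice and the parameters B and gamma are illustrative assumptions, not necessarily how KMM.py is organized):

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from cvxopt import matrix, solvers

    def kmm_sketch(Xs, Xt, B=10.0, gamma=1.0):
        # weight source samples so their kernel mean matches the target's
        ns, nt = len(Xs), len(Xt)
        eps = B / np.sqrt(ns)
        K = rbf_kernel(Xs, Xs, gamma=gamma)                        # ns x ns
        kappa = (ns / nt) * rbf_kernel(Xs, Xt, gamma=gamma).sum(axis=1)
        # QP: min 0.5 b'Kb - kappa'b  s.t.  0 <= b <= B,  |sum(b) - ns| <= ns*eps
        G = np.vstack((np.ones((1, ns)), -np.ones((1, ns)), np.eye(ns), -np.eye(ns)))
        h = np.hstack((ns * (1 + eps), ns * (eps - 1), B * np.ones(ns), np.zeros(ns)))
        sol = solvers.qp(matrix(K), matrix(-kappa), matrix(G), matrix(h))
        return np.array(sol['x']).ravel()                          # beta weights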

1.4 GFK (Geodesic Flow Kernel, CVPR-12) [2]

This part covers GFK in MATLAB and Python. In MATLAB only the GFK.m file is needed; note that getGFKDim.m implements the subspace disagreement measure (SDM) from the GFK paper. The Python implementation has some problems for reasons beyond our control, so we recommend mixed-language programming: call the MATLAB code from Python (see the sketch below).
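
A minimal sketch of that mixed-language approach using the MATLAB Engine API for Python; the GFK.m signature assumed here ([acc] = GFK(Xs, Ys, Xt, Yt, dim)) and the path are placeholders for illustration, so check the actual file before use:

    import numpy as np
    import matlab.engine                       # requires the MATLAB Engine API for Python

    eng = matlab.engine.start_matlab()
    eng.addpath('path/to/gfk_code', nargout=0) # folder containing GFK.m and getGFKDim.m

    # toy data; GFK.m is assumed to take (Xs, Ys, Xt, Yt, dim) and return an accuracy
    Xs, Ys = np.random.randn(100, 50), np.random.randint(1, 11, (100, 1))
    Xt, Yt = np.random.randn(80, 50), np.random.randint(1, 11, (80, 1))

    acc = eng.GFK(matlab.double(Xs.tolist()), matlab.double(Ys.tolist()),
                  matlab.double(Xt.tolist()), matlab.double(Yt.tolist()),
                  20.0, nargout=1)
    eng.quit()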

1.4.1 MATLAB

GFK.m
getGFKDim.m

1.4.2 Reference

Gong B, Shi Y, Sha F, et al. Geodesic flow kernel for unsupervised domain adaptation[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012: 2066-2073.

1.5 DA-NBNN (Frustratingly Easy NBNN Domain Adaptation, ICCV-13) [39]

DANBNN_demo.zip
This part explains how to use the DA-NBNN algorithm proposed in the ICCV 2013 paper "Frustratingly Easy NBNN Domain Adaptation". The algorithm requires two tools:

  1. FLANN (Fast Library for Approximate Nearest Neighbors). Download the code from this link, then copy all files under "flann-1.8.4-src/src/matlab" into the "flann" folder. flann-1.8.4-src.zip
  2. k-Nearest-Neighbor. The code uses the function KNearestNeighbors(dataMatrix, queryMatrix, k); download it from this link and copy it into the "functions" folder. kNearestNeighbors.zip

Run the following MATLAB script:
demo: this demo runs the Caltech->Amazon experiment described in Section 5.2 and Figure 3 of the paper, plus a run on the Amazon->Amazon domain. It reports results for BOW Nearest Neighbor, NBNN, DA-NBNN, and so on.
The code first defines the data splits for Caltech and Amazon. Step 1: DA-NBNN is initialized with the NBNN result. Step 2: 20 samples are removed from the source training set and replaced with 20 samples drawn from the target-domain training and test sets. In both cases 2 samples are taken from each of the 10 classes, 20 samples in total. The DA-NBNN method iterates step 2, but the results show that, compared with NBNN, classification accuracy already improves after step 1 alone.
The remaining folders in the archive are described below.
Office+Caltech/
This folder holds the Amazon and Caltech data files, taken from the Office+Caltech dataset. SURF features were extracted from the original Office+Caltech images using OpenSURF.
Office+Caltech/amazon_vocabulary_BOW/
An 800-visual-word BOW vocabulary obtained by running k-means on (or randomly sampling) the Amazon images; every image is then represented with respect to this vocabulary.
splits/
The .mat files for the splits described above are saved in this folder.
flann/
The FLANN files used by the demo go here; the MATLAB FLANN source files must be placed in this folder. A compiled mex file for nearest_neighbors.cpp is also provided. It was built with MATLAB R2013a on 64-bit Linux, so you may need to recompile it.
functions/
This folder contains the main functions required to run the demo:

  • select.m: selects the samples for each domain and saves them in .mat format in the /splits folder.
  • run_NN.m: runs the BOW-NN experiment within a single domain or across domains.
  • run_NBNN.m: runs NBNN within a single domain.
  • run_DA-NBNN.m: first runs NBNN across domains and then runs DA-NBNN; only the sample-selection and metric-optimization steps are considered.
  • adaptation.m
  • fn_create_dist.m
  • fn_create_metric.m
  • add.m

A sample training result of this algorithm: click to view.

1.6 JDA (Joint Distribution Adaptation, ICCV-13) [3]

This part covers an implementation of JDA in both Python and MATLAB. Note: the core of JDA is a generalized eigendecomposition problem. In MATLAB it can be solved with the eigs() function; in Python, calling scipy.linalg.eig() also works. The two differ somewhat, so the final results may differ slightly as well.
The Python file below can be used directly, but the MATLAB file contains only JDA's core function; to use the MATLAB code, study BDA first and set the parameters accordingly.
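
What JDA adds on top of TCA is a class-conditional MMD term built from source labels and target pseudo-labels. A minimal sketch of that construction (the resulting matrix is added to the marginal MMD matrix and plugged into the same generalized eigenproblem as in the TCA sketch above, with pseudo-labels refreshed each iteration):

    import numpy as np

    def jda_conditional_mmd(Ys, Yt_pseudo, ns, nt):
        # Ys: (ns,) source labels; Yt_pseudo: (nt,) target pseudo-labels
        M = np.zeros((ns + nt, ns + nt))
        for c in np.unique(Ys):
            e = np.zeros((ns + nt, 1))
            src = np.where(Ys == c)[0]
            tgt = ns + np.where(Yt_pseudo == c)[0]
            if len(src) > 0 and len(tgt) > 0:
                e[src] = 1.0 / len(src)
                e[tgt] = -1.0 / len(tgt)
            M += e @ e.T              # per-class MMD coefficient matrix
        return M / np.linalg.norm(M, 'fro')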

1.6.1 MATLAB

JDA.m

1.6.2 Python

JDA.py

1.6.3 Reference

Long M, Wang J, Ding G, et al. Transfer feature learning with joint distribution adaptation[C]//Proceedings of the IEEE international conference on computer vision. 2013: 2200-2207.

1.7 TJM (Transfer Joint Matching, CVPR-14) [4]

MyTJM.m

1.8 CORAL (CORrelation ALignment, AAAI-15) [5]

In Python, just use the CORAL.py file. In MATLAB, use CORAL.m if you want to transform features with CORAL; if you want to apply the CORAL transform and then predict with an SVM, use CORAL_SVM.m.
The Python file below can be used directly, but the MATLAB files contain only CORAL's core functions; to use the MATLAB code, study BDA first and set the parameters accordingly.
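
The CORAL transform itself is only a few lines: whiten the source covariance, then re-color it with the target covariance. A minimal NumPy/SciPy sketch (an illustration, not the referenced CORAL.py):

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    def coral_sketch(Xs, Xt):
        # align second-order statistics: whiten the source, re-color with the target
        cov_s = np.cov(Xs, rowvar=False) + np.eye(Xs.shape[1])   # regularized covariances
        cov_t = np.cov(Xt, rowvar=False) + np.eye(Xt.shape[1])
        A = fractional_matrix_power(cov_s, -0.5) @ fractional_matrix_power(cov_t, 0.5)
        return np.real(Xs @ A)       # transformed source features; Xt stays unchanged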

1.8.1 MATLAB

CORAL.m
CORAL_SVM.m

1.8.2 Python

CORAL.py

1.8.3 Author's GitHub

https://github.com/VisionLearningGroup/CORAL

1.8.4 Reference

Sun B, Feng J, Saenko K. Return of frustratingly easy domain adaptation[C]//AAAI. 2016, 6(7): 8.

1.9 JGSA (Joint Geometrical and Statistical Alignment, CVPR-17) [6]

The links on GitHub are all dead; they will be added here once updated.

1.10 TrAdaBoost (ICML-07) [8]

TransferLearningGame.py
TrAdaboost.py
If you are interested in this algorithm, you can add the author on WeChat to discuss it: address
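
For reference, a minimal sketch of the TrAdaBoost loop from the ICML-07 paper [8], using decision stumps and binary {0, 1} labels (the base learner and N are illustrative choices, not necessarily those of TrAdaboost.py):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def tradaboost_sketch(Xd, yd, Xs, ys, X_test, N=20):
        # Xd, yd: different-distribution (source) data; Xs, ys: same-distribution
        # (target) training data; yd, ys are 1-D arrays with labels in {0, 1}
        n, m = len(Xd), len(Xs)
        X, y = np.vstack((Xd, Xs)), np.hstack((yd, ys))
        w = np.ones(n + m) / (n + m)
        beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / N))
        beta_t, learners = np.zeros(N), []
        for t in range(N):
            p = w / w.sum()
            clf = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
            pred = clf.predict(X)
            # weighted error on the same-distribution part only
            err = np.clip(np.sum(p[n:] * (pred[n:] != ys)) / p[n:].sum(), 1e-3, 0.49)
            beta_t[t] = err / (1.0 - err)
            w[:n] *= beta ** (pred[:n] != yd)                        # down-weight bad source points
            w[n:] *= beta_t[t] ** (-(pred[n:] != ys).astype(float))  # up-weight bad target points
            learners.append(clf)
        # final hypothesis: weighted vote over the second half of the rounds
        half = int(np.ceil(N / 2))
        votes = sum(-np.log(beta_t[t]) * learners[t].predict(X_test) for t in range(half, N))
        return (votes >= 0.5 * sum(-np.log(beta_t[t]) for t in range(half, N))).astype(int)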

1.11 SA (Subspace Alignment, ICCV-13) [11]

1.11.1 官方网站

The algorithm has an official website with detailed tutorials and examples; it is worth a visit.

1.11.2 MATLAB

SA_SVM.m
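
Subspace Alignment itself is compact: compute PCA subspaces for the two domains and align the source basis via M = Ps' Pt. A minimal scikit-learn sketch (an illustration, not the referenced SA_SVM.m, which additionally trains an SVM on the aligned features):

    import numpy as np
    from sklearn.decomposition import PCA

    def subspace_alignment_sketch(Xs, Xt, dim=30):
        # align the source PCA basis with the target PCA basis via M = Ps' Pt
        Ps = PCA(n_components=dim).fit(Xs).components_.T    # d x dim
        Pt = PCA(n_components=dim).fit(Xt).components_.T
        Xa_s = Xs @ Ps @ (Ps.T @ Pt)    # source projected into the aligned subspace
        Xa_t = Xt @ Pt                  # target projected into its own subspace
        return Xa_s, Xa_t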

1.12 BDA (Balanced Distribution Adaptation for Transfer Learning, ICDM-17) [15]

MATLAB and Python versions are available here as well. Note: the original BDA (with the balance factor μ) has been extended into the MEDA (Manifold Embedded Distribution Alignment) method, proposed in a 2018 ACM international conference paper; click to view its code. BDA therefore now refers specifically to the W-BDA in this ICDM paper, which targets imbalanced transfer; if you still want balanced transfer, modify the code yourself.
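
For intuition, the balanced objective simply interpolates the marginal MMD matrix (as in the TCA sketch above) and the class-conditional one (as in the JDA sketch) with a balance factor μ; W-BDA additionally weights each class term by the class priors. A minimal sketch:

    import numpy as np

    def bda_mmd(M0, Mc, mu=0.5):
        # M0: marginal MMD matrix (TCA sketch); Mc: class-conditional MMD matrix
        # (JDA sketch); mu = 0 recovers TCA, mu = 1 uses only the conditional term
        M = (1 - mu) * M0 + mu * Mc
        return M / np.linalg.norm(M, 'fro')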

1.12.1 MATLAB

BDA.m
demo_BDA.m

1.12.2 Python

BDA.py

1.13 MTLF (Metric Transfer Learning, TKDE-17) [16]

This is a complete Metric Transfer Learning framework.
MTLF-master.zip

1.14 Open Set Domain Adaptation (ICCV-17) [19]

A domain-adaptation classification task. Only a link is given here rather than code; click through to the official page to learn more.

1.15 TAISL (When Unsupervised Domain Adaptation Meets Tensor Representations, ICCV-17) [21]

Again, only a link is given here rather than code; click through to the official page to learn more.

1.16 STL (Stratified Transfer Learning for Cross-domain Activity Recognition, PerCom-18) [22]

Again, only a link is given here rather than code; click through to the official page to learn more.

1.17 OTL (Online Transfer Learning, ICML-10) [31]

The official site provided by the author is likewise broken; waiting for an update.

1.18 RWA (Random Walking, arXiv, simple but powerful) [46]

Again, only a link is given here rather than code; click through to the official page to learn more.

1.19 MEDA (Manifold Embedded Distribution Alignment, ACM MM-18) [47]

Again, only a link is given here rather than code; click through to the official page to learn more.

1.20 EasyTL (Practically Easy Transfer Learning, ICME-19) [63]

Again, only links are given here rather than code: click through for the MATLAB code; there is also a Python version, click through for it.

1.21 SCA (Scatter Component Analysis, TPAMI-17) [79]

Again, only a link is given here rather than code; click through to the official page to learn more.

二、Deep transfer learning

To be updated when needed; this is a placeholder for now.

三、References

[1] Pan S J, Tsang I W, Kwok J T, et al. Domain adaptation via transfer component analysis[J]. TNN, 2011, 22(2): 199-210.
[2] Gong B, Shi Y, Sha F, et al. Geodesic flow kernel for unsupervised domain adaptation[C]//CVPR, 2012: 2066-2073.
[3] Long M, Wang J, Ding G, et al. Transfer feature learning with joint distribution adaptation[C]//ICCV. 2013: 2200-2207.
[4] Long M, Wang J, Ding G, et al. Transfer joint matching for unsupervised domain adaptation[C]//CVPR. 2014: 1410-1417.
[5] Sun B, Feng J, Saenko K. Return of Frustratingly Easy Domain Adaptation[C]//AAAI. 2016, 6(7): 8.
[6] Zhang J, Li W, Ogunbona P. Joint Geometrical and Statistical Alignment for Visual Domain Adaptation[C]//CVPR 2017.
[8] Dai W, Yang Q, Xue G R, et al. Boosting for transfer learning[C]//ICML, 2007: 193-200.
[9] Long M, Cao Y, Wang J, et al. Learning transferable features with deep adaptation networks[C]//ICML. 2015: 97-105.
[10] Long M, Wang J, Jordan M I. Deep transfer learning with joint adaptation networks[C]//ICML. 2017.
[11] Fernando B, Habrard A, Sebban M, et al. Unsupervised visual domain adaptation using subspace alignment[C]//ICCV. 2013: 2960-2967.
[12] Long M, Zhu H, Wang J, et al. Unsupervised domain adaptation with residual transfer networks[C]//NIPS. 2016.
[13] Tzeng E, Hoffman J, Saenko K, et al. Adversarial discriminative domain adaptation[J]. arXiv preprint arXiv:1702.05464, 2017.
[14] Ganin Y, Lempitsky V. Unsupervised domain adaptation by backpropagation[C]//International Conference on Machine Learning. 2015: 1180-1189.
[15] Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, and Zhiqi Shen. Balanced Distribution Adaptation for Transfer Learning. ICDM 2017.
[16] Y. Xu et al., “A Unified Framework for Metric Transfer Learning,” in IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 6, pp. 1158-1171, June 1 2017. doi: 10.1109/TKDE.2017.2669193
[17] Ganin Y, Ustinova E, Ajakan H, et al. Domain-adversarial training of neural networks[J]. Journal of Machine Learning Research, 2016, 17(59): 1-35.
[18] Haeusser P, Frerix T, Mordvintsev A, et al. Associative Domain Adaptation[C]. ICCV, 2017.
[19] Pau Panareda Busto, Juergen Gall. Open set domain adaptation. ICCV 2017.
[20] Venkateswara H, Eusebio J, Chakraborty S, et al. Deep hashing network for unsupervised domain adaptation[C]. CVPR 2017.
[21] H. Lu, L. Zhang, et al. When Unsupervised Domain Adaptation Meets Tensor Representations. ICCV 2017.
[22] J. Wang, Y. Chen, L. Hu, X. Peng, and P. Yu. Stratified Transfer Learning for Cross-domain Activity Recognition. 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom).
[23] Motiian S, Piccirilli M, Adjeroh D A, et al. Unified deep supervised domain adaptation and generalization[C]//The IEEE International Conference on Computer Vision (ICCV). 2017, 2.
[24] Long M, Cao Z, Wang J, et al. Learning Multiple Tasks with Multilinear Relationship Networks[C]//Advances in Neural Information Processing Systems. 2017: 1593-1602.
[25] Maria Carlucci F, Porzi L, Caputo B, et al. AutoDIAL: Automatic DomaIn Alignment Layers[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5067-5075.
[26] Bousmalis K, Trigeorgis G, Silberman N, et al. Domain separation networks[C]//Advances in Neural Information Processing Systems. 2016: 343-351.
[27] M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi, and W. Li. “Deep Reconstruction-Classification Networks for Unsupervised Domain Adaptation (DRCN)”, European Conference on Computer Vision (ECCV), 2016
[28] M. Ghifary, W. B. Kleijn, M. Zhang, D. Balduzzi. Domain Generalization for Object Recognition with Multi-task Autoencoders, accepted in International Conference on Computer Vision (ICCV 2015), Santiago, Chile.
[29] Aljundi R, Emonet R, Muselet D, et al. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 56-63.
[30] Rannen A, Aljundi R, Blaschko M B, et al. Encoder based lifelong learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1320-1328.
[31] Peilin Zhao and Steven C.H. Hoi. OTL: A Framework of Online Transfer Learning. ICML 2010.
[32] Pietro Morerio, Jacopo Cavazza, Vittorio Murino. Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation. ICLR 2018.
[33] Sun B, Saenko K. Deep coral: Correlation alignment for deep domain adaptation[C]//European Conference on Computer Vision. Springer, Cham, 2016: 443-450.
[34] Tolstikhin I, Bousquet O, Gelly S, et al. Wasserstein Auto-Encoders[J]. arXiv preprint arXiv:1711.01558, 2017.
[35] Saito K, Ushiku Y, Harada T. Asymmetric tri-training for unsupervised domain adaptation[J]. arXiv preprint arXiv:1702.08400, 2017.
[36] Bousmalis K, Silberman N, Dohan D, et al. Unsupervised pixel-level domain adaptation with generative adversarial networks[C]//The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017, 1(2): 7.
[37] Shen J, Qu Y, Zhang W, et al. Adversarial representation learning for domain adaptation[J]. arXiv preprint arXiv:1707.01217, 2017.
[38] Kim T, Cha M, Kim H, et al. Learning to discover cross-domain relations with generative adversarial networks[J]. arXiv preprint arXiv:1703.05192, 2017.
[39] Tommasi T, Caputo B. Frustratingly Easy NBNN Domain Adaptation[C]//ICCV. 2013: 897-904.
[40] Pei Z, Cao Z, Long M, et al. Multi-Adversarial Domain Adaptation[C] // AAAI 2018.
[41] Ghifary M, Kleijn W B, Zhang M. Domain adaptive neural networks for object recognition[C]//Pacific Rim International Conference on Artificial Intelligence. Springer, Cham, 2014: 898-904.
[42] Saito K, Watanabe K, Ushiku Y, et al. Maximum Classifier Discrepancy for Unsupervised Domain Adaptation[J]. arXiv preprint arXiv:1712.02560, 2017.
[43] Volpi R, Morerio P, Savarese S, et al. Adversarial Feature Augmentation for Unsupervised Domain Adaptation[J]. arXiv preprint arXiv:1711.08561, 2017.
[44] Zhang Y, Xiang T, Hospedales T M, et al. Deep Mutual Learning[C]. CVPR 2018.
[45] French G, Mackiewicz M, Fisher M. Self-ensembling for visual domain adaptation[C]//International Conference on Learning Representations. 2018.
[46] van Laarhoven T, Marchiori E. Unsupervised Domain Adaptation with Random Walks on Target Labelings[J]. arXiv preprint arXiv:1706.05335, 2017.
[47] Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S. Yu. Visual Domain Adaptation with Manifold Embedded Distribution Alignment. ACM Multimedia conference 2018.
[48] Zhangjie Cao, Mingsheng Long, et al. Partial Adversarial Domain Adaptation. ECCV 2018.
[49] Zhang W, Ouyang W, Li W, et al. Collaborative and Adversarial Network for Unsupervised domain adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 3801-3809.
[50] Zhang J, Ding Z, Li W, et al. Importance Weighted Adversarial Nets for Partial Domain Adaptation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8156-8164.
[51] Saito K, Yamamoto S, Ushiku Y, et al. Open Set Domain Adaptation by Backpropagation[J]. arXiv preprint arXiv:1804.10427, 2018.
[52] Shen J, Qu Y, Zhang W, et al. Wasserstein Distance Guided Representation Learning for Domain Adaptation[C]//AAAI. 2018.
[53] Chen C, Chen Z, Jiang B, et al. Joint Domain Alignment and Discriminative Feature Learning for Unsupervised Deep Domain Adaptation[J]. arXiv preprint arXiv:1808.09347, 2018.
[54] Felix R, Vijay Kumar B G, Reid I, et al. Multi-modal Cycle-consistent Generalized Zero-Shot Learning. ECCV 2018.
[55] Xie S, Zheng Z, Chen L, et al. Learning Semantic Representations for Unsupervised Domain Adaptation[C]//International Conference on Machine Learning. 2018: 5419-5428.
[56] Cao Z, Long M, Wang J, et al. Partial transfer learning with selective adversarial networks. CVPR 2018.
[57] Issam Laradji, Reza Babanezhad. M-ADDA: Unsupervised Domain Adaptation with Deep Metric Learning. ICML 2018 workshop.
[58] Saito K, Yamamoto S, Ushiku Y, et al. Open Set Domain Adaptation by Backpropagation[J]. arXiv preprint arXiv:1804.10427, 2018.
[59] Shu R, Bui H H, Narui H, et al. A DIRT-T Approach to Unsupervised Domain Adaptation[J]. arXiv preprint arXiv:1802.08735, 2018.
[60] Mingsheng Long, et al. Conditional Adversarial Domain Adaptation. NeurIPS 2018.
[61] W.Zellinger, T. Grubinger, E. Lughofer, T. Natschlaeger, and Susanne Saminger-Platz, “Central moment discrepancy (cmd) for domain-invariant representation learning,” ICLR 2017.
[62] W. Zellinger, B.A. Moser, T. Grubinger, E. Lughofer, T. Natschlaeger, and S. Saminger-Platz, “Robust unsupervised domain adaptation for neural networks via moment alignment,” Information Sciences (in press), 2019, https://doi.org/10.1016/j.ins.2019.01.025, arXiv preprint arxiv:1711.06114
[63] Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang. Easy Transfer Learning By Exploiting Intra-domain Structures. IEEE International Conference on Multimedia & Expo (ICME) 2019.
[64] Saito K, Yamamoto S, Ushiku Y, et al. Open set domain adaptation by backpropagation[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 153-168.
[65] Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu. Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning. IJCNN 2019.
[66] Shikun Liu, Edward Johns, and Andrew Davison. End-to-End Multi-Task Learning with Attention. CVPR 2019.
[67] Huang J, Gretton A, Borgwardt K, et al. Correcting sample selection bias by unlabeled data[C]//Advances in neural information processing systems. 2007: 601-608.
[68] Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin. Learning what and where to transfer. ICML 2019.
[69] Sebastian Ruder, Barbara Plank (2017). Learning to select data for transfer learning with Bayesian Optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark.
[70] Liu M, Song Y, Zou H, et al. Reinforced Training Data Selection for Domain Adaptation[C]//Proceedings of the 57th Conference of the Association for Computational Linguistics. 2019: 1957-1968.
[71] Saito K, Kim D, Sclaroff S, et al. Semi-supervised Domain Adaptation via Minimax Entropy. ICCV 2019.
[72] Zhu Y, Zhuang F, Wang J, et al. Multi-representation adaptation network for cross-domain image classification[J]. Neural Networks, 2019.
[73] Min-Hung Chen, Zsolt Kira, Ghassan AlRegib, et al. Temporal Attentive Alignment for Large-Scale Video Domain Adaptation. ICCV 2019.
[74] Zhao H, Zhang S, Wu G, et al. Multiple source domain adaptation with adversarial learning. NeurIPS 2018.
[75] Jie Song, et al. Deep model transferability from attribution maps. NeurIPS 2019.
[76] Ilse, M., Tomczak, J. M., Louizos, C., & Welling, M. (2019). DIVA: Domain Invariant Variational Autoencoders. arXiv preprint arXiv:1905.10427
[77] Lin K., et al. Cross-Domain Complementary Learning with Synthetic Data for Multi-Person Part Segmentation[J]. arXiv preprint arXiv:1907.05193, ICCV demo, 2019.
[78] Lee S., Kim D., et al. Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation. ICCV 2019.
[79] Ghifary M, Balduzzi D, Kleijn W B, et al. Scatter component analysis: A unified framework for domain adaptation and domain generalization[J]. IEEE transactions on pattern analysis and machine intelligence, 2016, 39(7): 1414-1430.
[80] Chaohui Yu, Jindong Wang, Yiqiang Chen, Meiyu Huang. Transfer learning with dynamic adversarial adaptation network. ICDM 2019.
[81] Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang. Domain Adaptive Ensemble Learning. ArXiv preprint, 2020.