https://github.com/anvoynov/GANLatentDiscovery
https://paperswithcode.com/paper/unsupervised-discovery-of-interpretable
Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
The latent spaces of GAN models often have semantically meaningful directions. Moving in these directions corresponds to human-interpretable image transformations, such as zooming or recoloring, enabling a more controllable generation process. However, the discovery of such directions is currently performed in a supervised manner, requiring human labels, pretrained models, or some form of self-supervision. These requirements severely restrict the range of directions existing approaches can discover.
In this paper, we introduce an unsupervised method to identify interpretable directions in the latent space of a pretrained GAN model. By a simple model-agnostic procedure, we find directions corresponding to sensible semantic manipulations without any form of (self-)supervision. Furthermore, we reveal several non-trivial findings, which would be difficult to obtain by existing methods, e.g., a direction corresponding to background removal. As an immediate practical benefit of our work, we show how to exploit this finding to achieve competitive performance for weakly-supervised saliency detection. The implementation of our method is available online.
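The core operation the abstract describes is moving a latent code along a discovered direction to apply a semantic edit. A minimal sketch of that latent shift, independent of any particular GAN (the 512-dimensional latent size and the function name `shift_latent` are assumptions for illustration, not part of the paper's released code):

```python
import numpy as np

def shift_latent(z, direction, epsilon):
    """Move latent code z along a unit-normalized direction by magnitude epsilon.

    For an interpretable direction, decoding the shifted code with the
    pretrained generator yields a semantic edit of the original image
    (e.g., zooming, recoloring, or background removal).
    """
    d = direction / np.linalg.norm(direction)
    return z + epsilon * d

# Example: a random latent code and a random candidate direction.
rng = np.random.default_rng(0)
z = rng.standard_normal(512)
d = rng.standard_normal(512)

z_edited = shift_latent(z, d, epsilon=3.0)
# Feeding z_edited to the pretrained generator G (not shown) would
# produce the transformed image: G(z_edited).
```

Because the direction is normalized, `epsilon` directly controls the edit magnitude; the discovery method's job is to find directions for which this shift produces a consistent, human-interpretable change.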
