
Paper reading notes: StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery


Core idea: combine CLIP with StyleGAN.

一. Introduction and related work

1. CLIP's pretraining task is: given an image, find the matching text among 32,768 randomly sampled text snippets. To solve this task, CLIP must learn to recognize a wide variety of visual concepts in images and associate them with natural-language descriptions; as a result, CLIP can be applied to almost any visual classification task. For example, if a dataset's task is to distinguish cats from dogs, CLIP predicts whether the image better matches the caption "a photo of a dog" or "a photo of a cat".
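As an illustration, this cat-vs-dog example can be written almost verbatim with the openai/CLIP package; a minimal sketch, where "pet.jpg" is a placeholder path:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode the image and the two candidate captions.
image = preprocess(Image.open("pet.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)

with torch.no_grad():
    # Similarity logits between the image and each caption.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # higher probability for the caption that matches the image
```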

2. Text prompt: the natural-language description that drives the edit.

3. Related work on text-guided image manipulation:

Some methods [10, 31, 27] use a GAN-based encoder-decoder architecture to disentangle the semantics of both input images and text descriptions. ManiGAN [22] introduces a novel text-image combination module, which produces high-quality images.

A concurrent work to ours, TediGAN [51], also uses StyleGAN for text-guided image generation and manipulation.

[10] H. Dong, Simiao Yu, Chao Wu, and Y. Guo. Semantic image synthesis via adversarial learning. In Proc. ICCV, pages 5707–5715, 2017.

[27] Yahui Liu, Marco De Nadai, Deng Cai, Huayang Li, Xavier Alameda-Pineda, N. Sebe, and Bruno Lepri. Describe what to change: A text-guided unsupervised image-to-image translation approach. In Proceedings of the 28th ACM International Conference on Multimedia, 2020.

[31] Seonghyeon Nam, Yunji Kim, and S. Kim. Text-adaptive generative adversarial networks: Manipulating images with natural language. In NeurIPS, 2018.

4. While most works perform image manipulations in the W or W+ spaces, Wu et al. [50] proposed to use the StyleSpace S and showed that it is better disentangled than W and W+.

Our latent optimizer and mapper work in the W+ space, while the input-agnostic directions that we detect are in S.
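As a toy illustration of why S is convenient: an input-agnostic edit there is just a fixed offset added to any image's style code. The names below (`s`, `delta_s`, `alpha`, and the dict-of-layers layout) are hypothetical; how the directions are actually found is a separate question.

```python
# Toy sketch: applying an input-agnostic direction in StyleSpace S.
# `s` and `delta_s` are per-layer style vectors keyed by layer name
# (hypothetical layout); `alpha` controls the manipulation strength.
def apply_global_direction(s, delta_s, alpha=3.0):
    return {layer: s[layer] + alpha * delta_s[layer] for layer in s}
```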

二. Contributions

In this work we explore three ways for text-driven image manipulation:

1. We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt.

2. We describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation.

3. Finally, we present a method for mapping a text prompt to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation.

In short:

Latent Optimization: uses CLIP as a loss network. This is the most versatile of the three methods, but editing a single image takes several minutes (see the sketch after this list).
Latent Mapper: trained for a fixed text prompt; starting from the image to be edited, the mapper infers how the latent code should change to match the prompt, and that change is applied to produce the edit.
Global Direction: similar to method 2, but maps the text prompt to a direction in StyleGAN's style space S, which can then be applied to any image.
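To make method 1 concrete, the paper's optimization objective is D_CLIP(G(w), t) + λ_L2 ‖w − w_s‖₂ + λ_ID L_ID(w), where G is a pretrained StyleGAN generator, w_s the source latent code, t the text prompt, and D_CLIP the cosine distance between CLIP embeddings. Below is a minimal sketch of that loop, assuming `G` (mapping a W+ code to an NCHW image) and `w_src` (the inverted W+ code of the input image) already exist; the step count, learning rate, and λ_L2 value are illustrative, and both the identity-preservation term L_ID and CLIP's input normalization are omitted for brevity.

```python
import torch
import torch.nn.functional as F
import clip

def optimize_latent(G, w_src, prompt, steps=300, lr=0.1,
                    lambda_l2=0.008, device="cuda"):
    """Sketch of CLIP-loss latent optimization (hyperparameters illustrative)."""
    clip_model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        t = clip_model.encode_text(clip.tokenize([prompt]).to(device)).float()
        t = t / t.norm(dim=-1, keepdim=True)      # unit-norm text embedding

    w = w_src.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = G(w)                                # synthesize from the W+ code
        img = F.interpolate(img, size=224, mode="bilinear")  # CLIP input size
        i = clip_model.encode_image(img).float()
        i = i / i.norm(dim=-1, keepdim=True)      # unit-norm image embedding
        clip_loss = 1.0 - (i * t).sum()           # cosine distance D_CLIP
        l2_loss = (w - w_src).pow(2).sum()        # keep the edit close to source
        loss = clip_loss + lambda_l2 * l2_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```

The L2 term is what keeps the edit local: without it, the optimizer is free to drift to any latent whose rendering matches the prompt, discarding the identity of the input image.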

三. Method


Source: https://www.cnblogs.com/h694879357/p/15525192.html