Abstract of the paper "Unsupervised Cross-Domain Image Generation":
We study the problem of transferring a sample in one domain to an analog
sample in another domain. Given two related domains, S and T, we would like to
learn a generative function G that maps an input sample from S to the domain T,
such that the output of a given function f, which accepts inputs in either
domain, would remain unchanged. Other than the function f, the training data
is unsupervised and consists of a set of samples from each domain. The Domain
Transfer Network (DTN) we present employs a compound loss function that
includes a multiclass GAN loss, an f-constancy component, and a regularizing
component that encourages G to map samples from T to themselves. We apply our
method to visual domains including digits and face images and demonstrate its
ability to generate convincing novel images of previously unseen entities,
while preserving their identity.
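
For concreteness, below is a minimal sketch of the compound generator loss described in the abstract, written in PyTorch. The module names (G, D, f), the use of a binary discriminator in place of the paper's multiclass one, the MSE distances, and the weights alpha/beta are illustrative assumptions, not the paper's exact formulation.

# A minimal sketch (PyTorch) of the DTN compound generator loss: an adversarial
# term, an f-constancy term, and an identity regularizer that pushes G to map
# target-domain samples to themselves. Names and weights are assumptions.
import torch
import torch.nn.functional as F

def dtn_generator_loss(G, D, f, x_s, x_t, alpha=15.0, beta=15.0):
    """Compound loss for G given a source batch x_s and a target batch x_t."""
    g_s = G(x_s)  # translate source samples into domain T
    g_t = G(x_t)  # pass target samples through G as well

    # Adversarial term: the discriminator should label both generated batches
    # as real (a binary D is used here for brevity; the paper's D is multiclass).
    logits_s, logits_t = D(g_s), D(g_t)
    adv = F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s)) \
        + F.binary_cross_entropy_with_logits(logits_t, torch.ones_like(logits_t))

    # f-constancy: f should give the same representation before and after G.
    f_const = F.mse_loss(f(g_s), f(x_s))

    # Identity regularizer: G should map samples from T to themselves.
    t_id = F.mse_loss(g_t, x_t)

    return adv + alpha * f_const + beta * t_id

During training, this objective would be minimized over G's parameters while D is updated with the usual discriminator objective on real target samples versus G's outputs.
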
Paper link:
https://arxiv.org/abs/1611.02200
Code link:
https://github.com/yunjey/dtn-tensorflow
Original post link:
http://weibo.com/5501429448/Es9L6oxfG?ref=collection&type=comment#_rnd1485161563006