D2F: Description to Face Synthesis Using GAN
8 Pages · Posted: 8 Oct 2020 · Last revised: 13 Oct 2020
Date Written: June 26, 2020
Recent breakthroughs in deep learning have enabled us to extract lower-dimensional knowledge from higher-dimensional data, leading to state-of-the-art utilization of tremendous amounts of data. At the same time, the generative side of deep learning is rapidly emerging; among generative models, Generative Adversarial Networks (GANs) are immensely popular due to their ease of implementation and strong performance. Tasks such as mapping textual data to visual data (i.e., lower-dimensional to higher-dimensional data) remain difficult even for experts. The challenge is to extract every possible piece of information from the lower-dimensional data in order to generate comparatively higher-dimensional data. This work combines GANs with Natural Language Processing (NLP) to build a model that effectively translates human facial features from characters to pixels; in other words, the model generates realistic human faces given textual descriptions of those faces. This paper compares different GAN architectures trained on different datasets, specifies a dataset collection method based on crowd intelligence, and proposes a deep learning model for description-to-face synthesis.
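The core conditioning idea described above (feeding a text embedding alongside a random noise vector into a GAN generator) can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual architecture: the embedding dimension, noise dimension, toy character-count "encoder", and the single linear layer standing in for the generator are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 128      # assumed size of the sentence embedding from the NLP encoder
NOISE_DIM = 100    # assumed size of the random noise vector z
IMG_SIDE = 64      # hypothetical 64x64 grayscale output face

VOCAB = "abcdefghijklmnopqrstuvwxyz "
# Fixed random projection standing in for a trained text encoder.
PROJ = rng.standard_normal((EMB_DIM, len(VOCAB)))
# One linear layer standing in for the generator network G(z, e).
W = rng.standard_normal((IMG_SIDE * IMG_SIDE, EMB_DIM + NOISE_DIM)) * 0.01

def encode_description(description: str) -> np.ndarray:
    """Toy stand-in for an NLP encoder: project character counts to EMB_DIM."""
    counts = np.array([description.lower().count(c) for c in VOCAB], dtype=float)
    return PROJ @ counts

def generate_face(description: str) -> np.ndarray:
    """Condition the 'generator' on the text embedding plus random noise."""
    e = encode_description(description)       # text embedding
    z = rng.standard_normal(NOISE_DIM)        # random noise vector
    conditioned = np.concatenate([e, z])      # concatenation conditions G on text
    img = np.tanh(W @ conditioned)            # tanh keeps pixels in [-1, 1], as is common in GANs
    return img.reshape(IMG_SIDE, IMG_SIDE)

face = generate_face("a young woman with dark hair and glasses")
```

In a real description-to-face model the random projection would be replaced by a learned text encoder and the linear layer by a deep deconvolutional generator trained adversarially against a discriminator; the sketch only shows where the text enters the pipeline.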
Keywords: NLP, GAN, Face2Text, Transfer learning, Generative model