Adversarial Text to Continuous Image Generation

88 citations (2024)
Kilichbek Haydarov, Aashiq Muhamed, Xiaoqian Shen
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

This paper proposes a word-level attention-based weight modulation operator, built on hypernetworks, that controls the generation process of an INR-based GAN, and shows that the resulting model, HyperCGAN, achieves performance competitive with existing pixel-based methods while retaining the properties of continuous generative models.

Abstract

Existing GAN-based text-to-image models treat images as 2D pixel arrays. In this paper, we approach the text-to-image task from a different perspective, where a 2D image is represented as an implicit neural representation (INR). We show that straightforward conditioning of an unconditional INR-based GAN on text inputs is not enough to achieve good performance. We propose a word-level attention-based weight modulation operator that controls the generation process of the INR-GAN via hypernetworks. Our experiments on benchmark datasets show that HyperCGAN achieves performance competitive with existing pixel-based methods and retains the properties of continuous generative models. Project page link: https://kilichbek.github.io/webpage/hypercgan.
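The core idea can be illustrated with a minimal sketch: a coordinate MLP (the INR) renders an RGB value at every (x, y) location, and a text-conditioned hypernetwork produces a modulation of the MLP's weights via attention over word embeddings. This is a hypothetical toy in NumPy, not the authors' implementation; all class and variable names (`ModulatedINR`, `Q`, `proj`, etc.) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ModulatedINR:
    """Toy INR whose hidden weights are scaled by word-level attention."""

    def __init__(self, hidden=32, d_word=16, n_queries=4):
        self.W1 = rng.standard_normal((2, hidden)) * 0.5   # (x, y) coords -> hidden
        self.W2 = rng.standard_normal((hidden, 3)) * 0.5   # hidden -> RGB
        # Learned queries attend over word embeddings; the pooled context
        # is projected into a per-channel scale for W1 (weight modulation).
        self.Q = rng.standard_normal((n_queries, d_word))
        self.proj = rng.standard_normal((n_queries * d_word, hidden)) * 0.1

    def modulation(self, words):
        # words: (T, d_word) token embeddings for one caption
        attn = softmax(self.Q @ words.T, axis=-1)          # (n_queries, T)
        ctx = (attn @ words).reshape(-1)                   # pooled word context
        return 1.0 + np.tanh(ctx @ self.proj)              # scale around 1.0

    def render(self, coords, words):
        # coords: (N, 2) continuous coordinates in [-1, 1]
        scale = self.modulation(words)                     # (hidden,)
        h = np.sin(coords @ (self.W1 * scale))             # modulated layer
        return np.tanh(h @ self.W2)                        # (N, 3) RGB in [-1, 1]

# Render the same 8x8 coordinate grid under two different "captions":
# the INR is continuous, so any grid resolution would work here.
xs = np.linspace(-1, 1, 8)
coords = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
inr = ModulatedINR()
img_a = inr.render(coords, rng.standard_normal((5, 16)))   # 5-word caption
img_b = inr.render(coords, rng.standard_normal((7, 16)))   # 7-word caption
```

Because the text enters only through a modulation of the generator's weights, the same continuous coordinate network can be queried at arbitrary resolutions, which is the property the paper aims to retain while adding text control.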