A single-image GAN model using self-attention mechanism and DenseNets


YILDIZ E., Yuksel M. E., SEVGEN S.

Neurocomputing, vol. 596, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 596
  • Publication Date: 2024
  • DOI: 10.1016/j.neucom.2024.127873
  • Journal Name: Neurocomputing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, INSPEC, zbMATH
  • Keywords: Data augmentation, DenseNets, Generative adversarial networks, Image synthesis, Limited data, Self-attention, Single image generation
  • Erzincan Binali Yıldırım University Affiliated: Yes

Abstract

Image generation from a single natural image using generative adversarial networks (GANs) has attracted extensive attention recently due to GANs' ability to produce photo-realistic images and their potential applications in computer vision. However, learning a powerful generative model that generates realistic, high-quality images from only a single natural image remains a challenging problem. Training GANs in limited-data regimes often causes issues such as overfitting, memorization, training divergence, poor image quality, and long training times. In this study, we investigated state-of-the-art GAN models for computer vision tasks and conducted several experiments to understand in depth the challenges of learning a powerful generative model. We introduce a novel unconditional GAN model that produces realistic, high-quality, diverse images from a single training image.

In our model, we employ a self-attention mechanism (SAM), a densely connected convolutional network (DenseNet) architecture, and a relativistic average least-squares GAN with gradient penalty (RaLSGAN-GP) for both the generator and discriminator networks. SAM controls the level of global contextual information: it complements convolutions on large feature maps, gives the generator and discriminator more capacity to capture long-range dependencies in feature maps, and mitigates the long training times and low image quality. DenseNet connects each layer to every other layer in a feed-forward manner to ensure maximum information flow between layers in the network. It is highly parameter-efficient, requires less computation to achieve high performance, improves information and gradient flow throughout the network for easier training, and has a regularizing effect that reduces overfitting in image generation. RaLSGAN-GP further improves generation quality and provides much more stable training at no additional computational cost. Thanks to the appropriate combination of SAM, DenseNet, and RaLSGAN-GP, our model successfully generates realistic, high-quality, diverse images while preserving the global context of the training image.

We evaluated our model's performance through experiments, user studies, and quantitative model evaluation methods, comparing it with well-known prior models on three datasets (Places, LSUN, ImageNet), and demonstrated its capability in image synthesis and image manipulation tasks. Our experiments showed that our model uses parameters more efficiently, prevents overfitting, better captures the internal patch statistics of images with complex structures and textures, achieves comparable performance in single-image generation tasks, and produces much better visual results than competing models. User studies confirmed that images generated by our model were commonly confused with the original images. Our model can serve as a powerful tool for various image manipulation tasks as well as for data augmentation in domains with limited training data.
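To make the building blocks described above concrete, the following sketches illustrate each component in PyTorch. First, a self-attention layer of the kind the abstract describes. This is a minimal sketch in the style of SAGAN, not the authors' exact implementation; the channel-reduction factor (// 8) and the zero-initialized blending weight are assumptions carried over from the SAGAN formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over a 2D feature map (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key   = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # learnable scale, initialized to zero: attention is blended in gradually
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (b, n, c//8)
        k = self.key(x).view(b, -1, n)                      # (b, c//8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)           # (b, n, n) attention map
        v = self.value(x).view(b, c, n)                     # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        # residual connection: global context is added on top of local features
        return self.gamma * out + x
```

Because gamma starts at zero, the network first relies on local convolutional cues and learns to mix in long-range dependencies as training progresses.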
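Likewise, dense connectivity means every layer receives the concatenated feature maps of all preceding layers, which is what drives the parameter efficiency and improved gradient flow mentioned in the abstract. The growth rate and layer count below are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense connectivity: each layer consumes all earlier outputs (sketch)."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # input width grows by growth_rate with each added layer
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # every layer sees the concatenated feature maps of all earlier layers
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```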
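Finally, the RaLSGAN-GP objective trains the discriminator to estimate how much more realistic real data is than generated data on average, in a least-squares form, with a gradient penalty for training stability. The functions below follow the standard relativistic average formulation (Jolicoeur-Martineau, 2018) combined with a WGAN-GP-style penalty; the penalty weight of 10 is the conventional default, not necessarily the paper's exact setting.

```python
import torch

def ralsgan_d_loss(d_real, d_fake):
    """Relativistic average least-squares discriminator loss.
    d_real/d_fake are raw discriminator outputs; compute d_fake on
    generator outputs detached from the graph during the D step."""
    return (torch.mean((d_real - d_fake.mean() - 1.0) ** 2)
            + torch.mean((d_fake - d_real.mean() + 1.0) ** 2)) / 2

def ralsgan_g_loss(d_real, d_fake):
    """Generator loss: push fakes above the average real score and vice versa."""
    return (torch.mean((d_fake - d_real.mean() - 1.0) ** 2)
            + torch.mean((d_real - d_fake.mean() + 1.0) ** 2)) / 2

def gradient_penalty(discriminator, real, fake, weight=10.0):
    """Penalize deviation of the gradient norm from 1 on random interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    scores = discriminator(mixed)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mixed,
                                create_graph=True)[0]
    return weight * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```

In a training step, the discriminator loss would be `ralsgan_d_loss(...) + gradient_penalty(...)`; how these three components are wired into the single-image, multi-scale training pipeline is specific to the paper and not reproduced here.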