Deep learning models for synthesizing images with adverse weather conditions for solving computer vision problems

DOI:

https://doi.org/10.17308/sait/1995-5499/2025/2/89-104

Keywords:

image generation, image stylization, image processing in adverse weather conditions, neural networks, data augmentation, transformers

Abstract

This paper analyses known and newly developed algorithms for generating images of real-world scenes under atmospheric precipitation, with the goal of using such images for dataset augmentation and stylization. It is shown that classical (heuristic) algorithms for adding artificial precipitation-induced distortions to images have inherent limitations, including possible disruption of the image structure and partial loss of realism. The modern approach to overcoming these issues is to train a deep neural network for each specific task; however, such solutions often lack generalization ability, are complex, and require significant computational resources. In this work, a new and relatively simple algorithm is proposed that introduces precipitation artifacts into images using a dual-input transformer model: the model extracts precipitation-like distortions from one image and transfers them to another. The method is intended for dataset augmentation and for processing real-world scenes under different weather conditions. The neural network architecture is studied to determine its optimal parameters, including the number of layers and the attention-map formation strategy. The paper demonstrates the use of the proposed method for data augmentation in object detection, image segmentation, and image restoration tasks. It is shown that the generated images help address the problem of data leakage during neural network training and reduce model bias during testing. The proposed deep learning model can be applied in any domain where a pair of images is available: an original and a template containing distortions, noise, or precipitation. An image restoration example shows that, using only noisy images, it is possible to extract the relevant distortions and transfer them to the original image.
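The abstract does not specify the architecture in detail, but the core dual-input idea it describes (extracting distortion features from a precipitation template and aligning them to a clean image via attention) can be sketched with a single cross-attention step. Everything below is an illustrative assumption, not the authors' implementation: patch embeddings are random placeholders, and the learned projections of a real transformer are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(clean_tokens, template_tokens, d_k):
    """Queries come from the clean image; keys/values from the
    precipitation template, so attention pulls distortion features
    into positions of the clean image."""
    q, k, v = clean_tokens, template_tokens, template_tokens
    attn = softmax(q @ k.T / np.sqrt(d_k))   # (n_clean, n_template)
    return attn @ v                          # distortions aligned to clean patches

rng = np.random.default_rng(0)
clean = rng.normal(size=(16, 32))      # 16 patch embeddings of the clean image
template = rng.normal(size=(16, 32))   # 16 patch embeddings of the rainy template
transferred = cross_attention(clean, template, d_k=32)
fused = clean + transferred            # residual fusion: content + distortion
```

In a full model the fused tokens would pass through further transformer layers and a decoder that reconstructs the stylized image; the sketch only shows how one image can supply the content (queries) while the other supplies the distortion (keys and values).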

Author Biographies

  • Nikita I. Berezhnov, Voronezh State University

3rd year PhD student, Department of Information Security and Processing Technologies, Faculty of Computer Sciences, Voronezh State University

  • Alexander A. Sirota, Voronezh State University

    DSc in Technical Sciences, Head of the Department of Information Security and Processing Technologies, Faculty of Computer Sciences, Voronezh State University

Published

2025-09-02

Section

Intelligent Information Systems, Data Analysis and Machine Learning

How to Cite

Deep learning models for synthesizing images with adverse weather conditions for solving computer vision problems. (2025). Proceedings of Voronezh State University. Series: Systems Analysis and Information Technologies, 2, 89-104. https://doi.org/10.17308/sait/1995-5499/2025/2/89-104