Deep learning models for synthesizing images with adverse weather conditions for solving computer vision problems
DOI:
https://doi.org/10.17308/sait/1995-5499/2025/2/89-104

Keywords:
image generation, image stylization, image processing in adverse weather conditions, neural networks, data augmentation, transformers

Abstract
This paper analyzes known and newly developed algorithms for generating images of real-world scenes under atmospheric precipitation, with the goal of using such images for augmentation and stylization. It is shown that classical (heuristic) algorithms for generating images with artificial precipitation-induced distortions have limitations: they can disrupt the image structure and partially lose realism. The modern approach to overcoming this issue is to train a deep neural network for each specific task; however, such solutions often generalize poorly, are complex, and require significant computational resources. This work proposes a new, relatively simple algorithm that introduces precipitation artifacts into images using a dual-input transformer model: the model extracts precipitation-induced distortions from one image and transfers them to another. The method is intended for dataset augmentation and for processing real-world scenes under different weather conditions. The neural network architecture is studied to determine optimal parameters, including the number of layers and the attention-map formation strategy. The paper demonstrates the use of the proposed method for data augmentation in object detection, image segmentation, and image restoration tasks, and shows that the generated images help address data leakage during neural network training and reduce model bias during testing. The proposed deep learning model can be applied in any domain where a pair of images is available: an original and a template containing distortions, noise, or precipitation. An image restoration example shows that, using only noisy images, the relevant distortions can be extracted and transferred to the original image.
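The abstract does not give implementation details of the dual-input model. As a rough illustration of the general idea behind such architectures, the sketch below implements a single cross-attention step in which one input stream (the clean image's patch tokens) supplies the queries and the other (the precipitation template's tokens) supplies the keys and values; all names, sizes, and weights here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(content, template, Wq, Wk, Wv):
    """One cross-attention step of a dual-input transformer (toy sketch):
    queries come from the clean image's patch tokens, while keys and
    values come from the precipitation template's patch tokens."""
    Q, K, V = content @ Wq, template @ Wk, template @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # scaled dot-product scores
    attn = softmax(scores, axis=-1)              # each content patch attends over template patches
    return attn @ V, attn                        # distortion features aligned to content patches

# toy sizes: 16 content patches, 16 template patches, embedding dim 32
d = 32
content  = rng.normal(size=(16, d))   # tokens of the clean image
template = rng.normal(size=(16, d))   # tokens of the image with precipitation
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

out, attn = cross_attention(content, template, Wq, Wk, Wv)
print(out.shape)   # (16, 32)
```

In a full model, the attended distortion features would be fused back into the content stream (e.g., by a residual connection and further layers) to render the precipitation onto the clean image; the sketch only shows how two inputs can interact through one attention map.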