Regularization of the learning process of graph neural networks using the label propagation method
DOI: https://doi.org/10.17308/sait/1995-5499/2024/3/92-101

Keywords: deep learning, regularization, LPA, GNN, graphs, GCN, GraphSAGE, GAT

Abstract
Graph neural networks are a topic of increasing interest in machine learning and data analysis. These specialized architectures enable effective modeling and analysis of complex data structures such as social networks, bioinformatics networks, and transportation networks. As the volume of graph-structured data continues to grow, graph neural networks become an increasingly important tool for understanding and forecasting complex relationships and trends. This work evaluates the effectiveness of the L2 regularization method in machine learning, specifically in the context of clustering graph nodes. Clustering groups nodes based on their connectivity; this study implements it using a dedicated regularization technique together with the Label Propagation Algorithm (LPA). The approach is additionally extended to two popular graph neural network architectures, GraphSAGE and GAT. The study compares the effectiveness of LPA on datasets commonly used in scientific and practical applications. The results demonstrate a significant improvement in the accuracy of graph data analysis with this approach. This research contributes to a better understanding of the impact of L2 regularization on training graph neural networks.
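The abstract refers to the Label Propagation Algorithm (LPA) for grouping nodes by connectivity. The sketch below is a minimal NumPy illustration of the classic iterative scheme (not the paper's implementation): known labels are repeatedly spread to neighbours via a row-normalized adjacency matrix, while labelled nodes stay clamped. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def label_propagation(adj, labels, mask, n_iter=50):
    """Propagate one-hot labels over a graph.

    adj:    (n, n) adjacency matrix
    labels: (n, k) one-hot labels; rows for unlabelled nodes are zero
    mask:   boolean (n,), True where the label is known and stays fixed
    """
    # Row-normalize so each step averages the neighbours' label scores.
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1)
    y = labels.astype(float).copy()
    for _ in range(n_iter):
        y = p @ y               # spread labels to neighbours
        y[mask] = labels[mask]  # clamp the known labels
    return y.argmax(axis=1)

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the edge 2-3,
# with one labelled node in each triangle.
adj = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
labels = np.zeros((6, 2))
labels[0, 0] = 1  # node 0 belongs to class 0
labels[5, 1] = 1  # node 5 belongs to class 1
mask = np.array([True, False, False, False, False, True])

print(label_propagation(adj, labels, mask))  # → [0 0 0 1 1 1]
```

Each triangle collapses onto the class of its labelled node, which is the clustering-by-connectivity behaviour the abstract describes; the paper's contribution is using this signal as a regularizer during GNN training rather than as a standalone classifier.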