Bekzod Sharipov, Independent Data Scientist
Residual connections have become a cornerstone of modern deep learning architectures, enabling efficient gradient propagation and improving convergence stability in complex models. However, their precise impact on model robustness and generalization under domain shift remains insufficiently examined. This study investigates how residual architectures influence learning behavior when models face a distribution shift between training and test data. Using benchmark datasets and controlled shift scenarios, we compare residual and non-residual neural networks across metrics such as accuracy degradation, calibration error, and feature transferability. Experimental results demonstrate that residual connections significantly enhance stability and mitigate performance loss under moderate shifts, primarily by preserving reusable hierarchical representations. The findings offer new insights into architectural design choices that promote resilient learning in dynamic data environments. More broadly, this research contributes to the discourse on trustworthy and adaptable machine learning, with implications for real-world applications where domain adaptation and robust AI are critical.
Keywords: Residual Networks, Model Robustness, Domain Shift, Generalization, Machine Learning, Domain Adaptation.
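To make the architectural contrast concrete, the sketch below shows a plain block and a residual block side by side in PyTorch. It is a minimal illustration only: the study's actual models, layer widths, and training setup are not specified in this abstract, and the 3x3 convolutional body here is an assumption chosen for brevity.

```python
import torch
import torch.nn as nn


def _body(channels: int) -> nn.Sequential:
    # Two 3x3 conv layers: the learned transformation F(x),
    # shared by both block variants below (illustrative choice).
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        nn.BatchNorm2d(channels),
    )


class PlainBlock(nn.Module):
    """Non-residual baseline: the output is just F(x)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = _body(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x))


class ResidualBlock(nn.Module):
    """Residual variant: the identity path adds the input back,
    y = F(x) + x, so gradients and low-level features can bypass
    the learned layers unchanged."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = _body(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)


# Quick shape check: both blocks map (N, C, H, W) -> (N, C, H, W).
x = torch.randn(1, 64, 32, 32)
assert PlainBlock(64)(x).shape == ResidualBlock(64)(x).shape
```

The only difference between the two variants is the identity term in the residual forward pass; that shortcut is what lets low-level features and gradients bypass the learned transformation, which is the mechanism the abstract credits with preserving reusable hierarchical representations under shift.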