A Convexity-Dependent Two-Phase Training Algorithm for Deep Neural Networks
Published in Proceedings of the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K KDIR 2025), 2025
Proposes a two-phase optimization algorithm that detects when training moves from a non-convex region of the loss landscape into a convex one, switching from Adam to the Conjugate Gradient method at that point to substantially improve convergence speed and accuracy.
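The two-phase idea can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual criterion or implementation: a minimal Adam phase on a small non-convex objective, a positive-definite-Hessian check as the convexity proxy, and a Newton phase whose linear system is solved by Conjugate Gradient. The objective `f`, the eigenvalue threshold `tau`, and the switch rule are all hypothetical.

```python
import numpy as np

def f(w):
    # Toy objective: non-convex near x = 0, convex around its minima.
    x, y = w
    return x**4 - 2 * x**2 + x + y**2

def grad(w):
    x, y = w
    return np.array([4 * x**3 - 4 * x + 1, 2 * y])

def hessian(w):
    x, _ = w
    return np.array([[12 * x**2 - 4.0, 0.0], [0.0, 2.0]])

def cg_solve(A, b, tol=1e-12):
    # Conjugate Gradient for A p = b, A symmetric positive definite.
    p, r = np.zeros_like(b), b.copy()
    d = r.copy()
    for _ in range(10 * len(b)):
        rr = r @ r
        if rr < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)
        p += alpha * d
        r -= alpha * Ad
        d = r + (r @ r / rr) * d
    return p

def two_phase_train(w0, lr=0.05, steps=500, tau=1.0):
    w = np.asarray(w0, dtype=float)
    m, v = np.zeros_like(w), np.zeros_like(w)
    b1, b2, eps = 0.9, 0.999, 1e-8
    phase = 1
    for t in range(1, steps + 1):
        g = grad(w)
        if np.linalg.norm(g) < 1e-8:
            break
        H = hessian(w)
        # Convexity proxy (hypothetical): switch once the local Hessian
        # is positive definite with margin tau, to avoid near-singular steps.
        if phase == 1 and np.all(np.linalg.eigvalsh(H) > tau):
            phase = 2
        if phase == 1:
            # Phase 1: standard Adam update.
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g**2
            mh, vh = m / (1 - b1**t), v / (1 - b2**t)
            w -= lr * mh / (np.sqrt(vh) + eps)
        else:
            # Phase 2: Newton step, inner system solved by CG.
            w += cg_solve(H, -g)
    return w, phase

w, phase = two_phase_train([0.1, 1.0])
```

On this toy problem the run starts in the non-convex band (|x| small), drifts left under Adam until the curvature test passes, then converges quadratically under the CG-based Newton phase to the global minimum near x ≈ -1.107, y = 0.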
Recommended citation: Hrycej, T., Bermeitinger, B., Pavone, M., Wiegand, G.-H., & Handschuh, S. (2025). "A Convexity-Dependent Two-Phase Training Algorithm for Deep Neural Networks." Proceedings of the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, 78–86.
Download Paper
