Adaptive Gradient Descent on Riemannian Manifolds and Its Applications to Gaussian Variational Inference. J. Park*, J. J. Suh*, B. Wang, A. Bhattacharya, S. Ma, International Conference on Learning Representations, 2026 (to appear). *Lead authors
PEPFlow: A Python Library for the Workflow of Performance Estimation of Optimization Algorithms. J. J. Suh, B. Ying, X. Jiang, E. D. H. Nguyen, NeurIPS Workshop on GPU-accelerated and Scalable Optimization, 2025. [Website] [GitHub]
Exact worst-case convergence rates for Douglas–Rachford and Davis–Yin splitting methods. E. D. H. Nguyen, J. J. Suh, X. Jiang, S. Ma, (under revision in INFORMS Journal on Optimization), 2025.
An Adaptive and Parameter-Free Nesterov’s Accelerated Gradient Method for Convex Optimization. J. J. Suh, S. Ma, (under review in Mathematical Programming), 2025.
Numerical Analysis of HiPPO-LegS ODE for Deep State Space Models. J. R. Park, J. J. Suh, Y. Hong, E. K. Ryu, (under revision in TMLR), 2024.
Optimization Algorithm Design via Electric Circuits. S. Boyd, T. Parshakova*, E. K. Ryu, J. J. Suh*, Neural Information Processing Systems (spotlight), 2024.
*Lead authors (author list ordered alphabetically)
Optimal Acceleration for Minimax and Fixed-Point Problems is Not Unique. T. Yoon, J. Kim, J. J. Suh, E. K. Ryu, International Conference on Machine Learning (spotlight, top 335/9483=3.5% of papers), 2024.
Continuous-time Analysis of Anchor Acceleration. J. J. Suh, J. Park, E. K. Ryu, Neural Information Processing Systems, 2023.
Continuous-Time Analysis of AGM via Conservation Laws in Dilated Coordinate Systems. J. J. Suh, G. Roh, E. K. Ryu, International Conference on Machine Learning (long presentation, top 118/5630=2% of papers), 2022.