Optimization Research Papers in JMLR Volume 23 (2022)
This document lists papers from JMLR Volume 23 (2022) that focus on optimization research, categorized by their primary themes. Each paper is numbered starting from 1 within its subsection, with a brief description of its key contributions to optimization theory, algorithms, or applications.
Convex Optimization

Papers addressing convex optimization problems, including sparse PCA, L1-regularized SVMs, and metric-constrained problems.

1. Solving Large-Scale Sparse PCA to Certifiable (Near) Optimality
Authors: Dimitris Bertsimas, Ryan Cory-Wright, Jean Pauphilet
Description: Develops convex optimization techniques for large-scale sparse principal component analysis with certifiable near-optimal solutions.

2. Novel Min-Max Reformulations of Linear Inverse Problems
Authors: Mohammed Rayyan Sheriff, Debasish Chatterjee
Description: Proposes min-max reformulations for linear inverse problems using convex optimization frameworks.

3. New Insights for the Multivariate Square-Root Lasso
Authors: Aaron J. Molstad
Description: Analyzes the square-root Lasso in multivariate settings, focusing on its convex optimization properties.

4. Towards An Efficient Approach for the Nonconvex lp Ball Projection: Algorithm and Analysis
Authors: Xiangyu Yang, Jiashan Wang, Hao Wang
Description: Develops efficient algorithms for lp ball projection, addressing both convex and nonconvex aspects (the classical convex l1 special case is sketched after this list).

5. Solving L1-Regularized SVMs and Related Linear Programs: Revisiting the Effectiveness of Column and Constraint Generation
Authors: Antoine Dedieu, Rahul Mazumder, Haoyue Wang
Description: Investigates L1-regularized SVMs using convex optimization with column and constraint generation.

6. Extensions to the Proximal Distance Method of Constrained Optimization
Authors: Alfonso Landeros, Oscar Hernan Madrid Padilla, Hua Zhou, Kenneth Lange
Description: Extends the proximal distance method for constrained convex optimization problems.

7. Stochastic Subgradient for Composite Convex Optimization with Functional Constraints
Authors: Ion Necoara, Nitesh Kumar Singh
Description: Analyzes stochastic subgradient methods for composite convex optimization with functional constraints.

8. On Regularized Square-Root Regression Problems: Distributionally Robust Interpretation and Fast Computations
Authors: Hong T.M. Chu, Kim-Chuan Toh, Yangjing Zhang
Description: Studies regularized square-root regression with a distributionally robust perspective and efficient computational methods.

9. Project and Forget: Solving Large-Scale Metric Constrained Problems
Authors: Rishi Sonthalia, Anna C. Gilbert
Description: Proposes a convex optimization approach for large-scale metric-constrained problems.

10. Faster Randomized Interior Point Methods for Tall/Wide Linear Programs
Authors: Agniva Chowdhury, Gregory Dexter, Palma London, Haim Avron, Petros Drineas
Description: Develops randomized interior point methods for efficient optimization of tall/wide linear programs.
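To make the projection theme of entry 4 concrete, here is a minimal sketch of Euclidean projection onto the l1 ball, the classical convex special case (the well-known sort-based method of Duchi et al.). It is an illustration only, not the paper's nonconvex lp algorithm; the function name and example values are ours.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()                        # already feasible
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.nonzero(u > (css - radius) / ks)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)    # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

x = project_l1_ball(np.array([0.8, -0.5, 0.3]))
print(x, np.abs(x).sum())                      # [0.6 -0.3 0.1], l1 norm 1.0
```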
Nonconvex Optimization

Papers tackling nonconvex optimization, focusing on optimality, stability, and convergence in nonsmooth and game settings.

1. Optimality and Stability in Non-Convex Smooth Games
Authors: Guojun Zhang, Pascal Poupart, Yaoliang Yu
Description: Analyzes optimality and stability in nonconvex smooth games with convergence guarantees.

2. Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization
Authors: Zhize Li, Jian Li
Description: Proposes simple and optimal stochastic gradient methods for nonsmooth, nonconvex optimization (a proximal SGD sketch of this setting appears after this list).

3. Oracle Complexity in Nonsmooth Nonconvex Optimization
Authors: Guy Kornowski, Ohad Shamir
Description: Studies the oracle complexity of nonsmooth nonconvex optimization problems.

4. Distributed Stochastic Gradient Descent: Nonconvexity, Nonsmoothness, and Convergence to Local Minima
Authors: Brian Swenson, Ryan Murray, H. Vincent Poor, Soummya Kar
Description: Investigates distributed SGD for nonconvex, nonsmooth optimization with convergence to local minima.
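The nonsmooth nonconvex composite setting of entries 2 and 3 can be illustrated with plain proximal SGD: a stochastic gradient step on the smooth nonconvex part followed by the l1 proximal (soft-thresholding) step. This is a generic sketch with a toy objective of our own choosing, not the accelerated or variance-reduced methods the papers actually develop.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1, the nonsmooth part."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Toy nonconvex smooth loss: f(x) = mean_i log(1 + (a_i . x - b_i)^2).
A = rng.normal(size=(200, 10))
b = rng.normal(size=200)

def stochastic_grad(x, batch=16):
    idx = rng.integers(0, b.size, size=batch)
    r = A[idx] @ x - b[idx]                    # mini-batch residuals
    return A[idx].T @ (2 * r / (1 + r**2)) / batch

x, lam, eta = np.zeros(10), 0.05, 0.05
for _ in range(2000):
    x = soft_threshold(x - eta * stochastic_grad(x), eta * lam)  # prox-SGD step
print(x)                                       # the l1 prox pushes small coordinates to zero
```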
Stochastic Optimization

Papers focusing on stochastic optimization methods, including bundle methods, zeroth-order algorithms, and adaptive techniques.

1. A Stochastic Bundle Method for Interpolation
Authors: Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. Pawan Kumar
Description: Introduces a stochastic bundle method for efficient interpolation in optimization.

2. On Biased Stochastic Gradient Estimation
Authors: Derek Driggs, Jingwei Liang, Carola-Bibiane Schönlieb
Description: Analyzes biases in stochastic gradient estimation and their impact on optimization performance.

3. Accelerated Zeroth-Order and First-Order Momentum Methods from Mini to Minimax Optimization
Authors: Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang
Description: Proposes accelerated zeroth-order and first-order momentum methods for a range of optimization problems.

4. Stochastic Zeroth-Order Optimization under Nonstationarity and Nonconvexity
Authors: Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra
Description: Studies zeroth-order optimization in nonstationary and nonconvex settings (a two-point zeroth-order gradient estimator is sketched after this list).

5. Accelerating Adaptive Cubic Regularization of Newton’s Method via Random Sampling
Authors: Xi Chen, Bo Jiang, Tianyi Lin, Shuzhong Zhang
Description: Enhances Newton’s method with adaptive cubic regularization using random sampling.

6. A Momentumized, Adaptive, Dual Averaged Gradient Method
Authors: Aaron Defazio, Samy Jelassi
Description: Develops a momentum-based adaptive gradient method for stochastic optimization.

7. Stochastic DCA with Variance Reduction and Applications in Machine Learning
Authors: Hoai An Le Thi, Hoang Phuc Hau Luu, Hoai Minh Le, Tao Pham Dinh
Description: Introduces a stochastic difference-of-convex-functions algorithm with variance reduction for machine learning.

8. Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks
Authors: Alireza Fallah, Mert Gürbüzbalaban, Asuman Ozdaglar, Umut Şimşekli, Lingjiong Zhu
Description: Proposes robust stochastic gradient methods for distributed optimization in multi-agent networks.

9. On Acceleration for Convex Composite Minimization with Noise-Corrupted Gradients and Approximate Proximal Mapping
Authors: Qiang Zhou, Sinno Jialin Pan
Description: Addresses acceleration in convex composite minimization with noisy gradients.

10. Asymptotic Study of Stochastic Adaptive Algorithms in Non-Convex Landscape
Authors: Sébastien Gadat, Ioana Gavra
Description: Analyzes the asymptotic behavior of stochastic adaptive algorithms in nonconvex settings.

11. Towards Practical Adam: Non-Convexity, Convergence Theory, and Mini-Batch Acceleration
Authors: Congliang Chen, Li Shen, Fangyu Zou, Wei Liu
Description: Studies the Adam optimizer, focusing on nonconvexity, convergence, and mini-batch acceleration.

12. An Efficient Sampling Algorithm for Non-Smooth Composite Potentials
Authors: Wenlong Mou, Nicolas Flammarion, Martin J. Wainwright, Peter L. Bartlett
Description: Develops an efficient sampling algorithm for nonsmooth composite potentials in stochastic optimization.

13. SGD with Coordinate Sampling: Theory and Practice
Authors: Rémi Leluc, François Portier
Description: Explores coordinate sampling in stochastic gradient descent with theoretical and practical insights.
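Entries 3 and 4 rely on zeroth-order (derivative-free) gradient estimates. Below is a minimal sketch of the standard two-point estimator over random unit directions; it targets the gradient of a smoothed version of f, which approximates the true gradient for small mu. The function names and toy objective are ours, not any paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-4, num_dirs=100):
    """Two-point zeroth-order gradient estimate of f at x."""
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                 # random unit direction
        g += d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

f = lambda x: np.sum(x ** 2)                   # toy objective, true gradient 2x
print(zo_gradient(f, np.array([1.0, -2.0, 3.0])))   # roughly [2, -4, 6]
```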
Distributed/Decentralized Optimization

Papers addressing distributed or decentralized optimization algorithms, focusing on communication efficiency and convergence.

1. Asymptotic Network Independence and Step-Size for a Distributed Subgradient Method
Authors: Alex Olshevsky
Description: Analyzes step-size and convergence for a distributed subgradient optimization method (a decentralized gradient descent sketch appears after this list).

2. Projection-Free Distributed Online Learning with Sublinear Communication Complexity
Authors: Yuanyu Wan, Guanghui Wang, Wei-Wei Tu, Lijun Zhang
Description: Develops projection-free algorithms for distributed online learning with reduced communication complexity.

3. Variance Reduced EXTRA and DIGing and Their Optimal Acceleration for Strongly Convex Decentralized Optimization
Authors: Huan Li, Zhouchen Lin, Yongchun Fang
Description: Proposes variance-reduced methods for decentralized optimization with optimal acceleration.
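A minimal sketch of decentralized gradient descent (DGD), the prototype behind the distributed subgradient and EXTRA/DIGing methods above: each agent mixes its iterate with its neighbors through a doubly stochastic matrix W, then takes a local gradient step. The ring topology and quadratic objectives are our toy choices; with a constant step size, agents reach consensus near the global optimum up to an O(alpha) error.

```python
import numpy as np

# Each of n agents holds a private quadratic f_i(x) = 0.5 * (x - c_i)^2;
# the network-wide optimum of (1/n) * sum_i f_i is mean(c).
n, alpha = 5, 0.05
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.zeros(n)

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(400):
    x = W @ x - alpha * (x - c)    # neighbor averaging, then local gradient step

print(x)                           # all entries near mean(c) = 3.0
```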
Submodular Optimization

Papers focusing on submodular optimization, particularly in model selection.

1. Joint Continuous and Discrete Model Selection via Submodularity
Authors: Jonathan Bunton, Paulo Tabuada
Description: Uses submodularity for joint continuous and discrete model selection in optimization (the classic greedy routine for monotone submodular maximization is sketched below).
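For readers new to submodularity, here is the textbook greedy algorithm for maximizing a monotone submodular function (coverage, in this toy example) under a cardinality constraint; it enjoys the classical (1 - 1/e) approximation guarantee. This generic routine is illustrative and is not the model-selection algorithm of the paper above.

```python
def coverage(selected, sets):
    """Number of elements covered by the chosen sets (monotone submodular)."""
    return len(set().union(*(sets[i] for i in selected))) if selected else 0

def greedy(sets, k):
    """Pick up to k sets, each time adding the one with the largest marginal gain."""
    chosen = []
    for _ in range(k):
        gains = {i: coverage(chosen + [i], sets) - coverage(chosen, sets)
                 for i in range(len(sets)) if i not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break                   # no remaining set adds new coverage
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
print(greedy(sets, 2))              # [2, 0]: together they cover all seven elements
```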
Bandits and Online Learning

Papers addressing multi-armed bandits, online optimization, and regret minimization.

1. Multi-Agent Online Optimization with Delays: Asynchronicity, Adaptivity, and Optimism
Authors: Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos
Description: Studies multi-agent online optimization with delays, focusing on asynchronicity and optimism.

2. Online Mirror Descent and Dual Averaging: Keeping Pace in the Dynamic Case
Authors: Huang Fang, Nicholas J. A. Harvey, Victor S. Portella, Michael P. Friedlander
Description: Analyzes online mirror descent and dual averaging for dynamic online optimization.

3. No Weighted-Regret Learning in Adversarial Bandits with Delays
Authors: Ilai Bistritz, Zhengyuan Zhou, Xi Chen, Nicholas Bambos, Jose Blanchet
Description: Investigates regret minimization in adversarial bandits with delays.

4. KL-UCB-Switch: Optimal Regret Bounds for Stochastic Bandits from Both a Distribution-Dependent and a Distribution-Free Viewpoints
Authors: Aurélien Garivier, Hédi Hadiji, Pierre Ménard, Gilles Stoltz
Description: Provides optimal regret bounds for stochastic bandits using KL-UCB-Switch (the underlying KL-UCB index is sketched after this list).

5. Multi-Agent Multi-Armed Bandits with Limited Communication
Authors: Mridul Agarwal, Vaneet Aggarwal, Kamyar Azizzadenesheli
Description: Explores multi-agent bandits with limited communication, focusing on regret minimization.

6. Nonstochastic Bandits with Composite Anonymous Feedback
Authors: Nicolò Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Claudio Gentile, Yishay Mansour
Description: Studies nonstochastic bandits with composite feedback, analyzing regret and optimization.

7. Expected Regret and Pseudo-Regret are Equivalent When the Optimal Arm is Unique
Authors: Daron Anderson, Douglas J. Leith
Description: Proves equivalence of expected regret and pseudo-regret in specific bandit settings.
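Entry 4 builds on the KL-UCB index, which upper-bounds an arm's mean by inverting a KL-divergence constraint rather than adding a square-root bonus. Below is a minimal sketch for Bernoulli rewards using bisection; it shows the index computation only, not the switching rule of KL-UCB-Switch, and the function names are ours.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, tol=1e-6):
    """Largest q >= mean with pulls * KL(mean, q) <= log(t), found by bisection."""
    budget = math.log(t) / pulls
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# An arm with empirical mean 0.5 after 10 pulls, at round t = 100:
print(kl_ucb_index(0.5, 10, 100))   # about 0.89, tighter than a UCB1-style bonus
```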
Bayesian and Hyperparameter Optimization

Papers addressing Bayesian optimization and hyperparameter tuning for efficient optimization.

1. SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization
Authors: Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter
Description: Presents SMAC3, a versatile Bayesian optimization package for hyperparameter tuning (the expected-improvement acquisition at the heart of such packages is sketched after this list).

2. Implicit Differentiation for Fast Hyperparameter Selection in Non-Smooth Convex Learning
Authors: Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon
Description: Uses implicit differentiation for efficient hyperparameter selection in nonsmooth convex optimization.

3. Auto-Sklearn 2.0: Hands-Free AutoML via Meta-Learning
Authors: Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter
Description: Introduces Auto-Sklearn 2.0, leveraging meta-learning for automated hyperparameter optimization.
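At the core of Bayesian optimization packages such as SMAC3 is an acquisition function; expected improvement (EI) is the standard example. The sketch below computes EI from a surrogate model's posterior mean and standard deviation at candidate configurations. This is the generic textbook formula, not SMAC3's API; all names (mu, sigma, best, xi) are our placeholders.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI for maximization: (mu - best - xi) * Phi(z) + sigma * phi(z)."""
    sigma = np.maximum(sigma, 1e-12)           # guard against zero variance
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Rank three candidate configurations by EI given surrogate predictions.
mu = np.array([0.70, 0.72, 0.65])
sigma = np.array([0.01, 0.05, 0.15])
print(expected_improvement(mu, sigma, best=0.71))   # uncertain candidates score highest
```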
Optimization in Reinforcement Learning

Papers focusing on optimization techniques for reinforcement learning, including policy gradient and value estimation.

1. A Generalized Projected Bellman Error for Off-Policy Value Estimation in Reinforcement Learning
Authors: Andrew Patterson, Adam White, Martha White
Description: Develops optimization methods for off-policy value estimation using a generalized projected Bellman error.

2. Greedification Operators for Policy Optimization: Investigating Forward and Reverse KL Divergences
Authors: Alan Chan, Hugo Silva, Sungsu Lim, Tadashi Kozuno, A. Rupam Mahmood, Martha White
Description: Investigates greedification operators for policy optimization, focusing on KL divergences.

3. Policy Gradient and Actor-Critic Learning in Continuous Time and Space: Theory and Algorithms
Authors: Yanwei Jia, Xun Yu Zhou
Description: Analyzes policy gradient and actor-critic methods for continuous-time RL optimization.

4. On the Convergence Rates of Policy Gradient Methods
Authors: Lin Xiao
Description: Studies convergence rates of policy gradient methods in reinforcement learning (a softmax policy gradient sketch appears after this list).

5. Global Optimality and Finite Sample Analysis of Softmax Off-Policy Actor-Critic under State Distribution Mismatch
Authors: Shangtong Zhang, Remi Tachet des Combes, Romain Laroche
Description: Examines global optimality in softmax off-policy actor-critic methods under distribution mismatch.
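Entry 4 concerns the convergence of policy gradient methods; the smallest instance of the softmax setting is a single-state MDP (a bandit), where the exact gradient is available in closed form: for J(theta) = sum_a pi_theta(a) r(a), the gradient component for action a is pi(a) * (r(a) - J). This toy sketch is ours and is not the paper's analysis.

```python
import numpy as np

r = np.array([1.0, 0.5, 0.2])      # per-action rewards
theta = np.zeros(3)
eta = 0.4

for _ in range(5000):
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()                 # softmax policy
    J = pi @ r                     # expected reward under pi
    theta += eta * pi * (r - J)    # exact policy gradient ascent step

print(pi)                          # mass concentrates on the best action (index 0)
```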
Other Optimization Topics

Papers covering miscellaneous optimization topics, including proximal algorithms, tensor completion, and learning-to-optimize frameworks.

1. TFPnP: Tuning-Free Plug-and-Play Proximal Algorithms with Applications to Inverse Imaging Problems
Authors: Kaixuan Wei, Angelica Aviles-Rivero, Jingwei Liang, Ying Fu, Hua Huang, Carola-Bibiane Schönlieb
Description: Introduces tuning-free proximal algorithms for inverse imaging problems.

2. On the Complexity of Approximating Multimarginal Optimal Transport
Authors: Tianyi Lin, Nhat Ho, Marco Cuturi, Michael I. Jordan
Description: Analyzes the complexity of approximating multimarginal optimal transport problems.

3. Riemannian Stochastic Proximal Gradient Methods for Nonsmooth Optimization over the Stiefel Manifold
Authors: Bokun Wang, Shiqian Ma, Lingzhou Xue
Description: Proposes stochastic proximal gradient methods for nonsmooth optimization on the Stiefel manifold.

4. Provable Tensor-Train Format Tensor Completion by Riemannian Optimization
Authors: Jian-Feng Cai, Jingyang Li, Dong Xia
Description: Develops Riemannian optimization for tensor-train format tensor completion.

5. Let’s Make Block Coordinate Descent Converge Faster: Faster Greedy Rules, Message-Passing, Active-Set Complexity, and Superlinear Convergence
Authors: Julie Nutini, Issam Laradji, Mark Schmidt
Description: Enhances block coordinate descent with faster convergence techniques.

6. On the Efficiency of Entropic Regularized Algorithms for Optimal Transport
Authors: Tianyi Lin, Nhat Ho, Michael I. Jordan
Description: Studies entropic regularization for efficient optimal transport algorithms (a Sinkhorn iteration sketch appears after this list).

7. Explicit Convergence Rates of Greedy and Random Quasi-Newton Methods
Authors: Dachao Lin, Haishan Ye, Zhihua Zhang
Description: Provides explicit convergence rates for greedy and random quasi-Newton methods.

8. Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements
Authors: Tian Tong, Cong Ma, Ashley Prater-Bennette, Erin Tripp, Yuejie Chi
Description: Addresses nonconvex low-rank tensor estimation with provable guarantees.

9. Learning to Optimize: A Primer and A Benchmark
Authors: Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin
Description: Provides a primer and benchmark for learning-to-optimize techniques.

10. Clustering with Semidefinite Programming and Fixed Point Iteration
Authors: Pedro Felzenszwalb, Caroline Klivans, Alice Paul
Description: Uses semidefinite programming and fixed-point iteration for clustering optimization.

11. A Bregman Learning Framework for Sparse Neural Networks
Authors: Leon Bungert, Tim Roith, Daniel Tenbrinck, Martin Burger
Description: Introduces a Bregman learning framework for optimizing sparse neural networks.

12. When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint
Authors: Yoav Freund, Yi-An Ma, Tong Zhang
Description: Analyzes dimension-independent convergence of Langevin algorithms from a composite optimization perspective.

13. Sparse Continuous Distributions and Fenchel-Young Losses
Authors: André F. T. Martins, Marcos Treviso, António Farinhas, Pedro M. Q. Aguiar, Mário A. T. Figueiredo, Mathieu Blondel, Vlad Niculae
Description: Explores sparse continuous distributions using Fenchel-Young losses for optimization.

14. Handling Hard Affine SDP Shape Constraints in RKHSs
Authors: Pierre-Cyril Aubin-Frankowski, Zoltan Szabo
Description: Addresses affine SDP constraints in reproducing kernel Hilbert spaces for optimization.

15. OMLT: Optimization & Machine Learning Toolkit
Authors: Francesco Ceccon, Jordan Jalving, Joshua Haddad, Alexander Thebelt, Calvin Tsay, Carl D Laird, Ruth Misener
Description: Presents OMLT, a toolkit integrating optimization and machine learning techniques.
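Entries 2 and 6 analyze entropic regularization for optimal transport, whose workhorse is the Sinkhorn algorithm: alternately rescale the rows and columns of the Gibbs kernel K = exp(-C / eps) until the transport plan matches both marginals. Below is a minimal two-marginal sketch on a toy problem of our own construction, not the papers' multimarginal or complexity analyses.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropic-regularized OT between marginals a, b with cost matrix C."""
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)               # rescale to match column marginals
        u = a / (K @ v)                 # rescale to match row marginals
    return u[:, None] * K * v[None, :]  # transport plan

x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2      # squared-distance cost on a line
a = b = np.full(5, 0.2)                 # uniform marginals
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))     # both approach the marginals a and b
```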