Optimization Research Papers in JMLR Volume 24 (2023) #
This document lists papers from JMLR Volume 24 (2023) that focus on optimization research, grouped by primary theme. Each entry gives the paper's title, authors, and a brief description of its key contributions to optimization theory, algorithms, or applications.
Convex Optimization #
Papers addressing convex optimization problems, including sparse PCA, L0 regularization, and matrix decomposition.
Sparse PCA: A Geometric Approach
Authors: Dimitris Bertsimas, Driss Lahlou Kitane
Description: Develops a geometric approach for sparse principal component analysis using convex optimization techniques.
Fundamental Limits and Algorithms for Sparse Linear Regression with Sublinear Sparsity
Authors: Lan V. Truong
Description: Investigates algorithms and theoretical limits for sparse linear regression with sublinear sparsity in a convex framework.
Sparse Training with Lipschitz Continuous Loss Functions and a Weighted Group L0-norm Constraint
Authors: Michael R. Metel
Description: Proposes sparse training methods using Lipschitz continuous loss functions and group L0-norm constraints.
MARS: A Second-Order Reduction Algorithm for High-Dimensional Sparse Precision Matrices Estimation
Authors: Qian Li, Binyan Jiang, Defeng Sun
Description: Presents a second-order reduction algorithm for sparse precision matrix estimation using convex optimization.
Sparse GCA and Thresholded Gradient Descent
Authors: Sheng Gao, Zongming Ma
Description: Develops sparse generalized correlation analysis with thresholded gradient descent in a convex framework.
A Parameter-Free Conditional Gradient Method for Composite Minimization under Hölder Condition
Authors: Masaru Ito, Zhaosong Lu, Chuan He
Description: Introduces a parameter-free conditional gradient method for composite minimization under Hölder smoothness.
L0Learn: A Scalable Package for Sparse Learning using L0 Regularization
Authors: Hussein Hazimeh, Rahul Mazumder, Tim Nonet
Description: Presents a scalable package for sparse learning with L0 regularization in convex optimization.
Sparse Plus Low Rank Matrix Decomposition: A Discrete Optimization Approach
Authors: Dimitris Bertsimas, Ryan Cory-Wright, Nicholas A. G. Johnson
Description: Proposes a discrete optimization approach for sparse plus low-rank matrix decomposition using convex methods.
Distributed Sparse Regression via Penalization
Authors: Yao Ji, Gesualdo Scutari, Ying Sun, Harsha Honnappa
Description: Develops distributed sparse regression algorithms using penalization techniques in convex optimization.
Elastic Gradient Descent, an Iterative Optimization Method Approximating the Solution Paths of the Elastic Net
Authors: Oskar Allerbo, Johan Jonasson, Rebecka Jörnsten
Description: Introduces an iterative method approximating elastic net solution paths in convex settings.
A Novel Integer Linear Programming Approach for Global L0 Minimization
Authors: Diego Delle Donne, Matthieu Kowalski, Leo Liberti
Description: Proposes an integer linear programming approach for global L0 minimization in convex optimization.
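Many of the papers above revolve around L0-constrained or L0-regularized estimation. As a rough, self-contained illustration of the kind of problem they target, here is a minimal iterative-hard-thresholding sketch for L0-constrained least squares; the function names, step-size choice, and constants are illustrative only, not taken from any of the papers listed:

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def iht(X, y, k, iters=300):
    """Iterative hard thresholding for min ||y - Xw||^2 s.t. ||w||_0 <= k:
    a gradient step on the least-squares loss, followed by projection onto
    the set of k-sparse vectors."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # safe step: 1/L for the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = hard_threshold(w - step * X.T @ (X @ w - y), k)
    return w
```

On well-conditioned noiseless problems this recovers the true support; when and how fast such guarantees hold is exactly the kind of question the papers above study.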
Nonconvex Optimization #
Papers tackling nonconvex optimization, focusing on descent algorithms, majorization minimization, and minimax problems.
A Line-Search Descent Algorithm for Strict Saddle Functions with Complexity Guarantees
Authors: Michael J. O’Neill, Stephen J. Wright
Description: Develops a line-search descent algorithm for nonconvex strict saddle functions with complexity guarantees.
An Inertial Block Majorization Minimization Framework for Nonsmooth Nonconvex Optimization
Authors: Le Thi Khanh Hien, Duy Nhat Phan, Nicolas Gillis
Description: Proposes an inertial block majorization minimization framework for nonsmooth nonconvex optimization.
Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the O(epsilon^(-7/4)) Complexity
Authors: Huan Li, Zhouchen Lin
Description: Introduces a restarted accelerated gradient descent method for nonconvex optimization, eliminating polylogarithmic factors.
Preconditioned Gradient Descent for Overparameterized Nonconvex Burer-Monteiro Factorization with Global Optimality Certification
Authors: Gavin Zhang, Salar Fattahi, Richard Y. Zhang
Description: Develops preconditioned gradient descent for nonconvex Burer-Monteiro factorization with global optimality guarantees.
Zeroth-Order Alternating Gradient Descent Ascent Algorithms for A Class of Nonconvex-Nonconcave Minimax Problems
Authors: Zi Xu, Zi-Qi Wang, Jun-Lin Wang, Yu-Hong Dai
Description: Proposes zeroth-order alternating gradient descent ascent for nonconvex-nonconcave minimax problems.
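As a toy illustration of the alternating descent/ascent schemes in this section, the sketch below runs zeroth-order (finite-difference) gradient descent ascent on a simple saddle. All names and step sizes are illustrative, and the convex-concave test function used in practice deliberately sidesteps the nonconvex-nonconcave difficulties the paper above actually addresses:

```python
import numpy as np

def zo_grad(f, x, eps=1e-5):
    """Two-point finite-difference gradient estimate (zeroth-order:
    uses only evaluations of f, no analytic gradient)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def zo_gda(f, x0, y0, lr=0.05, iters=2000):
    """Alternating gradient descent (in x) / ascent (in y) on f(x, y),
    using only zeroth-order gradient estimates."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x - lr * zo_grad(lambda u: f(u, y), x)
        y = y + lr * zo_grad(lambda v: f(x, v), y)
    return x, y
```

On a strongly-convex-strongly-concave saddle such as f(x, y) = 0.5 x'x + x'y - 0.5 y'y, these iterates spiral into the saddle point at the origin.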
Stochastic Optimization #
Papers focusing on stochastic optimization methods, including gradient descent, proximal point methods, and continuous-time approaches.
On the Convergence of Stochastic Gradient Descent with Bandwidth-Based Step Size
Authors: Xiaoyu Wang, Ya-xiang Yuan
Description: Analyzes convergence of stochastic gradient descent with bandwidth-based step sizes.
Stochastic Optimization under Distributional Drift
Authors: Joshua Cutler, Dmitriy Drusvyatskiy, Zaid Harchaoui
Description: Studies stochastic optimization under distributional drift with theoretical guarantees.
Improved Powered Stochastic Optimization Algorithms for Large-Scale Machine Learning
Authors: Zhuang Yang
Description: Proposes improved powered stochastic optimization algorithms for large-scale machine learning.
Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Authors: Xiao-Tong Yuan, Ping Li
Description: Provides a sharper analysis of minibatch stochastic proximal point methods, focusing on stability and smoothness.
A Continuous-Time Stochastic Gradient Descent Method for Continuous Data
Authors: Kexin Jin, Jonas Latz, Chenguang Liu, Carola-Bibiane Schönlieb
Description: Introduces a continuous-time stochastic gradient descent method for continuous data optimization.
Sensitivity-Free Gradient Descent Algorithms
Authors: Ion Matei, Maksym Zhenirovskyy, Johan de Kleer, John Maxwell
Description: Develops sensitivity-free gradient descent algorithms for stochastic optimization.
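As a concrete anchor for this section, here is a bare-bones single-sample SGD loop whose step size is kept inside a shrinking band. Note that banded_step is only one naive reading of a "bandwidth-based" rule for illustration; the actual scheme analyzed in the paper above may differ, and all names and constants are this sketch's own:

```python
import numpy as np

def sgd(X, y, step_fn, epochs=20, seed=0):
    """Single-sample SGD on the least-squares loss 0.5*(x.w - y)^2;
    step_fn(t) returns the step size at global iteration t."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(X.shape[0]):
            g = (X[i] @ w - y[i]) * X[i]  # stochastic gradient at sample i
            w -= step_fn(t) * g
            t += 1
    return w

def banded_step(t, lo=0.01, hi=0.1):
    """An underlying schedule clipped into the band [lo, hi]/sqrt(t+1):
    a simple stand-in for a bandwidth-based step-size rule."""
    base = 1.0 / (1.0 + 0.05 * t)  # some underlying decaying schedule
    return float(np.clip(base, lo, hi) / np.sqrt(t + 1))
```

On a noiseless linear-regression instance, these clipped steps drive the residual close to zero while keeping early iterations from overshooting.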
Distributed/Decentralized Optimization #
Papers addressing distributed or decentralized optimization algorithms, focusing on federated learning, asynchronous updates, and network topology.
Decentralized Learning: Theoretical Optimality and Practical Improvements
Authors: Yucheng Lu, Christopher De Sa
Description: Analyzes theoretical optimality and practical improvements for decentralized learning algorithms.
A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates
Authors: Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi
Description: Provides a general theory for federated optimization with asynchronous and heterogeneous client updates.
Buffered Asynchronous SGD for Byzantine Learning
Authors: Yi-Rui Yang, Wu-Jun Li
Description: Proposes buffered asynchronous SGD for Byzantine-resilient distributed learning.
Minimax Estimation for Personalized Federated Learning: An Alternative Between FedAvg and Local Training
Authors: Shuxiao Chen, Qinqing Zheng, Qi Long, Weijie J. Su
Description: Investigates minimax estimation for personalized federated learning, comparing FedAvg and local training.
Removing Data Heterogeneity Influence Enhances Network Topology Dependence of Decentralized SGD
Authors: Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang
Description: Enhances decentralized SGD by addressing data heterogeneity and network topology dependence.
Multi-Consensus Decentralized Accelerated Gradient Descent
Authors: Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang
Description: Develops multi-consensus decentralized accelerated gradient descent for distributed optimization.
Accelerated Primal-Dual Mirror Dynamics for Centralized and Distributed Constrained Convex Optimization Problems
Authors: You Zhao, Xiaofeng Liao, Xing He, Mingliang Zhou, Chaojie Li
Description: Proposes accelerated primal-dual mirror dynamics for centralized and distributed convex optimization.
Beyond Spectral Gap: The Role of the Topology in Decentralized Learning
Authors: Thijs Vogels, Hadrien Hendrikx, Martin Jaggi
Description: Examines the role of network topology in decentralized learning optimization.
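Most decentralized methods in this section interleave local gradient steps with neighbourhood averaging through a doubly-stochastic mixing matrix W. The sketch below isolates that consensus (gossip) step on a ring topology; the function names and weights are illustrative. The rate at which repeated averaging approaches the network-wide mean is governed by the second-largest eigenvalue of W, i.e. the spectral gap that the topology-focused papers above analyze:

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly-stochastic mixing matrix for a ring of n nodes: each node
    keeps half its value and averages the rest with its two neighbours."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

def gossip(x, W, steps):
    """Repeated neighbourhood averaging x <- W x: the consensus step
    inside decentralized SGD, run in isolation."""
    for _ in range(steps):
        x = W @ x
    return x
```

Starting from arbitrary per-node values, every coordinate converges geometrically to the global average.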
Bandits and Online Learning #
Papers addressing multi-armed bandits, online optimization, and regret minimization.
Adaptation to the Range in K-Armed Bandits
Authors: Hédi Hadiji, Gilles Stoltz
Description: Studies adaptation to the range in k-armed bandit problems with regret minimization.
Dimension Reduction in Contextual Online Learning via Nonparametric Variable Selection
Authors: Wenhao Li, Ningyuan Chen, L. Jeff Hong
Description: Proposes dimension reduction techniques for contextual online learning with nonparametric variable selection.
Non-Stationary Online Learning with Memory and Non-Stochastic Control
Authors: Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou
Description: Investigates non-stationary online learning with memory and non-stochastic control strategies.
Online Non-Stochastic Control with Partial Feedback
Authors: Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
Description: Develops online non-stochastic control methods with partial feedback for optimization.
A New Look at Dynamic Regret for Non-Stationary Stochastic Bandits
Authors: Yasin Abbasi-Yadkori, András György, Nevena Lazić
Description: Analyzes dynamic regret in non-stationary stochastic bandit problems.
A PDE Approach for Regret Bounds under Partial Monitoring
Authors: Erhan Bayraktar, Ibrahim Ekren, Xin Zhang
Description: Uses a PDE-based approach to derive regret bounds for partial monitoring in online learning.
Continuous-in-Time Limit for Bayesian Bandits
Authors: Yuhua Zhu, Zachary Izzo, Lexing Ying
Description: Explores the continuous-time limit for Bayesian bandit algorithms with theoretical guarantees.
Bandit Problems with Fidelity Rewards
Authors: Gábor Lugosi, Ciara Pike-Burke, Pierre-André Savalle
Description: Studies bandit problems with fidelity rewards, focusing on regret minimization.
Linear Partial Monitoring for Sequential Decision Making: Algorithms, Regret Bounds and Applications
Authors: Johannes Kirschner, Tor Lattimore, Andreas Krause
Description: Develops algorithms and regret bounds for linear partial monitoring in sequential decision-making.
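For readers new to the area, the classical UCB1 algorithm below illustrates the regret-minimization template these papers refine and extend. This is the textbook algorithm on Bernoulli arms, not any specific paper's method:

```python
import numpy as np

def ucb1(means, horizon, seed=0):
    """UCB1 on Bernoulli arms: play each arm once, then always play the
    arm maximizing (empirical mean + sqrt(2 ln t / pulls))."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)  # number of pulls per arm
    sums = np.zeros(k)    # total reward per arm
    total = 0.0
    for t in range(horizon):
        if t < k:
            arm = t  # initialization: one pull per arm
        else:
            bonus = np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        r = float(rng.random() < means[arm])  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts
```

Over a long horizon the suboptimal arms are pulled only O(log T) times, so almost all pulls concentrate on the best arm.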
Optimization in Reinforcement Learning #
Papers focusing on optimization techniques for reinforcement learning, including actor-critic methods and constrained RL.
Reinforcement Learning for Joint Optimization of Multiple Rewards
Authors: Mridul Agarwal, Vaneet Aggarwal
Description: Focuses on reinforcement learning for optimizing multiple rewards simultaneously.
Provably Sample-Efficient Model-Free Algorithm for MDPs with Peak Constraints
Authors: Qinbo Bai, Vaneet Aggarwal, Ather Gattami
Description: Proposes a sample-efficient model-free algorithm for MDPs with peak constraints.
Off-Policy Actor-Critic with Emphatic Weightings
Authors: Eric Graves, Ehsan Imani, Raksha Kumaraswamy, Martha White
Description: Develops off-policy actor-critic methods with emphatic weightings for RL optimization.
q-Learning for MDPs with General Spaces: Convergence and Near Optimality via Quantization under Weak Continuity
Authors: Yanwei Jia, Xun Yu Zhou
Description: Analyzes q-learning convergence and near-optimality for MDPs with general state spaces.
Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
Authors: Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang
Description: Studies model-based multi-agent RL in zero-sum Markov games with near-optimal sample complexity.
F2A2: Flexible Fully-Decentralized Approximate Actor-Critic for Cooperative Multi-Agent Reinforcement Learning
Authors: Wenhao Li, Bo Jin, Xiangfeng Wang, Junchi Yan, Hongyuan Zha
Description: Proposes a flexible fully-decentralized approximate actor-critic method for cooperative multi-agent RL.
Adaptation Augmented Model-Based Policy Optimization
Authors: Jian Shen, Hang Lai, Minghuan Liu, Han Zhao, Yong Yu, Weinan Zhang
Description: Introduces adaptation-augmented model-based policy optimization for RL.
Single Timescale Actor-Critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees
Authors: Mo Zhou, Jianfeng Lu
Description: Develops a single timescale actor-critic method for linear quadratic regulators with convergence guarantees.
Convex Reinforcement Learning in Finite Trials
Authors: Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, Marcello Restelli
Description: Investigates convex reinforcement learning with finite trials, focusing on optimization techniques.
Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning
Authors: Zihao Li, Boyi Liu, Zhuoran Yang, Zhaoran Wang, Mengdi Wang
Description: Proposes a variational primal-dual policy optimization method for constrained RL.
Instance-Dependent Confidence and Early Stopping for Reinforcement Learning
Authors: Eric Xia, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan
Description: Develops instance-dependent confidence bounds and early stopping strategies for RL optimization.
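As a minimal reference point for the convergence questions above, here is textbook tabular Q-learning on a small deterministic chain MDP. The environment, names, and constants are this sketch's own and do not correspond to any of the papers' algorithms:

```python
import numpy as np

def q_learning_chain(n_states=5, gamma=0.9, lr=0.5, steps=5000, seed=0):
    """Tabular Q-learning on a deterministic chain MDP. Action 0 moves
    left, action 1 moves right; stepping into the last state yields
    reward 1 and resets the episode. Q-learning is off-policy, so a
    uniformly random behaviour policy suffices to learn the greedy
    (always-move-right) policy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    s = 0
    for _ in range(steps):
        a = int(rng.integers(2))  # uniform random behaviour policy
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        done = s2 == n_states - 1
        r = 1.0 if done else 0.0
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += lr * (target - Q[s, a])  # TD update toward the target
        s = 0 if done else s2
    return Q
```

The learned greedy policy moves right from every non-terminal state, and the optimal action values approach gamma^(distance to goal - 1).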
Other Optimization Topics #
Papers covering miscellaneous optimization topics, including Riemannian optimization, matrix completion, and optimal transport.
A Relaxed Inertial Forward-Backward-Forward Algorithm for Solving Monotone Inclusions with Application to GANs
Authors: Radu I. Bot, Michael Sedlmayer, Phan Tu Vuong
Description: Proposes a relaxed inertial forward-backward-forward algorithm for monotone inclusions with applications to GANs.
Discrete Variational Calculus for Accelerated Optimization
Authors: Cédric M. Campos, Alejandro Mahillo, David Martín de Diego
Description: Introduces discrete variational calculus for accelerating optimization processes.
Online Optimization over Riemannian Manifolds
Authors: Xi Wang, Zhipeng Tu, Yiguang Hong, Yingyi Wu, Guodong Shi
Description: Develops online optimization algorithms over Riemannian manifolds.
Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition
Authors: Zhishuai Guo, Yan Yan, Zhuoning Yuan, Tianbao Yang
Description: Analyzes fast convergence for non-convex strongly-concave min-max problems under the Polyak-Łojasiewicz condition.
Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees
Authors: Hamid Reza Feyzmahdavian, Mikael Johansson
Description: Provides new sequence results and sharper guarantees for asynchronous optimization iterations.
The Proximal ID Algorithm
Authors: Ilya Shpitser, Zach Wood-Doughty, Eric J. Tchetgen Tchetgen
Description: Develops the proximal ID algorithm, combining proximal causal inference with nonparametric identification methods.
An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural Network with Group Sparsity
Authors: Wei Liu, Xin Liu, Xiaojun Chen
Description: Develops an inexact augmented Lagrangian algorithm for training leaky ReLU networks with group sparsity.
On the Optimality of Nuclear-Norm-Based Matrix Completion for Problems with Smooth Non-Linear Structure
Authors: Yunhua Xiang, Tianyu Zhang, Xu Wang, Ali Shojaie, Noah Simon
Description: Studies nuclear-norm-based matrix completion for problems with smooth nonlinear structures.
Importance Sparsification for Sinkhorn Algorithm
Authors: Mengyu Li, Jun Yu, Tao Li, Cheng Meng
Description: Proposes importance sparsification techniques for the Sinkhorn algorithm in optimal transport.
Near-Optimal Weighted Matrix Completion
Authors: Oscar López
Description: Investigates near-optimal weighted matrix completion using optimization techniques.
Implicit Regularization and Entrywise Convergence of Riemannian Optimization for Low Tucker-Rank Tensor Completion
Authors: Haifeng Wang, Jinchi Chen, Ke Wei
Description: Analyzes implicit regularization and entrywise convergence in Riemannian optimization for low Tucker-rank tensor completion.
On Unbalanced Optimal Transport: Gradient Methods, Sparsity and Approximation Error
Authors: Quang Minh Nguyen, Hoang H. Nguyen, Yi Zhou, Lam M. Nguyen
Description: Studies gradient methods for unbalanced optimal transport, focusing on sparsity and approximation error.
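Since two entries in this section concern the Sinkhorn algorithm and optimal transport, here is the basic entropy-regularized Sinkhorn iteration they build on. This is a standard textbook sketch with illustrative parameter names, not either paper's refined method:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, iters=500):
    """Sinkhorn iterations for entropy-regularized optimal transport:
    alternately rescale the rows and columns of the Gibbs kernel
    K = exp(-C/reg) so the transport plan matches marginals a and b."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]  # the transport plan
```

The returned plan is nonnegative with marginals a and b (up to the convergence tolerance); the papers above respectively sparsify these iterations and extend them to unbalanced marginals.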