Nam Le

Optimization Research Papers in JMLR Volume 24 (2023)

This document lists papers from JMLR Volume 24 (2023) that focus on optimization research, categorized by their primary themes. Each paper is numbered starting from 1 within its subsection, with a brief description of its key contributions to optimization theory, algorithms, or applications.

Convex Optimization

Papers addressing convex and sparsity-constrained optimization problems, including sparse PCA, L0 regularization, and matrix decomposition.

  1. Sparse PCA: A Geometric Approach
    Authors: Dimitris Bertsimas, Driss Lahlou Kitane
    Description: Develops a geometric approach for sparse principal component analysis using convex optimization techniques.

  2. Fundamental Limits and Algorithms for Sparse Linear Regression with Sublinear Sparsity
    Authors: Lan V. Truong
    Description: Investigates algorithms and information-theoretic limits for sparse linear regression in the sublinear sparsity regime.

  3. Sparse Training with Lipschitz Continuous Loss Functions and a Weighted Group L0-norm Constraint
    Authors: Michael R. Metel
    Description: Proposes sparse training methods using Lipschitz continuous loss functions and group L0-norm constraints.

  4. MARS: A Second-Order Reduction Algorithm for High-Dimensional Sparse Precision Matrices Estimation
    Authors: Qian Li, Binyan Jiang, Defeng Sun
    Description: Presents a second-order reduction algorithm for sparse precision matrix estimation using convex optimization.

  5. Sparse GCA and Thresholded Gradient Descent
    Authors: Sheng Gao, Zongming Ma
    Description: Develops sparse generalized correlation analysis using a thresholded gradient descent algorithm.

  6. A Parameter-Free Conditional Gradient Method for Composite Minimization under Hölder Condition
    Authors: Masaru Ito, Zhaosong Lu, Chuan He
    Description: Introduces a parameter-free conditional gradient method for composite minimization under Hölder smoothness.

  7. L0Learn: A Scalable Package for Sparse Learning using L0 Regularization
    Authors: Hussein Hazimeh, Rahul Mazumder, Tim Nonet
    Description: Presents L0Learn, a scalable software package for sparse learning with L0 regularization.

  8. Sparse Plus Low Rank Matrix Decomposition: A Discrete Optimization Approach
    Authors: Dimitris Bertsimas, Ryan Cory-Wright, Nicholas A. G. Johnson
    Description: Proposes a discrete optimization approach to sparse plus low-rank matrix decomposition.

  9. Distributed Sparse Regression via Penalization
    Authors: Yao Ji, Gesualdo Scutari, Ying Sun, Harsha Honnappa
    Description: Develops distributed sparse regression algorithms using penalization techniques in convex optimization.

  10. Elastic Gradient Descent, an Iterative Optimization Method Approximating the Solution Paths of the Elastic Net
    Authors: Oskar Allerbo, Johan Jonasson, Rebecka Jörnsten
    Description: Introduces an iterative method approximating elastic net solution paths in convex settings.

  11. A Novel Integer Linear Programming Approach for Global L0 Minimization
    Authors: Diego Delle Donne, Matthieu Kowalski, Leo Liberti
    Description: Proposes an integer linear programming formulation for global L0 minimization.
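
Several entries above (e.g. the thresholded gradient descent and global L0 minimization papers) revolve around the same primitive: a gradient step followed by a hard-thresholding projection onto k-sparse vectors. A minimal sketch of that textbook primitive (iterative hard thresholding, not any listed paper's algorithm; the toy data are invented for illustration):

```python
import numpy as np

def iterative_hard_thresholding(X, y, k, step=None, iters=200):
    """Minimize ||X w - y||^2 subject to ||w||_0 <= k: take gradient
    steps, then keep only the k largest-magnitude coordinates."""
    n, d = X.shape
    if step is None:
        # conservative step size: 1 / (largest eigenvalue of X^T X)
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = w - step * grad
        # hard threshold: zero out all but the top-k entries
        idx = np.argsort(np.abs(w))[:-k]
        w[idx] = 0.0
    return w

# toy problem: 2-sparse ground truth, noiseless Gaussian measurements
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[[3, 7]] = [2.0, -1.5]
y = X @ w_true
w_hat = iterative_hard_thresholding(X, y, k=2)
```

With noiseless measurements and a well-conditioned design, the iteration typically locks onto the true support within a few steps and then converges linearly on it.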

Nonconvex Optimization

Papers tackling nonconvex optimization, focusing on descent algorithms, majorization minimization, and minimax problems.

  1. A Line-Search Descent Algorithm for Strict Saddle Functions with Complexity Guarantees
    Authors: Michael J. O’Neill, Stephen J. Wright
    Description: Develops a line-search descent algorithm for nonconvex strict saddle functions with complexity guarantees.

  2. An Inertial Block Majorization Minimization Framework for Nonsmooth Nonconvex Optimization
    Authors: Le Thi Khanh Hien, Duy Nhat Phan, Nicolas Gillis
    Description: Proposes an inertial block majorization minimization framework for nonsmooth nonconvex optimization.

  3. Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the O(epsilon^(-7/4)) Complexity
    Authors: Huan Li, Zhouchen Lin
    Description: Introduces a restarted accelerated gradient descent method for nonconvex optimization, eliminating polylogarithmic factors.

  4. Preconditioned Gradient Descent for Overparameterized Nonconvex Burer-Monteiro Factorization with Global Optimality Certification
    Authors: Gavin Zhang, Salar Fattahi, Richard Y. Zhang
    Description: Develops preconditioned gradient descent for nonconvex Burer-Monteiro factorization with global optimality guarantees.

  5. Zeroth-Order Alternating Gradient Descent Ascent Algorithms for A Class of Nonconvex-Nonconcave Minimax Problems
    Authors: Zi Xu, Zi-Qi Wang, Jun-Lin Wang, Yu-Hong Dai
    Description: Proposes zeroth-order alternating gradient descent ascent for nonconvex-nonconcave minimax problems.
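
The minimax entry above builds on alternating gradient descent ascent with zeroth-order (function-value-only) gradient estimates. A minimal sketch of that generic scheme, using two-point finite differences on a simple saddle problem (an invented toy, not the paper's algorithm or setting):

```python
import numpy as np

def zo_grad(f, z, mu=1e-4):
    """Two-point finite-difference estimate of the gradient of f at z."""
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = mu
        g[i] = (f(z + e) - f(z - e)) / (2 * mu)
    return g

def zo_alternating_gda(f, x0, y0, eta=0.1, iters=500):
    """min_x max_y f(x, y) using only function evaluations:
    alternate a descent step in x and an ascent step in y,
    each with a finite-difference gradient estimate."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x - eta * zo_grad(lambda u: f(u, y), x)
        y = y + eta * zo_grad(lambda v: f(x, v), y)
    return x, y

# toy saddle problem: f(x, y) = ||x||^2 + 2 x.y - ||y||^2, saddle at (0, 0)
f = lambda x, y: x @ x + 2 * (x @ y) - y @ y
x, y = zo_alternating_gda(f, np.ones(2), np.ones(2))
```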

Stochastic Optimization

Papers focusing on stochastic optimization methods, including gradient descent, proximal point methods, and continuous-time approaches.

  1. On the Convergence of Stochastic Gradient Descent with Bandwidth-Based Step Size
    Authors: Xiaoyu Wang, Ya-xiang Yuan
    Description: Analyzes convergence of stochastic gradient descent with bandwidth-based step sizes.

  2. Stochastic Optimization under Distributional Drift
    Authors: Joshua Cutler, Dmitriy Drusvyatskiy, Zaid Harchaoui
    Description: Studies stochastic optimization under distributional drift with theoretical guarantees.

  3. Improved Powered Stochastic Optimization Algorithms for Large-Scale Machine Learning
    Authors: Zhuang Yang
    Description: Proposes improved powered stochastic optimization algorithms for large-scale machine learning.

  4. Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
    Authors: Xiao-Tong Yuan, Ping Li
    Description: Provides a sharper analysis of minibatch stochastic proximal point methods, focusing on stability and smoothness.

  5. A Continuous-Time Stochastic Gradient Descent Method for Continuous Data
    Authors: Kexin Jin, Jonas Latz, Chenguang Liu, Carola-Bibiane Schönlieb
    Description: Introduces a continuous-time stochastic gradient descent method for continuous data optimization.

  6. Sensitivity-Free Gradient Descent Algorithms
    Authors: Ion Matei, Maksym Zhenirovskyy, Johan de Kleer, John Maxwell
    Description: Develops sensitivity-free gradient descent algorithms for stochastic optimization.
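
The shared baseline behind these papers is SGD with a decaying step size. A minimal sketch on the simplest stochastic objective, mean estimation (a textbook illustration, not any listed paper's method; note that with eta0 = 1 the iterate coincides exactly with the running sample average):

```python
import random

def sgd_mean(samples, eta0=1.0):
    """Estimate the mean of a data stream by SGD on
    F(w) = E[(w - s)^2 / 2]; the stochastic gradient from one
    sample s is (w - s). Uses the classic eta_t = eta0 / t schedule."""
    w = 0.0
    for t, s in enumerate(samples, start=1):
        grad = w - s            # unbiased gradient estimate from one sample
        w -= (eta0 / t) * grad  # with eta0 = 1: w becomes the running mean
    return w

random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(10_000)]
w = sgd_mean(data)
```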

Distributed/Decentralized Optimization

Papers addressing distributed or decentralized optimization algorithms, focusing on federated learning, asynchronous updates, and network topology.

  1. Decentralized Learning: Theoretical Optimality and Practical Improvements
    Authors: Yucheng Lu, Christopher De Sa
    Description: Analyzes theoretical optimality and practical improvements for decentralized learning algorithms.

  2. A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates
    Authors: Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi
    Description: Provides a general theory for federated optimization with asynchronous and heterogeneous client updates.

  3. Buffered Asynchronous SGD for Byzantine Learning
    Authors: Yi-Rui Yang, Wu-Jun Li
    Description: Proposes buffered asynchronous SGD for Byzantine-resilient distributed learning.

  4. Minimax Estimation for Personalized Federated Learning: An Alternative Between FedAvg and Local Training
    Authors: Shuxiao Chen, Qinqing Zheng, Qi Long, Weijie J. Su
    Description: Investigates minimax estimation for personalized federated learning, comparing FedAvg and local training.

  5. Removing Data Heterogeneity Influence Enhances Network Topology Dependence of Decentralized SGD
    Authors: Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang
    Description: Enhances decentralized SGD by addressing data heterogeneity and network topology dependence.

  6. Multi-Consensus Decentralized Accelerated Gradient Descent
    Authors: Haishan Ye, Luo Luo, Ziang Zhou, Tong Zhang
    Description: Develops multi-consensus decentralized accelerated gradient descent for distributed optimization.

  7. Accelerated Primal-Dual Mirror Dynamics for Centralized and Distributed Constrained Convex Optimization Problems
    Authors: You Zhao, Xiaofeng Liao, Xing He, Mingliang Zhou, Chaojie Li
    Description: Proposes accelerated primal-dual mirror dynamics for centralized and distributed convex optimization.

  8. Beyond Spectral Gap: The Role of the Topology in Decentralized Learning
    Authors: Thijs Vogels, Hadrien Hendrikx, Martin Jaggi
    Description: Examines the role of network topology in decentralized learning optimization.
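
A recurring template in this subsection is decentralized SGD: gossip averaging with a doubly stochastic mixing matrix W, followed by local gradient steps, with convergence speed governed by W's spectral gap. A minimal sketch on a 4-node ring with invented local objectives (a generic illustration, not any listed paper's algorithm):

```python
import numpy as np

def decentralized_sgd_step(W, X, grads, eta):
    """One round of decentralized SGD: each node averages its iterate
    with its neighbors' (via mixing matrix W), then takes a local
    gradient step. Entry X[i] is node i's model copy."""
    return W @ X - eta * grads

# ring of 4 nodes; doubly stochastic mixing (self + two neighbors)
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])
# node i holds the local objective (x - b_i)^2 / 2;
# the consensus optimum of the average objective is mean(b)
b = np.array([1.0, 2.0, 3.0, 4.0])
X = np.zeros(4)
for _ in range(300):
    X = decentralized_sgd_step(W, X, grads=X - b, eta=0.05)
```

With a constant step size the nodes agree only up to a residual disagreement term; the network average still converges to the consensus optimum.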

Bandits and Online Learning

Papers addressing multi-armed bandits, online optimization, and regret minimization.

  1. Adaptation to the Range in K-Armed Bandits
    Authors: Hédi Hadiji, Gilles Stoltz
    Description: Studies adaptation to the range in k-armed bandit problems with regret minimization.

  2. Dimension Reduction in Contextual Online Learning via Nonparametric Variable Selection
    Authors: Wenhao Li, Ningyuan Chen, L. Jeff Hong
    Description: Proposes dimension reduction techniques for contextual online learning with nonparametric variable selection.

  3. Non-Stationary Online Learning with Memory and Non-Stochastic Control
    Authors: Peng Zhao, Yu-Hu Yan, Yu-Xiang Wang, Zhi-Hua Zhou
    Description: Investigates non-stationary online learning with memory and non-stochastic control strategies.

  4. Online Non-Stochastic Control with Partial Feedback
    Authors: Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou
    Description: Develops online non-stochastic control methods with partial feedback for optimization.

  5. A New Look at Dynamic Regret for Non-Stationary Stochastic Bandits
    Authors: Yasin Abbasi-Yadkori, András György, Nevena Lazić
    Description: Analyzes dynamic regret in non-stationary stochastic bandit problems.

  6. A PDE Approach for Regret Bounds under Partial Monitoring
    Authors: Erhan Bayraktar, Ibrahim Ekren, Xin Zhang
    Description: Uses a PDE-based approach to derive regret bounds for partial monitoring in online learning.

  7. Continuous-in-Time Limit for Bayesian Bandits
    Authors: Yuhua Zhu, Zachary Izzo, Lexing Ying
    Description: Explores the continuous-time limit for Bayesian bandit algorithms with theoretical guarantees.

  8. Bandit Problems with Fidelity Rewards
    Authors: Gábor Lugosi, Ciara Pike-Burke, Pierre-André Savalle
    Description: Studies bandit problems with fidelity rewards, focusing on regret minimization.

  9. Linear Partial Monitoring for Sequential Decision Making: Algorithms, Regret Bounds and Applications
    Authors: Johannes Kirschner, Tor Lattimore, Andreas Krause
    Description: Develops algorithms and regret bounds for linear partial monitoring in sequential decision-making.
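
The classic primitive behind k-armed bandit papers such as the first entry is optimism via upper confidence bounds. A minimal UCB1 sketch (the textbook algorithm, not any listed paper's method; the Bernoulli arm means are invented):

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: pull each arm once, then repeatedly pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_pulls) (the optimism bonus)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1                       # initialization round
        else:
            a = max(range(n_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(a)
        counts[a] += 1
        sums[a] += r
    return counts

random.seed(1)
means = [0.2, 0.5, 0.8]                     # arm 2 is best
pulls = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
             n_arms=3, horizon=2000)
```

Suboptimal arms are pulled only O(log T / gap^2) times, which is the source of the logarithmic regret bounds these papers refine.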

Optimization in Reinforcement Learning

Papers focusing on optimization techniques for reinforcement learning, including actor-critic methods and constrained RL.

  1. Reinforcement Learning for Joint Optimization of Multiple Rewards
    Authors: Mridul Agarwal, Vaneet Aggarwal
    Description: Focuses on reinforcement learning for optimizing multiple rewards simultaneously.

  2. Provably Sample-Efficient Model-Free Algorithm for MDPs with Peak Constraints
    Authors: Qinbo Bai, Vaneet Aggarwal, Ather Gattami
    Description: Proposes a sample-efficient model-free algorithm for MDPs with peak constraints.

  3. Off-Policy Actor-Critic with Emphatic Weightings
    Authors: Eric Graves, Ehsan Imani, Raksha Kumaraswamy, Martha White
    Description: Develops off-policy actor-critic methods with emphatic weightings for RL optimization.

  4. Q-Learning for MDPs with General Spaces: Convergence and Near Optimality via Quantization under Weak Continuity
    Authors: Ali Devran Kara, Naci Saldi, Serdar Yüksel
    Description: Analyzes Q-learning convergence and near-optimality for MDPs with general state spaces via quantization under weak continuity.

  5. Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
    Authors: Kaiqing Zhang, Sham M. Kakade, Tamer Basar, Lin F. Yang
    Description: Studies model-based multi-agent RL in zero-sum Markov games with near-optimal sample complexity.

  6. F2A2: Flexible Fully-Decentralized Approximate Actor-Critic for Cooperative Multi-Agent Reinforcement Learning
    Authors: Wenhao Li, Bo Jin, Xiangfeng Wang, Junchi Yan, Hongyuan Zha
    Description: Proposes a flexible fully-decentralized approximate actor-critic method for cooperative multi-agent RL.

  7. Adaptation Augmented Model-Based Policy Optimization
    Authors: Jian Shen, Hang Lai, Minghuan Liu, Han Zhao, Yong Yu, Weinan Zhang
    Description: Introduces adaptation-augmented model-based policy optimization for RL.

  8. Single Timescale Actor-Critic Method to Solve the Linear Quadratic Regulator with Convergence Guarantees
    Authors: Mo Zhou, Jianfeng Lu
    Description: Develops a single timescale actor-critic method for linear quadratic regulators with convergence guarantees.

  9. Convex Reinforcement Learning in Finite Trials
    Authors: Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, Marcello Restelli
    Description: Investigates convex reinforcement learning with finite trials, focusing on optimization techniques.

  10. Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning
    Authors: Zihao Li, Boyi Liu, Zhuoran Yang, Zhaoran Wang, Mengdi Wang
    Description: Proposes a variational primal-dual policy optimization method for constrained RL.

  11. Instance-Dependent Confidence and Early Stopping for Reinforcement Learning
    Authors: Eric Xia, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan
    Description: Develops instance-dependent confidence bounds and early stopping strategies for RL optimization.
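
As background for the Q-learning entry above, here is tabular Q-learning on a tiny invented chain MDP (the textbook bootstrapped update, not the papers' general-space or confidence-bound machinery):

```python
import random

def q_learning(n_states, actions, step, episodes=500, alpha=0.5,
               gamma=0.9, eps=0.1):
    """Tabular Q-learning: after each transition (s, a) -> (s2, r),
    move Q(s, a) toward the target r + gamma * max_b Q(s2, b)."""
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s is not None:                 # None marks the terminal state
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            s2, r = step(s, a)
            target = r if s2 is None else (
                r + gamma * max(Q[(s2, b)] for b in actions))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

# 3-state chain: +1 moves right (reward 1 on reaching the end), -1 moves left
def step(s, a):
    s2 = s + a
    if s2 >= 3:
        return None, 1.0                     # reached the goal
    return max(s2, 0), 0.0

random.seed(0)
Q = q_learning(n_states=3, actions=[1, -1], step=step)
```

In this deterministic chain the optimal values are Q(2, +1) = 1, Q(1, +1) = 0.9, Q(0, +1) = 0.81, and the learned table approaches them.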

Other Optimization Topics

Papers covering miscellaneous optimization topics, including Riemannian optimization, matrix completion, and optimal transport.

  1. A Relaxed Inertial Forward-Backward-Forward Algorithm for Solving Monotone Inclusions with Application to GANs
    Authors: Radu I. Bot, Michael Sedlmayer, Phan Tu Vuong
    Description: Proposes a relaxed inertial forward-backward-forward algorithm for monotone inclusions with applications to GANs.

  2. Discrete Variational Calculus for Accelerated Optimization
    Authors: Cédric M. Campos, Alejandro Mahillo, David Martín de Diego
    Description: Introduces discrete variational calculus for accelerating optimization processes.

  3. Online Optimization over Riemannian Manifolds
    Authors: Xi Wang, Zhipeng Tu, Yiguang Hong, Yingyi Wu, Guodong Shi
    Description: Develops online optimization algorithms over Riemannian manifolds.

  4. Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave Min-Max Problems with PL Condition
    Authors: Zhishuai Guo, Yan Yan, Zhuoning Yuan, Tianbao Yang
    Description: Analyzes fast convergence for non-convex strongly-concave min-max problems under the Polyak-Łojasiewicz condition.

  5. Asynchronous Iterations in Optimization: New Sequence Results and Sharper Algorithmic Guarantees
    Authors: Hamid Reza Feyzmahdavian, Mikael Johansson
    Description: Provides new sequence results and sharper guarantees for asynchronous optimization iterations.

  6. The Proximal ID Algorithm
    Authors: Ilya Shpitser, Zach Wood-Doughty, Eric J. Tchetgen Tchetgen
    Description: Introduces the proximal ID algorithm for nonparametric identification of causal effects in the presence of unmeasured confounding.

  7. An Inexact Augmented Lagrangian Algorithm for Training Leaky ReLU Neural Network with Group Sparsity
    Authors: Wei Liu, Xin Liu, Xiaojun Chen
    Description: Develops an inexact augmented Lagrangian algorithm for training leaky ReLU networks with group sparsity.

  8. On the Optimality of Nuclear-Norm-Based Matrix Completion for Problems with Smooth Non-Linear Structure
    Authors: Yunhua Xiang, Tianyu Zhang, Xu Wang, Ali Shojaie, Noah Simon
    Description: Studies nuclear-norm-based matrix completion for problems with smooth nonlinear structures.

  9. Importance Sparsification for Sinkhorn Algorithm
    Authors: Mengyu Li, Jun Yu, Tao Li, Cheng Meng
    Description: Proposes importance sparsification techniques for the Sinkhorn algorithm in optimal transport.

  10. Near-Optimal Weighted Matrix Completion
    Authors: Oscar López
    Description: Investigates near-optimal weighted matrix completion using optimization techniques.

  11. Implicit Regularization and Entrywise Convergence of Riemannian Optimization for Low Tucker-Rank Tensor Completion
    Authors: Haifeng Wang, Jinchi Chen, Ke Wei
    Description: Analyzes implicit regularization and entrywise convergence in Riemannian optimization for low Tucker-rank tensor completion.

  12. On Unbalanced Optimal Transport: Gradient Methods, Sparsity and Approximation Error
    Authors: Quang Minh Nguyen, Hoang H. Nguyen, Yi Zhou, Lam M. Nguyen
    Description: Studies gradient methods for unbalanced optimal transport, focusing on sparsity and approximation error.
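
For the importance-sparsified Sinkhorn entry (paper 9), the underlying primitive is Sinkhorn-Knopp matrix scaling for entropy-regularized optimal transport. A minimal sketch on an invented 3-point problem (the textbook iteration, without the paper's sparsification):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, iters=500):
    """Entropy-regularized optimal transport: alternately rescale the
    rows and columns of K = exp(-C / reg) until the transport plan's
    marginals match a and b (Sinkhorn-Knopp iterations)."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                 # match column marginals
        u = a / (K @ v)                   # match row marginals
    return u[:, None] * K * v[None, :]    # plan P = diag(u) K diag(v)

# toy problem: move mass between two 3-point distributions on a line
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

Each iteration costs one matrix-vector product per marginal, which is exactly the step that sparsification schemes accelerate.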
