Optimization Research Papers in JMLR Volume 21 (2020)

Table of Contents
1. Convex Optimization
2. Nonconvex Optimization
3. Stochastic Optimization
4. Distributed/Parallel Optimization
5. Submodular Optimization
6. Bayesian and Hyperparameter Optimization
7. Optimization in Reinforcement Learning
8. Other Optimization Topics
This document lists papers from JMLR Volume 21 (2020) that focus on optimization research, categorized by their primary themes. Each paper is numbered starting from 1 within its subsection, with a brief description of its key contributions to optimization theory, algorithms, or applications.
Convex Optimization
Papers addressing convex optimization problems, including complexity bounds, convergence analysis, and applications in regression and assortment optimization. A short code sketch of the online projected-gradient baseline follows the list.
1. A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints
Authors: Hao Yu, Michael J. Neely
Description: Proposes a low-complexity algorithm for online convex optimization with long-term constraints, achieving O(√T) regret and O(1) constraint violations.

2. Lower Bounds for Parallel and Randomized Convex Optimization
Authors: Jelena Diakonikolas, Cristóbal Guzmán
Description: Establishes lower complexity bounds for parallel and randomized algorithms in convex optimization.

3. Discerning the Linear Convergence of ADMM for Structured Convex Optimization through the Lens of Variational Analysis
Authors: Xiaoming Yuan, Shangzhi Zeng, Jin Zhang
Description: Analyzes the linear convergence of ADMM for structured convex optimization using variational analysis.

4. A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints
Authors: Qihang Lin, Selvaprabu Nadarajah, Negar Soheili, Tianbao Yang
Description: Develops a data-efficient level set method for stochastic convex optimization with expectation constraints.

5. Conic Optimization for Quadratic Regression Under Sparse Noise
Authors: Igor Molybog, Ramtin Madani, Javad Lavaei
Description: Applies conic optimization to quadratic regression under sparse noise conditions.

6. Dynamic Assortment Optimization with Changing Contextual Information
Authors: Xi Chen, Yining Wang, Yuan Zhou
Description: Addresses dynamic assortment optimization with changing contextual information using convex optimization techniques.

7. Convex Programming for Estimation in Nonlinear Recurrent Models
Authors: Sohail Bahmani, Justin Romberg
Description: Uses convex programming for parameter estimation in nonlinear recurrent models.
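To ground the setting of paper 1, here is a minimal sketch of online projected gradient descent, the classical baseline that attains O(√T) regret. It is not the Yu-Neely algorithm (which additionally controls long-term constraint violations); the quadratic losses, unit-ball feasible set, and step-size constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, radius = 1000, 5, 1.0
targets = rng.uniform(-1, 1, size=(T, d))  # adversary's per-round targets

def project(x, r=radius):
    """Euclidean projection onto the ball of radius r."""
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

x = np.zeros(d)
total_loss = 0.0
for t in range(1, T + 1):
    z = targets[t - 1]
    total_loss += np.sum((x - z) ** 2)   # suffer f_t(x_t) = ||x_t - z_t||^2
    grad = 2.0 * (x - z)                 # gradient of f_t at x_t
    x = project(x - (radius / np.sqrt(t)) * grad)  # O(1/sqrt(t)) step + projection

# Best fixed comparator in hindsight: projection of the mean target.
comparator = project(targets.mean(axis=0))
comparator_loss = np.sum((targets - comparator) ** 2)
print(f"regret over T={T} rounds: {total_loss - comparator_loss:.1f}")
```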
Nonconvex Optimization
Papers tackling nonconvex optimization, focusing on guarantees for local minima, variance reduction, and algorithmic advances. A toy AdaGrad-Norm sketch follows the list.
1. Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis
Authors: Salar Fattahi, Somayeh Sojoudi
Description: Provides exact guarantees for the absence of spurious local minima in non-negative rank-1 robust PCA.

2. Stochastic Nested Variance Reduction for Nonconvex Optimization
Authors: Dongruo Zhou, Pan Xu, Quanquan Gu
Description: Introduces a stochastic nested variance reduction method for nonconvex optimization.

3. ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Authors: Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Quoc Tran-Dinh
Description: Proposes ProxSARAH, an efficient framework for stochastic composite nonconvex optimization.

4. Convergence Rates for the Stochastic Gradient Descent Method for Non-Convex Objective Functions
Authors: Benjamin Fehrman, Benjamin Gess, Arnulf Jentzen
Description: Analyzes convergence rates of stochastic gradient descent for nonconvex objective functions.

5. AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes
Authors: Rachel Ward, Xiaoxia Wu, Leon Bottou
Description: Studies sharp convergence of AdaGrad stepsize schedules in nonconvex optimization.

6. A Sparse Semismooth Newton Based Proximal Majorization-Minimization Algorithm for Nonconvex Square-Root-Loss Regression Problems
Authors: Peipei Tang, Chengjing Wang, Defeng Sun, Kim-Chuan Toh
Description: Develops a sparse semismooth Newton-based proximal majorization-minimization algorithm for nonconvex square-root-loss regression.
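As an illustration of the adaptive step sizes studied in paper 5, the sketch below runs AdaGrad-Norm, a single scalar learning rate driven by accumulated squared gradient norms, on a toy nonconvex double-well objective. The objective, noise level, and base step size are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
grad_f = lambda x: 4.0 * x * (x ** 2 - 1.0)  # f(x) = (x^2 - 1)^2: minima at +/-1

x, b_sq, eta = 3.0, 1e-8, 0.5  # b_sq accumulates squared gradient norms
for t in range(5000):
    g = grad_f(x) + rng.normal(scale=0.1)  # stochastic gradient oracle
    b_sq += g * g                          # update the running sum
    x -= (eta / np.sqrt(b_sq)) * g         # AdaGrad-Norm step
print(f"final iterate: {x:.3f}")           # drifts toward the minimizer at +1
```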
Stochastic Optimization
Papers focusing on stochastic optimization methods, including gradient descent, variance reduction, and robustness to noise. A minimal variance-reduction sketch follows the list.
1. Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections
Authors: Junhong Lin, Volkan Cevher
Description: Analyzes convergence of regularized algorithms and stochastic gradient methods with random projections.

2. Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Authors: Dominic Richards, Patrick Rebeschini
Description: Studies graph-dependent implicit regularization in distributed stochastic subgradient descent.

3. Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions
Authors: Artin Spiridonoff, Alex Olshevsky, Ioannis Ch. Paschalidis
Description: Proposes a robust asynchronous stochastic gradient-push method with asymptotically optimal performance for strongly convex functions.

4. On Stationary-Point Hitting Time and Ergodicity of Stochastic Gradient Langevin Dynamics
Authors: Xi Chen, Simon S. Du, Xin T. Tong
Description: Investigates stationary-point hitting time and ergodicity in stochastic gradient Langevin dynamics.

5. Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization
Authors: Aryan Mokhtari, Hamed Hassani, Amin Karbasi
Description: Extends stochastic conditional gradient methods from convex minimization to submodular maximization.

6. A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
Authors: Aryan Mokhtari, Alec Koppel, Martin Takac, Alejandro Ribeiro
Description: Introduces parallel doubly stochastic algorithms for large-scale learning.

7. Gradient Descent for Sparse Rank-One Matrix Completion for Crowd-Sourced Aggregation of Sparsely Interacting Workers
Authors: Yao Ma, Alex Olshevsky, Csaba Szepesvari, Venkatesh Saligrama
Description: Applies gradient descent to sparse rank-one matrix completion for crowd-sourced worker aggregation.

8. Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms
Authors: Junhong Lin, Volkan Cevher
Description: Establishes optimal convergence rates for distributed learning using stochastic gradient methods and spectral algorithms.

9. Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
Authors: Andrei Kulunchakov, Julien Mairal
Description: Develops estimate sequences for stochastic composite optimization with variance reduction and noise robustness.

10. A Unified q-Memorization Framework for Asynchronous Stochastic Optimization
Authors: Bin Gu, Wenhan Xian, Zhouyuan Huo, Cheng Deng, Heng Huang
Description: Proposes a unified q-memorization framework for asynchronous stochastic optimization.

11. Asymptotic Analysis via Stochastic Differential Equations of Gradient Descent Algorithms in Statistical and Computational Paradigms
Authors: Yazhen Wang, Shang Wu
Description: Analyzes gradient descent algorithms using stochastic differential equations in statistical and computational settings.

12. The Error-Feedback Framework: SGD with Delayed Gradients
Authors: Sebastian U. Stich, Sai Praneeth Karimireddy
Description: Introduces an error-feedback framework for stochastic gradient descent with delayed gradients.
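Variance reduction recurs throughout this subsection (e.g., papers 9 and 10). The sketch below shows the core SVRG-style construction on a least-squares problem: each stochastic gradient is re-centered using a full gradient computed at a periodic snapshot. It is a generic textbook variant, not any listed paper's algorithm; the problem sizes and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)  # noisy linear data

def grad_i(w, i):
    """Stochastic gradient of the component (1/2)(a_i^T w - b_i)^2."""
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
eta, epochs, m = 0.005, 30, n
for _ in range(epochs):
    w_snap = w.copy()
    full_grad = A.T @ (A @ w_snap - b) / n  # exact gradient at the snapshot
    for _ in range(m):
        i = rng.integers(n)
        # variance-reduced direction: unbiased, with vanishing variance
        v = grad_i(w, i) - grad_i(w_snap, i) + full_grad
        w -= eta * v
print("relative residual:", np.linalg.norm(A @ w - b) / np.linalg.norm(b))
```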
Distributed/Parallel Optimization
Papers addressing distributed and parallel optimization algorithms, focusing on communication efficiency and scalability. A generic gradient-tracking sketch follows the list.
1. On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
Authors: Huan Li, Zhouchen Lin
Description: Analyzes the complexity of primal solutions for accelerated randomized dual coordinate ascent in distributed settings.

2. WONDER: Weighted One-shot Distributed Ridge Regression in High Dimensions
Authors: Edgar Dobriban, Yue Sheng
Description: Proposes WONDER, a weighted one-shot distributed ridge regression method for high-dimensional data.

3. GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
Authors: Anis Elgabli, Jihong Park, Amrit S. Bedi, Mehdi Bennis, Vaneet Aggarwal
Description: Introduces GADMM, a fast and communication-efficient framework for distributed machine learning.

4. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction
Authors: Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi
Description: Develops communication-efficient distributed optimization with gradient tracking and variance reduction.

5. On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond
Authors: Xiao-Tong Yuan, Ping Li
Description: Analyzes convergence of distributed approximate Newton methods with sharper bounds and globalization techniques.
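A common building block behind communication-efficient decentralized methods such as paper 4's is gradient tracking, where each agent maintains an estimate of the network-wide average gradient. The sketch below runs a generic gradient-tracking iteration (often called DIGing) for scalar quadratics on a ring; the mixing matrix, per-agent objectives, and step size are illustrative assumptions, not that paper's algorithm.

```python
import numpy as np

n = 8
c = np.arange(n, dtype=float)  # agent i holds f_i(x) = 0.5 * (x - c_i)^2

# Doubly stochastic mixing matrix for a ring graph (self + two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - c        # all agents' gradients, vectorized
x = np.zeros(n)               # one local iterate per agent
y = grad(x)                   # trackers initialized to local gradients
eta = 0.1
for _ in range(300):
    x_new = W @ x - eta * y                # consensus + tracked-gradient step
    y = W @ y + grad(x_new) - grad(x)      # track the average gradient
    x = x_new
print("agents' iterates:", np.round(x, 3), "| optimum:", c.mean())
```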
Submodular Optimization
Papers focusing on submodular optimization, covering both minimization and maximization problems. A sketch of the classical greedy baseline follows the list.
1. Quadratic Decomposable Submodular Function Minimization: Theory and Practice
Authors: Pan Li, Niao He, Olgica Milenkovic
Description: Studies quadratic decomposable submodular function minimization with theoretical and practical insights.

2. Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
Authors: Rad Niazadeh, Tim Roughgarden, Joshua R. Wang
Description: Develops optimal algorithms for continuous non-monotone submodular and DR-submodular maximization.
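For context, the discrete baseline behind both papers is the classical greedy algorithm for monotone submodular maximization under a cardinality constraint, which achieves a (1 - 1/e) approximation. The sketch below applies it to a small coverage function, a canonical submodular example; the ground set is hypothetical.

```python
universe_sets = {  # hypothetical ground set: each element covers some items
    "a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}, "e": {8},
}

def coverage(selected):
    """F(S) = number of items covered by the chosen sets (submodular)."""
    if not selected:
        return 0
    return len(set().union(*(universe_sets[s] for s in selected)))

def greedy(k):
    S = []
    for _ in range(k):
        # pick the element with the largest marginal gain F(S + e) - F(S)
        gains = {e: coverage(S + [e]) - coverage(S)
                 for e in universe_sets if e not in S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.append(best)
    return S

S = greedy(k=2)
print(S, "covers", coverage(S), "items")  # greedy carries a (1 - 1/e) guarantee
```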
Bayesian and Hyperparameter Optimization
Papers addressing Bayesian optimization and hyperparameter tuning for scalable and robust optimization. A bare-bones Bayesian optimization loop follows the list.
1. Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly
Authors: Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing
Description: Introduces Dragonfly, a scalable and robust Bayesian optimization framework for hyperparameter tuning.

2. Distributionally Ambiguous Optimization for Batch Bayesian Optimization
Authors: Nikitas Rontsis, Michael A. Osborne, Paul J. Goulart
Description: Proposes distributionally ambiguous optimization for batch Bayesian optimization.

3. The Kalai-Smorodinsky Solution for Many-Objective Bayesian Optimization
Authors: Mickael Binois, Victor Picheny, Patrick Taillandier, Abderrahmane Habbal
Description: Applies the Kalai-Smorodinsky solution to many-objective Bayesian optimization.

4. Robust Reinforcement Learning with Bayesian Optimisation and Quadrature
Authors: Supratik Paul, Konstantinos Chatzilygeroudis, Kamil Ciosek, Jean-Baptiste Mouret, Michael A. Osborne, Shimon Whiteson
Description: Integrates Bayesian optimization and quadrature for robust reinforcement learning.
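All four papers build on the same surrogate-model/acquisition-function loop. The sketch below is a bare-bones version of that loop, not Dragonfly's implementation: a NumPy Gaussian-process posterior with an RBF kernel drives an expected-improvement acquisition over a 1-D candidate grid (SciPy supplies the normal CDF/PDF). The black-box function, length scale, and grid are assumptions.

```python
import numpy as np
from scipy.stats import norm

f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x  # black box to maximize
grid = np.linspace(-2.0, 2.0, 400)               # candidate pool

def rbf(a, b, ell=0.5):
    """RBF kernel matrix between two 1-D arrays of points."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
X = list(rng.uniform(-2, 2, size=3))             # initial random design
Y = [f(x) for x in X]
for _ in range(15):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))     # jitter for stability
    Ks = rbf(grid, Xa)                           # cross-covariances
    mu = Ks @ np.linalg.solve(K, Ya)             # GP posterior mean on grid
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    sigma = np.sqrt(np.clip(var, 1e-12, None))   # GP posterior std dev
    best = Ya.max()                              # incumbent value
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]                 # acquire the EI maximizer
    X.append(x_next); Y.append(f(x_next))
print(f"best found: f({X[int(np.argmax(Y))]:.3f}) = {max(Y):.3f}")
```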
Optimization in Reinforcement Learning
Papers on optimization techniques for reinforcement learning, with an emphasis on policy optimization. A derivative-free policy-search sketch follows the list.
1. Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
Authors: Dhruv Malik, Ashwin Pananjady, Kush Bhatia, Koulik Khamaru, Peter L. Bartlett, Martin J. Wainwright
Description: Develops derivative-free methods for policy optimization in linear quadratic systems with guarantees.

2. Expected Policy Gradients for Reinforcement Learning
Authors: Kamil Ciosek, Shimon Whiteson
Description: Introduces expected policy gradients for reinforcement learning optimization.

3. Importance Sampling Techniques for Policy Optimization
Authors: Alberto Maria Metelli, Matteo Papini, Nico Montali, Marcello Restelli
Description: Proposes importance sampling techniques for efficient policy optimization in reinforcement learning.
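In the spirit of paper 1, the sketch below performs derivative-free (two-point, zeroth-order) policy search on a scalar toy LQR instance: the policy is a single feedback gain k, and gradients of the rollout cost are estimated from paired perturbed evaluations that share a random seed to cut estimator variance. The system parameters, horizon, and step sizes are illustrative assumptions, not the authors' setup.

```python
import numpy as np

a, b, horizon = 1.1, 1.0, 50  # open-loop unstable scalar linear system

def cost(k, seed):
    """Rollout cost of the policy u_t = -k * x_t from a random initial state."""
    rng = np.random.default_rng(seed)
    x, total = rng.normal(), 0.0
    for _ in range(horizon):
        u = -k * x
        total += x * x + u * u      # quadratic state + control cost
        x = a * x + b * u
    return total

k, delta, step = 0.5, 0.05, 0.02
for it in range(200):
    # two-point gradient estimate with common random numbers (same seed)
    g = (cost(k + delta, seed=it) - cost(k - delta, seed=it)) / (2 * delta)
    k -= step * g                   # zeroth-order policy update
print(f"learned gain k = {k:.3f}")  # stabilizes the loop: |a - b*k| < 1
```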
Other Optimization Topics
Papers covering miscellaneous optimization topics, including dictionary learning, neural network verification, and differential privacy. A sketch of classical Nesterov acceleration follows the list.
1. Learning with Fenchel-Young Losses
Authors: Mathieu Blondel, André F.T. Martins, Vlad Niculae
Description: Introduces optimization with Fenchel-Young losses for structured prediction.

2. Branch and Bound for Piecewise Linear Neural Network Verification
Authors: Rudy Bunel, Jingyue Lu, Ilker Turkaslan, Philip H.S. Torr, Pushmeet Kohli, M. Pawan Kumar
Description: Applies branch and bound techniques for piecewise linear neural network verification.

3. Conjugate Gradients for Kernel Machines
Authors: Simon Bartels, Philipp Hennig
Description: Develops conjugate gradient methods for optimization in kernel machines.

4. Unique Sharp Local Minimum in L1-Minimization Complete Dictionary Learning
Authors: Yu Wang, Siqi Wu, Bin Yu
Description: Analyzes unique sharp local minima in L1-minimization for complete dictionary learning.

5. Community-Based Group Graphical Lasso
Authors: Eugen Pircalabelu, Gerda Claeskens
Description: Proposes a community-based group graphical Lasso for structured optimization.

6. Constrained Dynamic Programming and Supervised Penalty Learning Algorithms for Peak Detection in Genomic Data
Authors: Toby Dylan Hocking, Guillem Rigaill, Paul Fearnhead, Guillaume Bourque
Description: Develops constrained dynamic programming and supervised penalty learning for peak detection in genomic data.

7. Loss Control with Rank-One Covariance Estimate for Short-Term Portfolio Optimization
Authors: Zhao-Rong Lai, Liming Tan, Xiaotian Wu, Liangda Fang
Description: Applies rank-one covariance estimation for loss control in short-term portfolio optimization.

8. A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks
Authors: Owen Marschall, Kyunghyun Cho, Cristina Savin
Description: Proposes a unified framework of online learning algorithms for training recurrent neural networks.

9. Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Networks
Authors: Amir R. Asadi, Emmanuel Abbe
Description: Introduces multilevel entropic regularization for neural network training using chaining and the chain rule.

10. Nesterov’s Acceleration for Approximate Newton
Authors: Haishan Ye, Luo Luo, Zhihua Zhang
Description: Applies Nesterov’s acceleration to approximate Newton methods for optimization.

11. New Insights and Perspectives on the Natural Gradient Method
Authors: James Martens
Description: Provides new insights into the natural gradient method for optimization.

12. Complete Dictionary Learning via L4-Norm Maximization over the Orthogonal Group
Authors: Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma
Description: Develops complete dictionary learning via L4-norm maximization over the orthogonal group.

13. Empirical Risk Minimization in the Non-Interactive Local Model of Differential Privacy
Authors: Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu
Description: Studies empirical risk minimization in the non-interactive local model of differential privacy.

14. Stable Regression: On the Power of Optimization over Randomization
Authors: Dimitris Bertsimas, Ivan Paskov
Description: Analyzes the power of optimization over randomization in stable regression.

15. Fast Exact Matrix Completion: A Unified Optimization Framework for Matrix Completion
Authors: Dimitris Bertsimas, Michael Lingzhi Li
Description: Proposes a unified optimization framework for fast exact matrix completion.

16. Rank-Based Lasso - Efficient Methods for High-Dimensional Robust Model Selection
Authors: Wojciech Rejchel, Małgorzata Bogdan
Description: Develops rank-based Lasso methods for high-dimensional robust model selection.
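As a closing illustration, paper 10 combines Nesterov's acceleration with approximate Newton methods; the sketch below shows the acceleration mechanism alone, in its classical first-order form on a strongly convex quadratic. The random problem instance and iteration count are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 50
M = rng.normal(size=(d, d))
H = M.T @ M / d + 0.1 * np.eye(d)  # Hessian of f(x) = 0.5 x^T H x - b^T x
b = rng.normal(size=d)
x_star = np.linalg.solve(H, b)     # exact minimizer, for reference

eigs = np.linalg.eigvalsh(H)
L, mu = eigs[-1], eigs[0]          # smoothness and strong convexity constants
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))  # momentum

x = y = np.zeros(d)
for _ in range(200):
    x_next = y - (H @ y - b) / L       # gradient step from the lookahead point
    y = x_next + beta * (x_next - x)   # momentum extrapolation
    x = x_next
print("distance to optimum:", np.linalg.norm(x - x_star))
```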