Nam Le

Optimization Research Papers in JMLR Volume 21 (2020)

This document lists papers from JMLR Volume 21 (2020) that focus on optimization research, categorized by their primary themes. Each paper is numbered starting from 1 within its subsection, with a brief description of its key contributions to optimization theory, algorithms, or applications.

Convex Optimization

Papers addressing convex optimization problems, including complexity bounds, convergence analysis, and applications in regression and assortment optimization.

  1. A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints
    Authors: Hao Yu, Michael J. Neely
    Description: Proposes a low-complexity algorithm for online convex optimization with long-term constraints, achieving O(√T) regret while keeping cumulative constraint violations at O(1); a generic online-gradient-descent baseline is sketched after this list.

  2. Lower Bounds for Parallel and Randomized Convex Optimization
    Authors: Jelena Diakonikolas, Cristóbal Guzmán
    Description: Establishes lower complexity bounds for parallel and randomized algorithms in convex optimization.

  3. Discerning the Linear Convergence of ADMM for Structured Convex Optimization through the Lens of Variational Analysis
    Authors: Xiaoming Yuan, Shangzhi Zeng, Jin Zhang
    Description: Analyzes the linear convergence of ADMM for structured convex optimization using variational analysis.

  4. A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints
    Authors: Qihang Lin, Selvaprabu Nadarajah, Negar Soheili, Tianbao Yang
    Description: Develops a data-efficient level set method for stochastic convex optimization with expectation constraints.

  5. Conic Optimization for Quadratic Regression Under Sparse Noise
    Authors: Igor Molybog, Ramtin Madani, Javad Lavaei
    Description: Applies conic optimization to quadratic regression under sparse noise conditions.

  6. Dynamic Assortment Optimization with Changing Contextual Information
    Authors: Xi Chen, Yining Wang, Yuan Zhou
    Description: Addresses dynamic assortment optimization with changing contextual information using convex optimization techniques.

  7. Convex Programming for Estimation in Nonlinear Recurrent Models
    Authors: Sohail Bahmani, Justin Romberg
    Description: Uses convex programming for parameter estimation in nonlinear recurrent models.
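
None of these entries comes with code here, but the classical baseline behind item 1 is projected online gradient descent with a diminishing stepsize, which already attains O(√T) regret for convex losses with bounded gradients (Zinkevich, 2003). A minimal sketch, assuming an illustrative l2-ball feasible set and toy quadratic losses of my own:

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0):
    """Projected OGD with eta_t ~ 1/sqrt(t); attains O(sqrt(T))
    regret against the best fixed point in the ball."""
    x = np.zeros(dim)
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x.copy())
        eta = radius / np.sqrt(t)                 # diminishing stepsize
        x = project_l2_ball(x - eta * grad(x), radius)
    return iterates

# toy stream of quadratic losses f_t(x) = 0.5 * ||x - z_t||^2
rng = np.random.default_rng(0)
targets = [0.5 * rng.normal(size=3) for _ in range(100)]
grad_fns = [(lambda x, z=z: x - z) for z in targets]
print("final iterate:", online_gradient_descent(grad_fns, dim=3)[-1])
```

The long-term-constraints setting of item 1 replaces the per-round projection with constraints that only need to hold on average over the horizon, which is what makes O(1) total violation a meaningful target.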

Nonconvex Optimization

Papers tackling nonconvex optimization, focusing on guarantees for local minima, variance reduction, and algorithmic advancements.

  1. Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis
    Authors: Salar Fattahi, Somayeh Sojoudi
    Description: Provides exact guarantees for the absence of spurious local minima in non-negative rank-1 robust PCA.

  2. Stochastic Nested Variance Reduction for Nonconvex Optimization
    Authors: Dongruo Zhou, Pan Xu, Quanquan Gu
    Description: Introduces a stochastic nested variance reduction method for nonconvex optimization.

  3. ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
    Authors: Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Quoc Tran-Dinh
    Description: Proposes ProxSARAH, an efficient framework for stochastic composite nonconvex optimization.

  4. Convergence Rates for the Stochastic Gradient Descent Method for Non-Convex Objective Functions
    Authors: Benjamin Fehrman, Benjamin Gess, Arnulf Jentzen
    Description: Analyzes convergence rates of stochastic gradient descent for nonconvex objective functions.

  5. AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes
    Authors: Rachel Ward, Xiaoxia Wu, Leon Bottou
    Description: Proves sharp convergence guarantees for AdaGrad stepsizes over nonconvex landscapes, showing the method adapts without the stepsize being tuned to problem constants; a minimal AdaGrad-Norm sketch follows this list.

  6. A Sparse Semismooth Newton Based Proximal Majorization-Minimization Algorithm for Nonconvex Square-Root-Loss Regression Problems
    Authors: Peipei Tang, Chengjing Wang, Defeng Sun, Kim-Chuan Toh
    Description: Develops a sparse semismooth Newton-based proximal majorization-minimization algorithm for nonconvex square-root-loss regression.
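
For item 5, the AdaGrad-Norm update is simple enough to state in full: a single scalar stepsize shrinks as squared gradient norms accumulate. A minimal sketch; the toy objective and noise model are mine, not the paper's:

```python
import numpy as np

def adagrad_norm(stoch_grad, x0, eta=1.0, b0=0.1, steps=2000):
    """AdaGrad-Norm: x_{t+1} = x_t - eta / b_{t+1} * g_t, where
    b_{t+1}^2 = b_t^2 + ||g_t||^2 accumulates squared gradient norms."""
    x = x0.astype(float)
    b2 = b0 ** 2
    for _ in range(steps):
        g = stoch_grad(x)
        b2 += g @ g                    # accumulate squared gradient norm
        x = x - eta / np.sqrt(b2) * g
    return x

# toy nonconvex objective f(x) = sum_i x_i^2 + sin(3 x_i), noisy gradients
rng = np.random.default_rng(1)
def stoch_grad(x):
    return 2 * x + 3 * np.cos(3 * x) + 0.1 * rng.normal(size=x.shape)

print(adagrad_norm(stoch_grad, x0=rng.normal(size=5)))
```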

Stochastic Optimization

Papers focusing on stochastic optimization methods, including gradient descent, variance reduction, and robustness to noise.

  1. Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections
    Authors: Junhong Lin, Volkan Cevher
    Description: Analyzes convergence of regularized algorithms and stochastic gradient methods with random projections.

  2. Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
    Authors: Dominic Richards, Patrick Rebeschini
    Description: Studies graph-dependent implicit regularization in distributed stochastic subgradient descent.

  3. Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions
    Authors: Artin Spiridonoff, Alex Olshevsky, Ioannis Ch. Paschalidis
    Description: Proposes a robust asynchronous stochastic gradient-push method with asymptotically optimal performance for strongly convex functions.

  4. On Stationary-Point Hitting Time and Ergodicity of Stochastic Gradient Langevin Dynamics
    Authors: Xi Chen, Simon S. Du, Xin T. Tong
    Description: Investigates stationary-point hitting time and ergodicity in stochastic gradient Langevin dynamics.

  5. Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization
    Authors: Aryan Mokhtari, Hamed Hassani, Amin Karbasi
    Description: Extends stochastic conditional gradient methods from convex minimization to submodular maximization.

  6. A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
    Authors: Aryan Mokhtari, Alec Koppel, Martin Takac, Alejandro Ribeiro
    Description: Introduces parallel doubly stochastic algorithms for large-scale learning.

  7. Gradient Descent for Sparse Rank-One Matrix Completion for Crowd-Sourced Aggregation of Sparsely Interacting Workers
    Authors: Yao Ma, Alex Olshevsky, Csaba Szepesvari, Venkatesh Saligrama
    Description: Applies gradient descent to sparse rank-one matrix completion for crowd-sourced worker aggregation.

  8. Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms
    Authors: Junhong Lin, Volkan Cevher
    Description: Establishes optimal convergence rates for distributed learning using stochastic gradient methods and spectral algorithms.

  9. Estimate Sequences for Stochastic Composite Optimization: Variance Reduction, Acceleration, and Robustness to Noise
    Authors: Andrei Kulunchakov, Julien Mairal
    Description: Develops estimate sequences for stochastic composite optimization with variance reduction and noise robustness.

  10. A Unified q-Memorization Framework for Asynchronous Stochastic Optimization
    Authors: Bin Gu, Wenhan Xian, Zhouyuan Huo, Cheng Deng, Heng Huang
    Description: Proposes a unified q-memorization framework that covers SVRG- and SAGA-style gradient-memorization schemes for asynchronous stochastic optimization.

  11. Asymptotic Analysis via Stochastic Differential Equations of Gradient Descent Algorithms in Statistical and Computational Paradigms
    Authors: Yazhen Wang, Shang Wu
    Description: Analyzes gradient descent algorithms using stochastic differential equations in statistical and computational settings.

  12. The Error-Feedback Framework: SGD with Delayed Gradients
    Authors: Sebastian U. Stich, Sai Praneeth Karimireddy
    Description: Introduces an error-feedback framework for stochastic gradient descent with delayed gradients; a minimal error-feedback sketch, in its better-known compressed-gradient form, follows this list.
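
Item 12's error-feedback principle is easiest to see in the compressed-gradient setting: whatever the compressor drops is stored in a memory vector and re-added before the next compression, so no gradient information is permanently lost. A minimal sketch with an illustrative top-k compressor and toy data of my own (the paper's delayed-gradient analysis is more general):

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(stoch_grad, x0, eta=0.1, k=2, steps=500):
    """SGD with error feedback: compress (memory + eta * g),
    apply the compressed step, keep the residual as new memory."""
    x = x0.astype(float)
    memory = np.zeros_like(x)
    for _ in range(steps):
        p = memory + eta * stoch_grad(x)
        step = top_k(p, k)             # only k coordinates are applied
        memory = p - step              # dropped mass is remembered
        x = x - step
    return x

# toy quadratic: f(x) = 0.5 * ||x - c||^2 with small gradient noise
rng = np.random.default_rng(2)
c = rng.normal(size=5)
x = ef_sgd(lambda x: x - c + 0.01 * rng.normal(size=5), np.zeros(5))
print("distance to optimum:", np.linalg.norm(x - c))
```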

Distributed/Parallel Optimization

Papers addressing distributed or parallel optimization algorithms, focusing on communication efficiency and scalability.

  1. On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
    Authors: Huan Li, Zhouchen Lin
    Description: Analyzes the iteration complexity of the primal solutions recovered from accelerated randomized dual coordinate ascent.

  2. WONDER: Weighted One-shot Distributed Ridge Regression in High Dimensions
    Authors: Edgar Dobriban, Yue Sheng
    Description: Proposes WONDER, a weighted one-shot distributed ridge regression method for high-dimensional data.

  3. GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
    Authors: Anis Elgabli, Jihong Park, Amrit S. Bedi, Mehdi Bennis, Vaneet Aggarwal
    Description: Introduces GADMM, a fast and communication-efficient framework for distributed machine learning.

  4. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction
    Authors: Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi
    Description: Develops communication-efficient distributed optimization methods that combine gradient tracking with variance reduction; the basic gradient-tracking template is sketched after this list.

  5. On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond
    Authors: Xiao-Tong Yuan, Ping Li
    Description: Analyzes convergence of distributed approximate Newton methods with sharper bounds and globalization techniques.
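
Item 4 builds on gradient tracking, in which each agent mixes iterates with its neighbors and maintains a running tracker of the network-average gradient. The sketch below shows only the common template, not the paper's communication-efficiency or variance-reduction layers; the four-agent ring and quadratic objectives are illustrative:

```python
import numpy as np

def gradient_tracking(grads, W, x0, eta=0.05, steps=300):
    """Decentralized gradient tracking.
    grads: per-agent gradient functions grad_i(x)
    W: doubly stochastic mixing matrix, shape (n, n)
    x0: initial iterates, one row per agent, shape (n, d)"""
    x = x0.copy()
    g = np.stack([gi(xi) for gi, xi in zip(grads, x)])
    y = g.copy()                       # trackers start at local gradients
    for _ in range(steps):
        x_new = W @ x - eta * y        # consensus step + tracked descent
        g_new = np.stack([gi(xi) for gi, xi in zip(grads, x_new)])
        y = W @ y + g_new - g          # y_i tracks the average gradient
        x, g = x_new, g_new
    return x.mean(axis=0)

# toy: four agents on a ring, each holding f_i(x) = 0.5 * ||x - c_i||^2;
# the consensus minimizer is the mean of the c_i
rng = np.random.default_rng(3)
centers = rng.normal(size=(4, 3))
grads = [(lambda x, c=c: x - c) for c in centers]
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
print(gradient_tracking(grads, W, x0=np.zeros((4, 3))))
print("mean of centers:  ", centers.mean(axis=0))
```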

Submodular Optimization

Papers focusing on submodular optimization, including minimization and maximization problems.

  1. Quadratic Decomposable Submodular Function Minimization: Theory and Practice
    Authors: Pan Li, Niao He, Olgica Milenkovic
    Description: Studies quadratic decomposable submodular function minimization with theoretical and practical insights.

  2. Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
    Authors: Rad Niazadeh, Tim Roughgarden, Joshua R. Wang
    Description: Develops optimal algorithms for continuous non-monotone submodular and DR-submodular maximization; the classic discrete double-greedy counterpart is sketched after this list.
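
For intuition on item 2, the discrete counterpart of continuous non-monotone maximization is the randomized double greedy of Buchbinder et al. (2012), a 1/2-approximation for unconstrained non-monotone submodular maximization. A minimal sketch on a toy graph-cut objective of my own (cut functions are non-monotone submodular):

```python
import numpy as np

def double_greedy(f, ground_set, rng):
    """Randomized double greedy: X grows from the empty set, Y shrinks
    from the full set; each element is kept or dropped with probability
    proportional to the clipped marginal gains. Requires f >= 0;
    f takes a frozenset."""
    ground_set = list(ground_set)
    X, Y = set(), set(ground_set)
    for e in ground_set:
        a = f(frozenset(X | {e})) - f(frozenset(X))   # gain of adding e
        b = f(frozenset(Y - {e})) - f(frozenset(Y))   # gain of removing e
        a, b = max(a, 0.0), max(b, 0.0)
        p = 1.0 if a + b == 0 else a / (a + b)
        if rng.random() < p:
            X.add(e)
        else:
            Y.discard(e)
    return X                                          # X == Y at the end

# toy objective: value of the cut induced by S in a small graph
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
def cut(S):
    return float(sum((u in S) != (v in S) for u, v in edges))

print(double_greedy(cut, range(4), np.random.default_rng(4)))
```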

Bayesian and Hyperparameter Optimization

Papers addressing Bayesian optimization and hyperparameter tuning for scalable and robust optimization.

  1. Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly
    Authors: Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing
    Description: Introduces Dragonfly, a scalable and robust Bayesian optimization framework for hyperparameter tuning; a generic Bayesian-optimization loop (not Dragonfly's API) is sketched after this list.

  2. Distributionally Ambiguous Optimization for Batch Bayesian Optimization
    Authors: Nikitas Rontsis, Michael A. Osborne, Paul J. Goulart
    Description: Proposes distributionally ambiguous optimization for batch Bayesian optimization.

  3. The Kalai-Smorodinsky Solution for Many-Objective Bayesian Optimization
    Authors: Mickael Binois, Victor Picheny, Patrick Taillandier, Abderrahmane Habbal
    Description: Applies the Kalai-Smorodinsky solution to many-objective Bayesian optimization.

  4. Robust Reinforcement Learning with Bayesian Optimisation and Quadrature
    Authors: Supratik Paul, Konstantinos Chatzilygeroudis, Kamil Ciosek, Jean-Baptiste Mouret, Michael A. Osborne, Shimon Whiteson
    Description: Integrates Bayesian optimization and quadrature for robust reinforcement learning.
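
As shared background for this section, a Bayesian-optimization loop alternates between fitting a Gaussian-process posterior and maximizing an acquisition function over candidate points. The sketch below, with an RBF kernel and expected improvement over a 1-D grid, is a minimal illustration with a toy objective of my own; it reflects neither Dragonfly's API nor the batch and many-objective extensions above:

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between row-stacked points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """GP posterior mean and stddev at test points Xs."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """Expected improvement for minimization."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(5 * x) + 0.5 * x            # "unknown" objective
grid = np.linspace(0.0, 2.0, 200)[:, None]       # candidate set
X = np.array([[0.1], [1.9]])                     # initial design
y = f(X).ravel()
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next)[0])
print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```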

Optimization in Reinforcement Learning

Papers focusing on optimization techniques for policy optimization and reinforcement learning.

  1. Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
    Authors: Dhruv Malik, Ashwin Pananjady, Kush Bhatia, Koulik Khamaru, Peter L. Bartlett, Martin J. Wainwright
    Description: Develops derivative-free methods for policy optimization in linear quadratic systems with guarantees.

  2. Expected Policy Gradients for Reinforcement Learning
    Authors: Kamil Ciosek, Shimon Whiteson
    Description: Introduces expected policy gradients, which integrate over the action space analytically to reduce the variance of policy-gradient estimates.

  3. Importance Sampling Techniques for Policy Optimization
    Authors: Alberto Maria Metelli, Matteo Papini, Nico Montali, Marcello Restelli
    Description: Proposes importance sampling techniques for sample-efficient policy optimization in reinforcement learning; the plain importance-sampling estimator underlying such methods is sketched after this list.
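
The starting point for item 3 is the plain importance-sampling estimator: trajectories collected under a behavior policy are reweighted by the product of per-step likelihood ratios so that they estimate the return of a target policy, and controlling the variance of these weights is the hard part. A minimal sketch with a toy one-step problem of my own:

```python
import numpy as np

def is_return_estimate(trajectories, logp_target, logp_behavior):
    """Per-trajectory importance sampling: each trajectory is weighted
    by exp(sum_t log pi(a_t|s_t) - log b(a_t|s_t))."""
    estimates = []
    for states, actions, rewards in trajectories:
        logw = sum(logp_target(s, a) - logp_behavior(s, a)
                   for s, a in zip(states, actions))
        estimates.append(np.exp(logw) * sum(rewards))
    return float(np.mean(estimates))

# toy one-step problem: two actions, reward equals the action index,
# so the target policy's expected return is target_p[1] = 0.2
rng = np.random.default_rng(5)
behavior_p, target_p = np.array([0.5, 0.5]), np.array([0.8, 0.2])
trajs = []
for _ in range(5000):
    a = rng.choice(2, p=behavior_p)
    trajs.append(([0], [a], [float(a)]))
logp_t = lambda s, a: np.log(target_p[a])
logp_b = lambda s, a: np.log(behavior_p[a])
print(is_return_estimate(trajs, logp_t, logp_b))   # close to 0.2
```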

Other Optimization Topics

Papers covering miscellaneous optimization topics, including dictionary learning, neural network verification, and differential privacy.

  1. Learning with Fenchel-Young Losses
    Authors: Mathieu Blondel, André F.T. Martins, Vlad Niculae
    Description: Introduces Fenchel-Young losses, a family of convex losses generated from regularized prediction functions that unifies many losses used in probabilistic classification and structured prediction.

  2. Branch and Bound for Piecewise Linear Neural Network Verification
    Authors: Rudy Bunel, Jingyue Lu, Ilker Turkaslan, Philip H.S. Torr, Pushmeet Kohli, M. Pawan Kumar
    Description: Applies branch and bound techniques for piecewise linear neural network verification.

  3. Conjugate Gradients for Kernel Machines
    Authors: Simon Bartels, Philipp Hennig
    Description: Develops conjugate gradient methods for kernel machines; a plain CG solver for kernel ridge regression is sketched at the end of this list.

  4. Unique Sharp Local Minimum in L1-Minimization Complete Dictionary Learning
    Authors: Yu Wang, Siqi Wu, Bin Yu
    Description: Analyzes unique sharp local minima in L1-minimization for complete dictionary learning.

  5. Community-Based Group Graphical Lasso
    Authors: Eugen Pircalabelu, Gerda Claeskens
    Description: Proposes a community-based group graphical lasso for estimating sparse graphical models with community structure.

  6. Constrained Dynamic Programming and Supervised Penalty Learning Algorithms for Peak Detection in Genomic Data
    Authors: Toby Dylan Hocking, Guillem Rigaill, Paul Fearnhead, Guillaume Bourque
    Description: Develops constrained dynamic programming and supervised penalty learning for peak detection in genomic data.

  7. Loss Control with Rank-One Covariance Estimate for Short-Term Portfolio Optimization
    Authors: Zhao-Rong Lai, Liming Tan, Xiaotian Wu, Liangda Fang
    Description: Applies rank-one covariance estimation for loss control in short-term portfolio optimization.

  8. A Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks
    Authors: Owen Marschall, Kyunghyun Cho, Cristina Savin
    Description: Proposes a unified framework of online learning algorithms for training recurrent neural networks.

  9. Chaining Meets Chain Rule: Multilevel Entropic Regularization and Training of Neural Networks
    Authors: Amir R. Asadi, Emmanuel Abbe
    Description: Introduces multilevel entropic regularization for training neural networks, combining chaining techniques with the chain rule.

  10. Nesterov’s Acceleration for Approximate Newton
    Authors: Haishan Ye, Luo Luo, Zhihua Zhang
    Description: Applies Nesterov’s acceleration to approximate Newton methods for optimization.

  11. New Insights and Perspectives on the Natural Gradient Method
    Authors: James Martens
    Description: Provides new insights into the natural gradient method for optimization.

  12. Complete Dictionary Learning via L4-Norm Maximization over the Orthogonal Group
    Authors: Yuexiang Zhai, Zitong Yang, Zhenyu Liao, John Wright, Yi Ma
    Description: Develops complete dictionary learning via L4-norm maximization over the orthogonal group.

  13. Empirical Risk Minimization in the Non-Interactive Local Model of Differential Privacy
    Authors: Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu
    Description: Studies empirical risk minimization in the non-interactive local model of differential privacy.

  14. Stable Regression: On the Power of Optimization over Randomization
    Authors: Dimitris Bertsimas, Ivan Paskov
    Description: Analyzes the power of optimization over randomization in stable regression.

  15. Fast Exact Matrix Completion: A Unified Optimization Framework for Matrix Completion
    Authors: Dimitris Bertsimas, Michael Lingzhi Li
    Description: Proposes a unified optimization framework for fast exact matrix completion.

  16. Rank-Based Lasso - Efficient Methods for High-Dimensional Robust Model Selection
    Authors: Wojciech Rejchel, Małgorzata Bogdan
    Description: Develops rank-based Lasso methods for high-dimensional robust model selection.
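
Finally, as a companion to item 3 above (conjugate gradients for kernel machines): CG solves the regularized kernel system through matrix-vector products alone, with no factorization of the kernel matrix. A minimal sketch on kernel ridge regression with an illustrative RBF kernel and toy data; the paper's own analysis and stopping rules are not reproduced here:

```python
import numpy as np

def conjugate_gradients(matvec, b, tol=1e-8, max_iter=200):
    """Plain CG for a symmetric positive-definite system A x = b,
    where A is available only through matvec(v) = A @ v."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# kernel ridge regression: solve (K + lam * I) alpha = y
rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X).ravel() + 0.05 * rng.normal(size=50)
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.3 ** 2)   # RBF Gram matrix
lam = 1e-3
alpha = conjugate_gradients(lambda v: K @ v + lam * v, y)
print("residual norm:", np.linalg.norm(K @ alpha + lam * alpha - y))
```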
