Title
Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning
Abstract
The goal of this tutorial is to introduce key models, algorithms, and open questions related to the use of optimization methods for solving problems arising in machine learning. It is written with an INFORMS audience in mind, specifically those readers who are familiar with the basics of optimization algorithms but less familiar with machine learning. We begin by deriving a formulation of a supervised learning problem and show how it leads to various optimization problems, depending on the context and underlying assumptions. We then discuss some of the distinctive features of these optimization problems, focusing on the examples of logistic regression and the training of deep neural networks. The latter half of the tutorial focuses on optimization algorithms, first for convex logistic regression, for which we discuss the use of first-order methods, the stochastic gradient method, variance-reducing stochastic methods, and second-order methods. Finally, we discuss how these approaches can be applied to the training of deep neural networks, emphasizing the difficulties that arise from the complex, nonconvex structure of these models.
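The abstract highlights the stochastic gradient method as a core algorithm for convex logistic regression. As a purely illustrative sketch (not code from the tutorial; the function name, step size, and regularization parameter below are assumptions), a minimal NumPy implementation might look like this:

    import numpy as np

    def sgd_logistic_regression(X, y, lr=0.1, lam=1e-4, epochs=10, seed=0):
        """Minimize (1/n) * sum_i log(1 + exp(-y_i * w.x_i)) + (lam/2) * ||w||^2
        by taking one stochastic gradient step per sampled example.
        Labels y_i are assumed to lie in {-1, +1}."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(epochs):
            for i in rng.permutation(n):
                margin = y[i] * X[i].dot(w)
                # Gradient of the sampled-example logistic loss plus the regularizer.
                grad = -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w
                w -= lr * grad
        return w

    # Toy usage on synthetic, roughly linearly separable data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    w = sgd_logistic_regression(X, y)
    print("training accuracy:", np.mean(np.sign(X.dot(w)) == y))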
Year: 2017
Venue: arXiv: Machine Learning
Field: Online machine learning, Semi-supervised learning, Stability (learning theory), Supervised learning, Unsupervised learning, Artificial intelligence, Deep learning, Artificial neural network, Ensemble learning, Mathematics, Machine learning
DocType: Journal
Volume: abs/1706.10207
Citations: 3
PageRank: 0.40
References: 39
Authors: 2
Name              Order  Citations  PageRank
Frank E. Curtis   1      432        25.71
Katya Scheinberg  2      744        69.50