Scalable stochastic gradient descent with improved confidence
Date
2012-02-28
Abstract
Stochastic gradient descent methods have been quite successful at solving large-scale and online learning problems. We provide a simple parallel framework for obtaining solutions with high confidence, where the confidence is controlled by the number of processes, independently of the length of the learning process. Our framework is implemented as scalable open-source software that can be configured for a single multicore machine or for a cluster of computers, where the training outcomes from independent parallel processes are combined to produce the final output.
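
The sketch below illustrates the general scheme the abstract describes: several independent SGD processes run in parallel, and their outcomes are combined into a single output. It is not the authors' released software; the least-squares objective, the hyperparameters, and the combination rule (keeping the candidate with the lowest validation loss) are illustrative assumptions. Under such a rule, each added process multiplies down the chance that the final output is poor, which is one way the number of processes can control confidence.

```python
# Minimal sketch of parallel SGD with a confidence-boosting combination step.
# All names, the objective, and the combination rule are assumptions for
# illustration, not the paper's actual implementation.
import numpy as np
from multiprocessing import Pool

# Synthetic least-squares data, generated deterministically so that every
# worker process sees the same problem.
rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)
X_val = rng.standard_normal((200, d))            # held-out validation set
y_val = X_val @ w_true + 0.1 * rng.standard_normal(200)

def sgd_run(seed, steps=5000, eta=0.01):
    """One independent SGD process on the least-squares objective."""
    local = np.random.default_rng(seed)
    w = np.zeros(d)
    for t in range(steps):
        i = local.integers(n)                    # sample one training example
        grad = (X[i] @ w - y[i]) * X[i]          # gradient of 0.5*(x_i.w - y_i)^2
        w -= eta / np.sqrt(t + 1) * grad         # decaying step size
    return w

def validation_loss(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

if __name__ == "__main__":
    k = 8  # number of parallel processes; more processes -> higher confidence
    with Pool(k) as pool:
        candidates = pool.map(sgd_run, range(k))
    best = min(candidates, key=validation_loss)  # combine: keep the best run
    print("best validation loss:", validation_loss(best))
```

The same pattern extends from one multicore machine to a cluster: each worker only needs a seed and read access to the data, and the combination step touches only the k candidate solutions.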