Title: Scalable stochastic gradient descent with improved confidence
Abstract: Stochastic gradient descent methods have been quite successful for solving large-scale and online learning problems. We provide a simple parallel framework to obtain solutions of high confidence, where the confidence can be easily controlled by the number of processes, independently of the length of the learning process. Our framework is implemented as scalable open-source software that can be configured for a single multicore machine or for a cluster of computers, where the training outcomes from independent parallel processes are combined to produce the final output.
Is part of: NIPS Workshop on Big Learning -- Algorithms, Systems, and Tools for Learning at Scale
Appears in Collections: Sonderforschungsbereich (SFB) 876
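The abstract describes running independent SGD processes in parallel and combining their training outcomes into a final output. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the combination rule assumed here (keeping the candidate with the lowest held-out loss, so that confidence grows with the number of processes) and all names such as sgd_run and parallel_sgd are hypothetical.

import numpy as np
from multiprocessing import Pool

def sgd_run(args):
    """One independent SGD process: least-squares linear regression."""
    X, y, seed, epochs, lr = args
    rng = np.random.default_rng(seed)        # each process gets its own seed
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):    # one pass over the data in random order
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of the squared error on example i
            w -= lr * grad
    return w

def parallel_sgd(X, y, X_val, y_val, k=8, epochs=5, lr=0.01):
    """Run k independent SGD processes in parallel and combine the outcomes
    (assumed here: keep the candidate with the lowest validation loss)."""
    with Pool(processes=k) as pool:
        candidates = pool.map(sgd_run, [(X, y, s, epochs, lr) for s in range(k)])
    losses = [np.mean((X_val @ w - y_val) ** 2) for w in candidates]
    return candidates[int(np.argmin(losses))]

A call site would guard the entry point, e.g. if __name__ == "__main__": w = parallel_sgd(X, y, X_val, y_val), since multiprocessing re-imports the module on spawn-based platforms.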
Files in This Item:
lee_bockermann_2011a_2.pdf (Adobe PDF, 185.65 kB)
This item is protected by original copyright