Scalable stochastic gradient descent with improved confidence

dc.contributor.author: Bockermann, Christian
dc.contributor.author: Lee, Sangkyun
dc.date.accessioned: 2012-02-28T15:39:39Z
dc.date.available: 2012-02-28T15:39:39Z
dc.date.issued: 2012-02-28
dc.description.abstract: Stochastic gradient descent methods have been quite successful for solving large-scale and online learning problems. We provide a simple parallel framework to obtain solutions of high confidence, where the confidence can be easily controlled by the number of processes, independently of the length of the learning processes. Our framework is implemented as scalable open-source software which can be configured for a single multicore machine or for a cluster of computers, where the training outcomes from independent parallel processes are combined to produce the final output. (An illustrative sketch of this scheme appears after the record below.)
dc.identifier.uri: http://hdl.handle.net/2003/29345
dc.identifier.uri: http://dx.doi.org/10.17877/DE290R-3293
dc.language.iso: en
dc.relation.ispartof: NIPS Workshop on Big Learning -- Algorithms, Systems, and Tools for Learning at Scale
dc.subject.ddc: 004
dc.title: Scalable stochastic gradient descent with improved confidence
dc.type: Text
dc.type.publicationtype: conferenceObject
dcterms.accessRights: open access
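
The abstract describes running several independent SGD processes in parallel and combining their outcomes, with the confidence of the final solution controlled by the number of processes. The following minimal Python sketch illustrates that idea only; the toy least-squares objective, the 1/t step size, and the averaging combination rule are illustrative assumptions, not the paper's actual algorithm.

# Minimal sketch: k independent SGD processes run in parallel and
# their outcomes are combined into one final solution.
# Objective, step size, and combination rule are assumptions.
import numpy as np
from multiprocessing import Pool

def sgd_run(seed, n_steps=10000, dim=5):
    # One independent SGD process on a synthetic least-squares problem.
    rng = np.random.default_rng(seed)
    w_true = np.arange(1.0, dim + 1.0)   # hypothetical ground truth
    w = np.zeros(dim)
    for t in range(1, n_steps + 1):
        x = rng.standard_normal(dim)                   # one random example
        y = x @ w_true + 0.1 * rng.standard_normal()   # noisy label
        grad = (x @ w - y) * x                         # squared-loss gradient
        w -= grad / t                                  # 1/t step size (assumption)
    return w

if __name__ == "__main__":
    k = 8   # number of independent processes; raising k raises confidence
    with Pool(k) as pool:
        solutions = pool.map(sgd_run, range(k))
    # Combine the independent outcomes; a plain average is one simple rule.
    w_final = np.mean(solutions, axis=0)
    print(w_final)

A plain average is only one possible combination rule; schemes such as taking a coordinate-wise median of the independent runs give failure probabilities that shrink with k, which matches the abstract's claim that confidence is controlled by the number of processes.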

Files

Original bundle
Name: lee_bockermann_2011a_2.pdf
Size: 185.65 KB
Format: Adobe Portable Document Format
Description: DNB
License bundle
Name: license.txt
Size: 1.85 KB
Format: Item-specific license agreed upon to submission