Parallel Algorithms for GPU accelerated Probabilistic Inference

dc.contributor.author: Piatkowski, Nico
dc.date.accessioned: 2012-02-21T15:23:41Z
dc.date.available: 2012-02-21T15:23:41Z
dc.date.issued: 2012-02-21
dc.description.abstract: Real-world data is likely to contain an inherent structure. Such structures may be represented with graphs which encode independence assumptions within the data. Performing inference in these models is nearly intractable on mobile devices or casual workstations. This work introduces and compares two approaches for accelerating inference in graphical models by using GPUs as parallel processing units. It is empirically shown that, in order to achieve a scalable parallel algorithm, one has to distribute the workload equally among all processing units of a GPU. We accomplished this by introducing thread-cooperative message computations.
dc.identifier.uri: http://hdl.handle.net/2003/29321
dc.identifier.uri: http://dx.doi.org/10.17877/DE290R-3378
dc.language.iso: en
dc.relation.ispartof: Big Learning Workshop on Algorithms, Systems, and Tools for Learning at Scale
dc.subject.ddc: 004
dc.title: Parallel Algorithms for GPU accelerated Probabilistic Inference
dc.type: Text
dc.type.publicationtype: conferenceObject
dcterms.accessRights: open access
eldorado.dnb.deposit: true
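
The abstract's central claim is that scalable GPU inference requires distributing message computations evenly across all processing units. The paper's actual kernels are not reproduced here; the following is a hypothetical CPU-side sketch in Python of the underlying idea, using a toy chain model with made-up potentials: each (edge, target-state) pair of a sum-product message is treated as one unit of work, and the flattened work list is split into equal-sized chunks, one per simulated thread.

```python
import math

# Hypothetical toy model: a chain of 4 binary variables with uniform
# node potentials and a symmetric attractive pairwise potential. All
# names and numbers are illustrative, not taken from the paper.
K = 2                       # states per variable
edges = [(0, 1), (1, 2), (2, 3)]
node_pot = [[1.0, 1.0] for _ in range(4)]
edge_pot = {e: [[2.0, 1.0], [1.0, 2.0]] for e in edges}

def message_entry(edge, xj):
    """One unit of work: entry x_j of the sum-product message i -> j,
    m_{i->j}(x_j) = sum_{x_i} phi_i(x_i) * psi_ij(x_i, x_j)."""
    i, j = edge
    psi = edge_pot[edge]
    return sum(node_pot[i][xi] * psi[xi][xj] for xi in range(K))

# Flatten all (edge, target-state) pairs into one work list and hand
# each of T simulated "threads" an equal-sized contiguous chunk --
# the even-distribution idea the abstract argues is needed to scale.
work = [(e, xj) for e in edges for xj in range(K)]
T = 3
chunk = math.ceil(len(work) / T)
messages = {}
for t in range(T):                        # each iteration = one thread
    for e, xj in work[t * chunk:(t + 1) * chunk]:
        messages.setdefault(e, [0.0] * K)[xj] = message_entry(e, xj)

print(messages[(0, 1)])   # message from variable 0 to variable 1
```

Because every message entry costs the same here, an equal split of work items balances the load exactly; on a real GPU the same flattening lets cooperating threads within a block share one message computation instead of assigning whole, unevenly sized messages to single threads.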

Files

Original bundle

Name: piatkowski_2011c.pdf
Size: 239.06 KB
Format: Adobe Portable Document Format
Description: DNB

License bundle

Name: license.txt
Size: 1.85 KB
Format: Item-specific license agreed upon to submission