Unlocking efficiency in BNNs: global by local thresholding for analog-based HW accelerators

dc.contributor.author: Yayla, Mikail
dc.contributor.author: Frustaci, Fabio
dc.contributor.author: Spagnolo, Fanny
dc.contributor.author: Chen, Jian-Jia
dc.contributor.author: Amrouch, Hussam
dc.date.accessioned: 2024-06-28T08:28:34Z
dc.date.available: 2024-06-28T08:28:34Z
dc.date.issued: 2023-09-14
dc.description.abstract: For accelerating Binarized Neural Networks (BNNs), analog computing-based crossbar accelerators, utilizing XNOR gates and additional interface circuits, have been proposed. Such accelerators demand a large number of analog-to-digital converters (ADCs) and registers, resulting in expensive designs. To increase the inference efficiency, the state of the art divides the interface circuit into an Analog Path (AP), utilizing (cheap) analog comparators, and a Digital Path (DP), utilizing (expensive) ADCs and registers. During BNN execution, a certain path is selectively triggered. Ideally, since inference via the AP is more efficient, it should be triggered as often as possible. However, we reveal that, unless the number of weights is very small, the AP is rarely triggered. To overcome this, we propose a novel BNN inference scheme, called Local Thresholding Approximation (LTA), which approximates the global thresholdings in BNNs by local thresholdings. This enables the use of the AP through most of the execution, which significantly increases the interface circuit efficiency. In our evaluations with two BNN architectures, using LTA reduces the area by 42x and 54x, the energy by 2.7x and 4.2x, and the latency by 3.8x and 1.15x, compared to state-of-the-art crossbar-based BNN accelerators.
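The local-thresholding idea described in the abstract can be sketched in Python as a purely conceptual illustration. The chunk size, the proportional local threshold, and the majority-vote combination below are assumptions for the sketch, not the paper's exact LTA formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024      # binary weights per neuron (illustrative)
CHUNK = 64    # assumed crossbar-column size (hypothetical)

w = rng.integers(0, 2, N)   # binary weights  {0, 1}
x = rng.integers(0, 2, N)   # binary inputs   {0, 1}
T = N // 2                  # global threshold (illustrative)

# XNOR popcount terms: 1 where weight and input agree.
xnor = (w == x).astype(int)

# Global thresholding: one full popcount compared against T
# (in hardware this needs a wide accumulator and an ADC).
global_out = int(xnor.sum() >= T)

# Local thresholding (conceptual approximation): threshold each
# chunk against a proportionally scaled local threshold, then
# combine the 1-bit local decisions by majority vote. Each chunk
# only needs a cheap analog comparator, not an ADC.
chunks = xnor.reshape(-1, CHUNK)          # (N // CHUNK, CHUNK)
local_T = T * CHUNK // N                  # proportional local threshold
local_votes = (chunks.sum(axis=1) >= local_T).astype(int)
lta_out = int(2 * local_votes.sum() >= len(local_votes))

print("global:", global_out, "local approximation:", lta_out)
```

The point of the sketch is the hardware trade-off: the global path accumulates all N products before one wide comparison, while the local path replaces that with N/CHUNK narrow comparisons whose 1-bit outcomes are cheap to combine.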
dc.identifier.uri: http://hdl.handle.net/2003/42564
dc.identifier.uri: http://dx.doi.org/10.17877/DE290R-24400
dc.language.iso: en
dc.relation.ispartofseries: IEEE journal on emerging and selected topics in circuits and systems; 13(4)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: registers
dc.subject: neurons
dc.subject: logic gates
dc.subject: computer architecture
dc.subject: artificial neural networks
dc.subject: FeFETs
dc.subject: convolution
dc.subject.ddc: 004
dc.subject.rswk: Register &lt;Informatik&gt;
dc.subject.rswk: Neuronales Netz
dc.subject.rswk: Logische Schaltung
dc.subject.rswk: Computerarchitektur
dc.subject.rswk: Ferroelektrischer Transistor
dc.title: Unlocking efficiency in BNNs: global by local thresholding for analog-based HW accelerators
dc.type: Text
dc.type.publicationtype: Article
dcterms.accessRights: open access
eldorado.secondarypublication: true
eldorado.secondarypublication.primarycitation: M. Yayla, F. Frustaci, F. Spagnolo, J.-J. Chen and H. Amrouch, "Unlocking Efficiency in BNNs: Global by Local Thresholding for Analog-Based HW Accelerators," in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 13, no. 4, pp. 940-955, Dec. 2023, doi: 10.1109/JETCAS.2023.3315561
eldorado.secondarypublication.primaryidentifier: https://doi.org/10.1109/jetcas.2023.3315561

Files

Original bundle
Name: Unlocking_Efficiency_in_BNNs_Global_by_Local_Thresholding_for_Analog-Based_HW_Accelerators(1).pdf
Size: 8.76 MB
Format: Adobe Portable Document Format
Description: DNB
License bundle
Name: license.txt
Size: 4.85 KB
Format: Item-specific license agreed upon to submission