CNN2ECST
Andrea Solazzo, Matteo De Silvestri, Irene De Rose
[email protected]@mail.polimi.it
Thursday, March 17, 2016
XOHW16 Meeting
HW acceleration
Convolutional Neural Networks have a data-flow computation pattern that makes them highly suitable for hardware acceleration
Zhang, Chen, et al. "Optimizing fpga-based accelerator design for deep convolutional neural networks." Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. ACM, 2015.
Why FPGA?
✓ Reconfigurability makes it possible to implement different models and select the best one directly in hardware
● CNNs have a huge design space
● Finding the “optimal” model requires some tuning
● Many “degrees of freedom” (#layers, #neurons, …)
A. Dundar, J. Jin, V. Gokhale, B. Krishnamurthy, A. Canziani, B. Martini, and E. Culurciello. Accelerating Deep Neural Networks on MobileProcessor with Embedded Programmable Logic. In Proc. NIPS’13, 2013.
Why FPGA?
The right trade-off between performance and power consumption translates into a high embeddability factor, calculated as performance per Watt (GOP/s / W)
Why FPGA?
CNN2ECST
CNNECST - Convolutional Neural Network (www.facebook.com/cnn2ecst)
@cnn2ecst (www.twitter.com/cnn2ecst)