An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks
Spiking Neural Networks (SNNs) must process a large number of spikes to achieve high classification accuracy. However, this results in frequent memory accesses to fetch synaptic weights, which significantly increases energy dissipation in SNN systems. To address this challenge, we propose a technique called the Repetitive Spike Train (RST) method. By exploiting the temporal similarity of spike trains across consecutive time steps, RST minimizes redundant spike-train updates and reduces memory read/write operations. We plan to implement the proposed method on a custom hardware architecture in TSMC 65 nm technology.
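The core idea can be sketched in software as follows. This is an illustrative model only, not the authors' hardware design: the function name, data layout, and the read-counting convention (one weight fetch avoided per active input spike) are assumptions made for the example. It shows how a time step whose input spike train repeats the previous step's can reuse the cached weighted sum instead of re-fetching synaptic weights.

```python
import numpy as np

def rst_forward(spikes, weights):
    """Sketch of the Repetitive Spike Train (RST) idea: if the input
    spike train at step t equals the one at step t-1, reuse the cached
    weighted sum and skip the synaptic-weight fetches for that step.

    spikes:  (T, N_in) binary array, one spike train per time step
    weights: (N_in, N_out) synaptic weight matrix
    Returns per-step input currents and the count of skipped weight reads.
    """
    T = spikes.shape[0]
    currents = np.zeros((T, weights.shape[1]))
    prev_train = None
    cached_current = None
    skipped_reads = 0

    for t in range(T):
        train = spikes[t]
        if prev_train is not None and np.array_equal(train, prev_train):
            # Repetitive spike train: reuse cached result, skip memory reads
            currents[t] = cached_current
            skipped_reads += int(train.sum())  # one read avoided per spike
        else:
            # Spike train changed: fetch weights for the active inputs
            cached_current = train @ weights
            currents[t] = cached_current
        prev_train = train
    return currents, skipped_reads
```

For example, with `spikes = [[1,0,1],[1,0,1],[0,1,0]]`, step 1 repeats step 0, so its two spike-driven weight fetches are skipped; in hardware, such skipped fetches translate directly into reduced memory-access energy.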