Projects
| Title | Updated date | Comment count |
|---|---|---|
| FPGA-Powered Acceleration for NLP Tasks | 3 weeks ago | 14 |
Articles
Interests: Design Flow, Technology
Authored Comments
User statistics
My contributions: 6
My comments: 5
Overall contributor: #22
Comments
Memory size
You asked about model size in MB and the ideal size to target versus the trade-off in accuracy.
On-chip SRAM is a limiting factor in SoC design due to the high area cost of SRAM. While the hierarchical memory system for classical compute has been optimised, from off-chip DRAM all the way through the cache levels, it has not been for custom acceleration. One approach we are using to reduce fabrication costs is chiplet-based SRAM die, which can be added to an SoC from a stock of pre-fabricated die, rather than adding to the die cost of a custom accelerator.
Classical compute has caches in the low-MB range.
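As a rough way to reason about the model-size question, a minimal sketch follows that estimates weight storage at different precisions against an on-chip SRAM budget. The parameter counts, bit-widths, and the 4 MB budget are illustrative assumptions, not figures from this project.

```python
# Hypothetical sketch: estimate model weight storage at different
# precisions and compare it against an assumed on-chip SRAM budget
# in the "low MB" range typical of classical caches.

def model_size_mb(num_params: int, bits_per_weight: int) -> float:
    """Return weight storage in MB (1 MB = 2**20 bytes)."""
    return num_params * bits_per_weight / 8 / 2**20

SRAM_BUDGET_MB = 4  # assumed on-chip SRAM budget (illustrative)

for params in (1_000_000, 10_000_000):
    for bits in (32, 8, 4):
        size = model_size_mb(params, bits)
        verdict = "fits" if size <= SRAM_BUDGET_MB else "exceeds"
        print(f"{params:>10,} params @ {bits:>2}-bit: "
              f"{size:6.2f} MB ({verdict} {SRAM_BUDGET_MB} MB SRAM)")
```

The sketch shows why quantisation matters here: a 10M-parameter model at 32-bit weights is far beyond a low-MB SRAM budget, while the same model at 4-bit comes much closer, which is the accuracy-versus-size trade-off the question raises.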
Publishing your updates to the project
Hi,
Once you are happy with the changes you have made to your project, don't forget to change the save state from Drafts to Editorial so it is submitted for publishing on the site. You currently have changes in Draft.
Welcome
Hello,
It would be good to understand what interests you about SoC Labs. We look forward to hearing from you; you can simply reply to this comment to let us know.
John.