Use the Tabs below to find information on community projects. Project pages have links to the 'technology' and 'design flow' stages used and ways you can comment on or join a project.

Projects are key to community hardware design

As a community hardware design activity, we form our collaborations around shared design work, making projects a core part of the SoC Labs community. Projects help us share and reuse hardware and software developments built around core Arm IP in support of our research goals.

A project takes technology and uses a design flow to make a SoC.

A project has a timeframe and draws on the two other significant aspects of a SoC development: the selection of technology, the IP blocks that make up the SoC, and the design flow that is followed from specification through to the final instantiation of a system. Any System on Chip usually incorporates pre-existing IP blocks. A project team can select IP from the technology section of the site during the first stage of a design flow, Architectural Design. Later stages in the design flow support the creation of the novel aspects of the SoC design.

Projects have a type: active, complete (case study), or being formulated (request for collaboration).

We encourage sharing information on projects much earlier than in traditional academic collaboration. Historically, knowledge sharing has come at the end of a research activity, with published papers and results. Alongside write-ups of finished projects ("case studies"), the site lists ongoing projects under development ("projects") and projects that are still being formulated ("requests for collaboration"). We want to encourage people to engage with the project teams, for example by adding a comment to a specific project page or joining a project.

Latest Reference Design Projects

Reference Design
Active Project
PCK600 to SIE300 subsystem
dwn @ soclabs

PCK600 Integration in megaSoC

The PCK600 Arm IP provides components that allow a power control infrastructure to be distributed in a SoC, making a design energy efficient. Arm provides the IP as part of its Power Control System Architecture, which can be used to control the power states of various parts of the system. This control of the power infrastructure is achieved through the Power Policy Unit (PPU). This unit has an APB interface to allow software control, and low-power interfaces that connect to the power-controllable IP within the system.
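To illustrate the software-control path described above, here is a minimal Python sketch of a processor requesting a power mode from a PPU through APB-mapped registers. The base address, register offset, mode encodings, and `ApbBus` class are all illustrative assumptions, not values from the PCK600 documentation.

```python
# Hypothetical sketch of software control of a PPU via its APB-mapped
# registers. Addresses, offsets, and mode encodings are illustrative
# placeholders, not PCK600 values.

PPU_BASE = 0x4002_0000      # hypothetical base address of one PPU instance
PWR_POLICY_OFF = 0x000      # hypothetical power-policy request register offset

MODE_OFF, MODE_RET, MODE_ON = 0x0, 0x2, 0x8   # illustrative mode encodings

class ApbBus:
    """Toy stand-in for an APB register read/write path."""
    def __init__(self):
        self.regs = {}
    def write(self, addr, value):
        self.regs[addr] = value
    def read(self, addr):
        return self.regs.get(addr, 0)

def request_power_mode(bus, mode):
    """Write the requested power mode and read it back for confirmation."""
    bus.write(PPU_BASE + PWR_POLICY_OFF, mode)
    return bus.read(PPU_BASE + PWR_POLICY_OFF)

bus = ApbBus()
assert request_power_mode(bus, MODE_RET) == MODE_RET
```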

Reference Design
Active Project
soclabs nanosoc microcontroller framework - 2024
soclabs

nanosoc - baseline Cortex-M0 microcontroller SoC (2024 update)
A small SoC development framework to support easy integration and evaluation of academically developed research hardware, such as custom accelerators or signal-processing sub-systems.
Reference Design
Active Project
TSRI Arm Cortex-M55 AIoT SoC Design Platform

TSRI Arm Cortex-M55 AIoT SoC Design Platform
 What is TSRI Arm Cortex-M55 AIoT SoC Design Platform?

The Arm Cortex-M55 AIoT SoC design platform is an AIoT subsystem that allows custom SoC designers to integrate their hardware circuits and embedded software for differentiation. The platform is developed by TSRI (Taiwan Semiconductor Research Institute) to support academic research on SoC design. It's built on the Arm Corstone-300 reference package, featuring the Cortex-M55 CPU and Ethos-U55 NPU.

Reference Design
Active Project
Improved power domain structure for nanosoc
dwn @ soclabs

nanoSoC Low Power Implementation

As part of plans for continued development of nanoSoC, one area that requires improvement is the power structure of the system. The first iteration of nanoSoC contained two power domains: the accelerator domain and the remainder of the SoC. Both power domains were connected to external pins to allow connection to separate external voltage regulators and power-measurement ICs, as implemented in the first version of the nanoSoC testboard.

Latest Collaborative Projects

Collaborative
Active Project

Indonesia Collaborative SoC Platform

This program is dedicated to the development of a System on Chip (SoC) platform, specifically designed to support learning and research activities within Indonesian academic institutions. The platform serves as an educational and research tool for students, lecturers, and researchers to gain hands-on experience in digital chip design.

Collaborative
Active Project
AHB QSPI architectural design
dwn @ soclabs

AHB eXecute in Place (XiP) QSPI

The instruction memory in the first tape-out of nanosoc was implemented using SRAM. The benefit was that read bandwidth from this memory was very fast; the downside was that on a power-on reset all the code was lost, as SRAM is volatile memory. Using non-volatile memory instead would benefit applications where deployment of the ASIC does not allow, or time is simply not available for, programming the SRAM after every power-up.

Collaborative
Case Study
A53 simplified testbench
SoClabs

Arm Cortex-A53 processor

There is growing interest within the SoC Labs community for an Arm A-Class SoC that can support a full operating system, undertake more complex compute tasks and enable more complicated software directed research. The Cortex-A53 is Arm's most widely deployed 64-bit Armv8-A processor and can provide these capabilities with power efficiency. 

Collaborative
Request for Collaboration
High Capacity Memory Subsystem Development

This project aims to design and implement a high capacity memory subsystem for Arm A series processor based SoC designs.  The current focus of the project is the design and implementation of a Memory Controller for DDR4 memory. 

Latest Competition Projects

Competition 2025
Competition: Hardware Implementation

ASIC for parallel channel tuning on Reconfigurable Intelligent Surfaces

Reconfigurable Intelligent Surfaces (RIS) are planar structures composed of large arrays of tunable elements that can dynamically redirect, reflect, or shape wireless signals in the environment.

Competition 2025
Competition: Collaboration/Education

RF-Powered Sensor Platform for Intelligent Groceries Transportation Monitoring

This project aims to develop an advanced RF energy harvesting (EH) receiver chip specifically designed to power embedded sensors for monitoring the condition of groceries during transportation. The receiver chip captures wireless energy transmitted from phased array antennas and converts it into electrical power that is used to operate onboard sensors, which continuously monitor critical parameters such as temperature and humidity inside delivery trucks.

Competition 2025
Competition: Hardware Implementation

Neural Activity Processor

Stroke and epilepsy are among the most common debilitating neurological conditions, with a worldwide prevalence of 100 million people (World Stroke Organization, 2022) and 50 million people (World Health Organization, 2024), respectively. Present-day approaches for treating neurological and neurosurgical conditions include physiotherapy, pharmacological treatment, surgical excision, and interventions such as deep brain stimulation.

Competition 2025
Competition: Collaboration/Education
An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks

In the context of Industry 4.0, handwritten digit recognition plays a vital role in numerous applications such as smart banking systems and postal code detection. One of the most effective approaches to tackle this problem is through the use of machine learning and neural network models, which have demonstrated impressive accuracy and adaptability in visual pattern recognition tasks.

Latest Completed Project Milestones

Project Name Target Date Completed Date Description
An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks Milestone #2: Determine the dataset and SNN model

In this milestone, the dataset and initial SNN architecture for handwritten digit recognition are defined. The MNIST dataset is selected due to its wide use as a benchmark for image classification tasks and its suitability for validating lightweight neural architectures on resource-constrained platforms.

The chosen SNN model consists of four layers:

  • Input layer with 784 neurons (corresponding to 28×28 pixel grayscale images).
  • Two hidden layers with 256 neurons each, enabling sufficient representational capacity while keeping hardware costs moderate.
  • Output layer with 10 neurons, each representing one digit class from 0 to 9.

The structure is designed to balance classification performance and hardware efficiency, laying the groundwork for implementing the RST optimization in future milestones.
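The 784-256-256-10 topology above implies the following weight-storage requirements, which drive the hardware cost the milestone mentions. A quick sketch (biases ignored for simplicity):

```python
# Weight counts implied by the 784-256-256-10 fully connected SNN topology
# described above (biases ignored for simplicity).
layers = [784, 256, 256, 10]
weights_per_layer = [a * b for a, b in zip(layers, layers[1:])]
total_weights = sum(weights_per_layer)
print(weights_per_layer)  # [200704, 65536, 2560]
print(total_weights)      # 268800
```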

An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks Milestone #5: MATLAB simulation

In this milestone, we perform a behavioral simulation of the SNN model using MATLAB to evaluate the impact of the Repetitive Spike Train (RST) method on classification accuracy. We applied the RST technique to different hidden layers across a range of Repetitive Time Steps (RTSs) from 1 to 10. The accuracy results are shown in Fig. 1.

Key observations from the MATLAB simulation include:

  • When applying RST to the first hidden layer of the SNN model, accuracy increased slightly from the baseline 97.98% to 98.05% at 3 RTSs. This shows that temporal similarity in early layers can be effectively exploited to reduce computation without degrading accuracy.

  • As the number of RTSs increased to 5 and 8, the accuracy dropped marginally to 97.78% and 97.76%, respectively—still within acceptable levels.

  • Applying RST to deeper layers (e.g., second hidden layer) resulted in greater accuracy loss due to lower spike similarity, reaching 96.03% at 9 RTSs.

  • When RST was applied to both hidden layers, the accuracy varied from 97.00% to 97.55%, depending on RTS configuration.

Fig. 1. Accuracy results (a) RST implementation at the first hidden layer, (b) RST implementation at the second layer, (c) RST implementation at both hidden layers. 

These results validate the core assumption behind RST: temporal redundancy in spike trains, especially in early network layers, can be leveraged to improve energy efficiency without compromising classification performance. The MATLAB simulation serves as a critical foundation for the subsequent hardware modeling phase.
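The reuse principle these results rest on can be sketched in a few lines of Python: when a layer's input spike vector repeats across consecutive time steps, the previously computed weighted sum is reused instead of being recomputed. The function and data here are illustrative assumptions, not the actual MATLAB model.

```python
# Minimal sketch of the Repetitive Spike Train (RST) reuse principle:
# a repeated input spike pattern reuses the previous weighted sum.

def rst_layer(spike_frames, weights):
    """Return (per-step weighted sums, number of sums actually computed)."""
    sums, prev_frame, prev_sum, computed = [], None, 0, 0
    for frame in spike_frames:
        if frame != prev_frame:                      # new spike pattern
            prev_sum = sum(w for s, w in zip(frame, weights) if s)
            prev_frame = frame
            computed += 1
        sums.append(prev_sum)                        # reused when repeated
    return sums, computed

# 5 time steps but only 2 distinct spike patterns -> 2 weighted sums computed
frames = [(1, 0, 1), (1, 0, 1), (0, 1, 1), (0, 1, 1), (0, 1, 1)]
sums, computed = rst_layer(frames, [1, 2, 3])
print(sums, computed)  # [4, 4, 5, 5, 5] 2
```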

An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks Milestone #3: Determine the requirement IPs

In this milestone, the IPs required to build the nanoSoC platform are identified. These IPs provide the computational core, memory hierarchy, communication interfaces, and system peripherals needed to support the integration and operation of the RST SNN IP.

The SoC will include the following IPs:

  • Arm Cortex-M0 + SWD: A lightweight, low-power 32-bit processor core for general-purpose computation and control, with Serial Wire Debug (SWD) support for on-chip debugging.

  • Boot Monitor: Responsible for initial system bring-up, configuration, and test routines during boot-up.

  • Code SRAM Bank: Memory block dedicated to storing program instructions.

  • Data SRAM Bank: Separate memory block used for storing runtime data and intermediate results.

  • DMA Controller (PL230): Provides efficient memory-to-memory or memory-to-peripheral data transfer with minimal CPU involvement.

  • RST SNN IP: Custom hardware accelerator implementing the Repetitive Spike Train technique for energy-efficient spike-based inference.

  • Internal Interface – AHB-Lite: The AMBA AHB-Lite bus is used to interconnect the processor, memory, peripherals, and accelerator IPs for low-latency, high-throughput communication.

System Peripherals:

  • FT1248 Interface: Used for high-speed communication with external systems, e.g., for data input or debugging.

  • UART: Universal Asynchronous Receiver/Transmitter for serial communication.

  • GPIOs: General-purpose input/output pins for controlling external components or signaling.

This set of IPs is selected to ensure compatibility, scalability, and low-power operation while meeting the functional requirements of handwritten digit recognition using SNNs.
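Since all of these blocks sit on a single AHB-Lite interconnect, each needs a region in the system address map. The sketch below shows one hypothetical way such a map and its address decode could be modelled; the base addresses are illustrative placeholders, not the actual nanoSoC address map.

```python
# Hypothetical AHB-Lite memory map for the IP blocks listed above.
# Base addresses are illustrative, not the real nanoSoC address map.
MEMORY_MAP = {
    "code_sram":  0x0000_0000,
    "data_sram":  0x2000_0000,
    "dma_pl230":  0x4000_1000,
    "ft1248":     0x4000_2000,
    "uart":       0x4000_3000,
    "gpio":       0x4000_4000,
    "rst_snn_ip": 0x6000_0000,
}

def decode(addr):
    """Return the name of the region whose base is nearest below `addr`."""
    candidates = [(base, name) for name, base in MEMORY_MAP.items() if base <= addr]
    return max(candidates)[1]

assert decode(0x6000_0040) == "rst_snn_ip"
```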

An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks Milestone #6: RST SNN IP architecture with AHB slave interface

This milestone describes the finalized architecture of the RST SNN IP, a specialized hardware accelerator that executes inference using Spiking Neural Networks and the Repetitive Spike Train (RST) method. The IP is designed for integration into the NanoSoC platform via an AHB-Lite slave interface.

Architecture Components (as shown in the diagram)

  1. AHB-Lite Slave Interface

    • Provides the communication interface between the processor and the IP core.

    • Enables software to write input spike trains, configure control registers, and read final classification outputs.

  2. Spike Trains SRAM

    • Stores the input spike trains for each time step, layer, and neuron.

    • Supports temporal reuse of spikes across time steps under the RST mechanism to reduce memory access and computation.

  3. Synaptic Weights SRAMs

    • Stores pre-trained synaptic weights of the SNN model.

    • Accessed during inference to compute membrane potential updates in the processing core.

    • Weight access is minimized during repeated spike cycles to save energy.

  4. Spiking Neural Processing Core

    • Executes the neuron-level computation based on the Leaky Integrate-and-Fire (LIF) model.

    • Accumulates membrane potentials, generates output spikes, and applies the RST logic to skip redundant operations.

  5. Transform Spike Converter

    • Converts the final output spike pattern into a recognizable class label (e.g., digits 0–9).
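The neuron-level computation the processing core performs (component 4 above) can be sketched as a single LIF time step: leak, integrate the weighted input spikes, and fire when the threshold is crossed. The leak factor, threshold, and input values below are illustrative assumptions, not the IP's actual parameters.

```python
# Sketch of one Leaky Integrate-and-Fire (LIF) time step, as performed by
# the Spiking Neural Processing Core. Parameter values are illustrative.

def lif_step(v, input_current, leak=0.9, v_th=1.0, v_reset=0.0):
    """One LIF time step: leak, integrate, fire if threshold crossed.
    Returns (new membrane potential, spike)."""
    v = leak * v + input_current   # leaky integration of weighted input
    if v >= v_th:
        return v_reset, 1          # fire and reset
    return v, 0

v, spikes = 0.0, []
for i_t in [0.5, 0.5, 0.5, 0.0]:   # illustrative input currents
    v, s = lif_step(v, i_t)
    spikes.append(s)
print(spikes)  # the third step crosses the threshold and fires
```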

An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks Milestone #1: Determine scope and focus

Project management 

This project will follow a standard SoC development workflow, serving as a foundational element for the NanoSoC reference platform. Although it may not proceed to full tape-out or silicon validation, the project will have milestones to ensure steady progress. Each milestone will be tracked with corresponding completion dates and documentation.

To maintain agility and responsiveness, flexible intermediate goals will be established. These will help break down the overall objective—designing an energy-efficient RST-based SNN IP for handwritten digit recognition—into manageable phases, from algorithm modeling to RTL implementation and system integration.

Design methods

The project will adopt a top-down design methodology:

  • Algorithm Modeling: RST (Repetitive Spike Train) algorithm will be first modeled in MATLAB to simulate spiking behavior and validate the effectiveness of temporal reuse.

  • RTL Design: Once validated, the algorithm will be translated into synthesizable Verilog RTL as the RST SNN IP, incorporating AHB slave interface logic.

  • Verification & Integration: The IP will be verified using cocotb testbenches and then integrated into NanoSoC via the AHB expansion interface. 

  • Target Implementation: Although full physical implementation is not required, synthesis using TSMC 65nm libraries will be conducted to evaluate area, power, and timing.

Access to IP 

Access to standard IP blocks (Cortex-M0 core, AHB interfaces, memory, DMA) will be provided through the nanoSoC platform, while the RST SNN IP will be developed in-house.

Git to nanoSoC repository: SoCLabs / NanoSoC Tech · GitLab 

Git to RST SNN IP repository:

An Efficient Hardware-based Spike Train Repetition for Energy-constrained Spiking Neural Networks Milestone #4: Design SoC architecture


This milestone focuses on defining the overall architecture of the NanoSoC system, integrating both general-purpose components and the custom RST SNN IP accelerator. The SoC architecture is centered around a lightweight Arm Cortex-M0 processor, with system-level connectivity managed via the AMBA AHB-Lite bus.

Key components and architectural decisions include:

  • Processor Core: The Arm Cortex-M0 serves as the main controller, responsible for orchestrating data movement, configuration, and managing the inference flow.

  • Memory Subsystem: Two separate SRAM banks are implemented — one for code storage and one for data — enabling parallel access and efficient memory utilization.

  • RST SNN IP: The accelerator is memory-mapped and connected via AHB-Lite, allowing the processor to configure it through registers and trigger inference operations. It performs handwritten digit recognition using spike-based processing with reduced energy consumption.

  • DMA Controller (PL230): Enables efficient data transfers between expansion Data RAM regions and RST SNN IP without burdening the CPU.

  • System Peripherals: Including UART, FT1248, and GPIOs for debugging, communication, and control.

  • Boot Monitor: Manages the initial configuration and system setup upon startup.

  • AHB-Lite Interconnect: Acts as the backbone of the system, allowing seamless communication among processor, memories, DMA, peripherals, and the RST SNN IP.

This modular SoC architecture is designed with flexibility and scalability in mind, enabling easy expansion or substitution of components. It also ensures low-power operation suitable for edge AI applications focused on handwritten digit recognition.
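Because the RST SNN IP is memory-mapped on AHB-Lite, the processor's view of it reduces to a configure/trigger/poll/read sequence. The sketch below models that flow in Python; the register offsets, bit fields, base address, and the `FakeBus` stand-in are all hypothetical, with only the memory-mapped control model taken from the text.

```python
# Illustrative driver flow for the memory-mapped RST SNN IP: trigger an
# inference, poll for completion, read the classified digit. All register
# details are hypothetical.

RST_SNN_BASE = 0x6000_0000   # hypothetical accelerator base address
REG_CTRL, REG_STATUS, REG_RESULT = 0x00, 0x04, 0x08
CTRL_START, STATUS_DONE = 0x1, 0x1

class FakeBus:
    """Stand-in for AHB-Lite reads/writes; models an IP that finishes
    inference immediately and reports class 7."""
    def __init__(self):
        self.regs = {RST_SNN_BASE + REG_STATUS: STATUS_DONE,
                     RST_SNN_BASE + REG_RESULT: 7}
    def write(self, addr, val):
        self.regs[addr] = val
    def read(self, addr):
        return self.regs.get(addr, 0)

def run_inference(bus):
    bus.write(RST_SNN_BASE + REG_CTRL, CTRL_START)    # trigger inference
    while not bus.read(RST_SNN_BASE + REG_STATUS) & STATUS_DONE:
        pass                                          # poll for completion
    return bus.read(RST_SNN_BASE + REG_RESULT)        # classified digit

assert run_inference(FakeBus()) == 7
```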

RF-Powered Sensor Platform for Intelligent Groceries Transportation Monitoring Post Silicon

Designed and fabricated the test PCB used to validate the chip.

PCK600 Integration in megaSoC Getting Started

Decide on the project goal

PCK600 Integration in megaSoC IP Selection

Chose the IP relevant to this design

PCK600 Integration in megaSoC Architectural Design

Latest Project Updates