Abstract
Network-on-Chip (NoC) has been the superior interconnect fabric for multi/many-core on-chip systems because of its scalability and parallelism. On-chip network resources can be dynamically configured to improve the energy efficiency and performance of the NoC. However, the large and complex design space of heterogeneous NoC architectures is difficult to explore within a reasonable time to find optimal energy-performance trade-offs. Furthermore, reactive resource management is not effective at preventing problems, such as thermal hotspots, from occurring in adaptive systems. Therefore, we propose machine learning techniques that provide fast, proactive solutions in NoC-based computing systems. We present a deep reinforcement learning technique that configures the voltage/frequency levels of NoC routers and links for both high performance and energy efficiency while meeting a global energy budget constraint. A distributed reinforcement learning technique is proposed, in which each reinforcement learning agent intelligently configures a NoC router and its associated links based on system utilization and application demands. Additionally, neural networks are used to approximate the actions of the distributed reinforcement learning agents. Simulation results for 256-core and 16-core NoC architectures under real applications and synthetic traffic show that the proposed self-configurable approach improves energy-delay product (EDP) by 30-40% compared to a traditional non-machine-learning-based solution.
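The per-router agents described in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is not the paper's implementation: the state (binned router utilization), the voltage/frequency levels, and the reward shape trading off performance against energy are all illustrative assumptions.

```python
import random

# Hypothetical discrete (voltage, frequency) levels for one NoC router;
# the actual levels in the paper may differ.
VF_LEVELS = [(0.8, 1.0), (0.9, 1.5), (1.0, 2.0)]  # (volts, GHz)

class RouterAgent:
    """Tabular Q-learning agent for a single router (illustrative sketch)."""

    def __init__(self, n_util_bins=4, alpha=0.1, gamma=0.9, eps=0.1):
        # Q-table: rows index binned utilization states, columns index V/F levels.
        self.q = [[0.0] * len(VF_LEVELS) for _ in range(n_util_bins)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy selection of a V/F level for the current utilization bin.
        if random.random() < self.eps:
            return random.randrange(len(VF_LEVELS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, reward, s_next):
        # Standard Q-learning update toward reward plus discounted best next value.
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (reward + self.gamma * best_next - self.q[s][a])

def reward(util, vf_idx):
    # Illustrative reward: reward throughput at high utilization and penalize
    # dynamic energy (~ V^2 * f); the paper's actual reward also enforces a
    # global energy budget, which is omitted here for brevity.
    volt, freq = VF_LEVELS[vf_idx]
    perf = util * freq
    energy = volt ** 2 * freq
    return perf - 0.5 * energy
```

In the full scheme, one such agent would run per router, and a neural network would approximate the Q-table so the approach scales to large (e.g. 256-core) networks.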
| Original language | American English |
|---|---|
| Journal | IEEE Access |
| DOIs | |
| State | Published - Jun 13 2022 |
Keywords
- Network-on-Chip (NoC)
- Multicore Architecture
- Manycore Processor
- Machine Learning (ML)
- Reinforcement Learning (RL)
- Distributed RL
- Deep reinforcement learning (Deep RL)
- Q-learning
- Neural Networks (NNs)
- Self-configurable
- Energy-Efficiency
- High-Performance
Disciplines
- Computer Engineering
- Computer and Systems Architecture
- Computer Sciences