Abstract

Computational processing is a critical element of modern-day life, enabling and enhancing research, industrial, medical, and military efforts. The continued improvements seen in processing speed and power are largely driven by the iterative scaling of transistors and other integrated circuit components. As predicted by Moore's law, this trend has resulted in processing solutions approximately doubling in computational power every two years. This trend cannot continue indefinitely, and it is widely agreed that new designs are rapidly approaching the physical limits of transistor scaling. It is therefore necessary that novel processing technologies are developed to allow the continued advancement of processing potential. One promising research effort is that of neuromorphic computing, a field that takes inspiration for the design of new and efficient processing technologies from biological nervous systems.
Neuromorphic systems excel at cognitive computational tasks that standard processing solutions typically find challenging, and they therefore represent a crucial tool in the future of computational technologies. Despite significant advancements, neuromorphic solutions fall short of the efficiencies seen in nature, and further research into such designs is therefore critical in ensuring their continued success and wider application.
This thesis is concerned with the impact of low-level element design on a neuromorphic system's efficiency and computational power. Significant improvements in both speed and efficiency are demonstrated, achieved by careful redefinition of the fundamental building blocks and structures common to neuromorphic solutions. The proposed systems developed in this work achieve this improved performance without reduction in computational function by redesigning the underpinning operations and implementations from the ground up with engineering constraints and principles in mind. The gains demonstrated with this approach are shown to benefit both biophysically accurate and computationally efficient neural models, offering further acceleration on existing neuromorphic architectures. Alongside the fundamental building blocks, the connection infrastructure common to modern neuromorphic solutions is also considered, showing that there is a considerable difference between hardware implementations and biological systems. This difference in structure and applied connectivity appears largely due to a fundamental difference in the dimensionality available in each case, with hardware systems commonly constrained to a low number of two-dimensional layers, while biological systems form dense three-dimensional structures. The results of this work show the critical role that function and system definition plays in the efficiency and computational power of neuromorphic systems. Through the application of these findings, future neuromorphic systems can achieve greater performance-per-watt with reduced computational delays.
Date of Award: 4 Sep 2019
Supervisors: Benjamin Metcalfe (Supervisor) & Peter Wilson (Supervisor)