Abstract
The limitations of traditional computers and the advent of real-time applications dictate the need for dedicated hardware to realize the complex neural network models widely used in pattern recognition systems. It has been shown that the computational bottleneck of the neural network architecture lies in the activation function. In this paper, we present an area-time efficient neural network engine that employs linear approximation to reduce the computational complexity of the activation function. It is demonstrated that the approximation method can achieve a high degree of recognition accuracy by trading off the number of recognizable patterns. Hardware implementation results show that the proposed method achieves approximately a 42% performance gain over the previous implementation while reducing hardware by 123K NAND gates. The approach lends itself well to low-cost, high-speed portable applications such as intelligent toys.
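The abstract does not spell out the approximation scheme itself; purely as an illustrative sketch, the snippet below shows one common way an activation function such as the logistic sigmoid can be replaced by a few linear segments, which is the general technique the abstract alludes to. The breakpoints and slope used here are assumptions for illustration only, not the coefficients or the fixed-point format used in the paper.

```c
/*
 * Illustrative sketch: piecewise-linear approximation of the logistic
 * sigmoid. Segment boundaries (+/-4) and the slope (0.125) are chosen
 * only so the approximation is continuous; they are not taken from the
 * paper's hardware design.
 */
#include <math.h>
#include <stdio.h>

/* Exact sigmoid, used here only as a software reference. */
static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Three-segment linear approximation: saturate outside |x| >= 4,
 * single straight line through the middle region otherwise. */
static double sigmoid_pwl(double x)
{
    if (x <= -4.0) return 0.0;
    if (x >=  4.0) return 1.0;
    return 0.5 + 0.125 * x;   /* matches the saturated values at +/-4 */
}

int main(void)
{
    /* Compare the approximation against the exact function. */
    for (double x = -6.0; x <= 6.0; x += 1.5)
        printf("x=%5.2f  exact=%.4f  pwl=%.4f\n",
               x, sigmoid(x), sigmoid_pwl(x));
    return 0;
}
```

A hardware realization of this idea typically stores only the segment slopes and intercepts, so the exponential and division of the exact sigmoid are replaced by a comparison, a multiply, and an add.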
Original language | English |
---|---|
Publication status | Published - Nov 2004 |
Event | 38th Asilomar Conference on Signals, Systems and Computers - California, United States. Duration: 7 Nov 2004 → 10 Nov 2004 |
Conference
Conference | 38th Asilomar Conference on Signals, Systems and Computers |
---|---|
Country/Territory | United States |
City | California |
Period | 7/11/04 → 10/11/04 |