TU Berlin

Speeding Up ANN Training Using Approximate Computation



Artificial Neural Networks (ANNs) are an efficient tool for the classification and prediction of patterns in input data. An ANN can be trained to recognize specific patterns in training data; when a different data set that might contain the same patterns is applied as input, the network can detect those patterns and classify the input accordingly. ANNs have a wide range of applications in data mining, artificial intelligence, and computer vision. Training is a critical part of an ANN's operation. Several algorithms can be used to train an ANN; the most important is the backpropagation algorithm, an iterative procedure that consumes a significant amount of computation power. Training can be performed off-line, meaning the training data is fixed and does not change during the system's lifetime. This approach makes the system unable to adapt to changes and limits its ability to learn new patterns.

On-line training, on the other hand, allows the ANN to learn new patterns during the system's lifetime and thus increases its adaptability. However, due to the computational requirements of the training algorithms, real-time performance for on-line training is difficult to achieve with software-only solutions.

In this thesis project, ANN training is to be accelerated using reconfigurable architectures. To reduce the computational effort the algorithm needs to reach real-time performance, approximate computation should be employed. Approximate computation reduces the accuracy of the result in exchange for faster computation time.

The main tasks required in this project:

  1. Port a software version of the ANN backpropagation algorithm to an embedded processor running on an FPGA (specifically the Zynq SoC).
  2. Convert all computations in the ANN software to fixed-point arithmetic and compare accuracy and performance against the floating-point version.
  3. Identify the critical components of the software that are the main sources of performance degradation.
  4. Off-load the critical software blocks to hardware accelerators running on the FPGA, using fixed-point arithmetic, and compare their performance to the software version.
  5. Employ approximate computation to enhance the performance of the hardware accelerators, and compare all of these architectures across various parameters.

Required Skills

Programming skills in C/C++; the VHDL/Verilog hardware description languages; a basic understanding of computer arithmetic and computer architecture.

Desired Skills

Linux OS programming, driver development for Linux, network programming, computer vision, and image processing.

Contact Persons
