Methods of Automating Acceleration Process in FPGAs for Neural Networks Resulting From Deep Machine Learning Environment

Postgraduate Thesis uoadl:2876915

Unit:
Electronic Automation track (E/A, with additional specialization in Informatics and Information Systems)
Library of the School of Science
Deposit date:
2019-06-26
Year:
2019
Author:
Chatzigeorgiou Panagiotis
Supervisors info:
Georgios Lentaris, National Technical University of Athens
Original Title:
Μέθοδοι Αυτοματοποίησης της διαδικασίας επιτάχυνσης σε FPGA για Νευρωνικά Δίκτυα που προκύπτουν από περιβάλλον Βαθιάς Μηχανικής Μάθησης
Languages:
Greek
Translated title:
Methods of Automating Acceleration Process in FPGAs for Neural Networks Resulting From Deep Machine Learning Environment
Summary:
By "Machine Learning" we refer to the scientific field, which deals exclusively with the study of algorithms developed for the pattern recognition and learning theory in artificial intelligence. With the ever-expanding use of the media, Internet of Things (IOT) and Big Data, requirements for data processing speed have increased. At the same time need for maintaining low-cost energy and growth time, led in spreading the use of engineering learning algorithms in many systems.
For all these reasons, Deep Convolutional Neural Networks (DCNNs) are in great demand in the modern era. These systems offer fairly accurate predictions while remaining highly flexible in use. Their algorithms are inspired by the human brain and the way it works. In image pattern recognition, for example, these networks consist of a sequence of successive layers of pattern detectors, followed by classifiers that, through machine learning, accurately distinguish the contents of an image, such as a human being.
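The layered "pattern detector" idea above can be sketched with a minimal toy example: one convolutional layer with a single hand-written edge-detector kernel, a ReLU non-linearity, and a trivial score acting as the classifier. All values and names here are illustrative assumptions, not taken from the thesis or from Caffe.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1) over a grayscale image."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Non-linearity applied after each detector layer."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# Toy 4x4 image with a vertical edge, and a 2x2 vertical-edge kernel
# (hypothetical values, for illustration only).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]

features = relu(conv2d(image, kernel))
# A stand-in "classifier": total activation as an edge-presence score.
score = sum(sum(row) for row in features)
```

A real DCNN such as the ones discussed here stacks many such layers with learned (not hand-written) kernels, but the per-layer arithmetic is exactly this multiply-accumulate pattern, which is what makes FPGAs attractive accelerators for it.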
Accurate computation and thorough analysis of very large amounts of data require large amounts of energy. The need to reduce energy costs and data-processing time has become more imperative than ever. With the development of FPGAs (Field Programmable Gate Arrays), the implementation of systems using hardware accelerators at very low energy cost has made great progress.
This thesis shows how to take a DCNN (Deep Convolutional Neural Network) that runs as platform-level software and transfer it to an FPGA board. The framework used is Caffe, a deep learning platform developed at the University of California, Berkeley (UC Berkeley) and implemented for two different architectures: CPU (Central Processing Unit) and GPU (Graphics Processing Unit).
The thesis also provides basic instructions for porting the Caffe platform to FPGAs, the difficulties encountered, and the interaction of the new Caffe, cross-compiled for an ARM processor, with Vivado HLS 2018.3. Finally, theoretical results from applying various types of neural networks to specific FPGA boards are presented.
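As a rough sketch of what "cross-compiling Caffe for an ARM processor" involves, Caffe's build is configured through its `Makefile.config`; a fragment along the following lines could point the build at an ARM toolchain. The toolchain name and library paths are assumptions for illustration, not the settings actually used in the thesis; only the variable names (`CPU_ONLY`, `CUSTOM_CXX`, `BLAS`, `BLAS_INCLUDE`, `BLAS_LIB`) come from Caffe's standard build configuration.

```make
# Hypothetical Makefile.config fragment for an ARM target.
CPU_ONLY := 1                          # no CUDA GPU on the embedded ARM cores
CUSTOM_CXX := arm-linux-gnueabihf-g++  # assumed cross-compiler name
BLAS := open                           # OpenBLAS built for ARM (assumed)
BLAS_INCLUDE := /opt/arm/openblas/include   # assumed install path
BLAS_LIB := /opt/arm/openblas/lib           # assumed install path
```

All of Caffe's dependencies (Boost, protobuf, OpenBLAS, etc.) must likewise be built for the ARM target before such a configuration can link successfully.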
Main subject category:
Science
Keywords:
FPGA, Zybo, CNNs, DNNs, Cifar10, Caffe, Cross-Compiling, Vivado HLS, Neural Networks, ARM processor.
Index:
No
Number of index pages:
0
Contains images:
Yes
Number of references:
44
Number of pages:
56
File:
File access is restricted only to the intranet of UoA.

Master_Thesis_Final_ Chatzigeorgiou_Panagiotis.pdf
2 MB