Module Number

ML-4420
Module Title

Efficient Machine Learning in Hardware
Lecture Type(s)

Lecture
ECTS 6
Workload
Total: 180 h
Contact time: 60 h / 4 SWS
Self study: 120 h
Duration 1 Semester
Frequency In the summer semester
Language of instruction English
Type of Exam

Oral exam

Content

The recent breakthroughs in using deep neural networks for a large variety of
machine learning applications have been strongly driven by the availability
of high-performance computing platforms. In contrast to their biological
counterparts, however, the high performance of artificial neural networks
comes at the cost of much higher energy demands. While the average power
consumption of the entire human brain is comparable to that of a laptop
computer (about 20 W), artificial intelligence often resorts to large HPC
systems whose energy demand is several orders of magnitude higher. This
lecture discusses this problem and shows how to build energy- and
resource-efficient architectures for machine learning in hardware. In this
context, the following topics are addressed:
• Hardware architectures for machine learning: GPUs, FPGAs, SIMD architectures, domain-specific architectures, custom accelerators, in-/near-memory computing, training vs. inference architectures
• Energy-efficient machine learning
• Optimized mapping of deep neural networks to hardware and pipelining
techniques
• Word length optimization (binary, ternary, integer, floating point); a brief sketch follows this list
• Scalable application-specific architectures
• New switching devices for implementing neural networks (memristors, PCM)
• Neuromorphic computing
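
To make the word length optimization topic above concrete, the following is a minimal, illustrative sketch (not part of the official course material): symmetric post-training quantization of a weight matrix to 8-bit integers, under the assumption of a single per-tensor scale. All names in it (quantize_symmetric, w, x) are hypothetical.

```python
# Minimal illustrative sketch: symmetric post-training quantization of a
# weight matrix to int8. All names here are hypothetical examples.
import numpy as np

def quantize_symmetric(w: np.ndarray, num_bits: int = 8):
    """Map float weights to signed integers using a single per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1                       # 127 for int8
    scale = float(np.max(np.abs(w))) / qmax              # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)    # toy weight matrix
x = rng.standard_normal(64).astype(np.float32)          # toy input activations

q, scale = quantize_symmetric(w)
y_fp32 = w @ x                                          # full-precision reference
y_int8 = (q.astype(np.int32) @ x) * scale               # integer weights, rescaled result

print("mean absolute error:", float(np.mean(np.abs(y_fp32 - y_int8))))
```

In hardware, such a reduction replaces 32-bit floating-point multipliers with much smaller integer units; per-channel scales or shorter word lengths (ternary, binary) trade further area and energy savings against accuracy, which is part of the design space covered in the lecture.
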

Objectives

The students gain in-depth knowledge of the challenges associated with
energy-efficient machine learning hardware and of the respective state-of-the-art
solutions. They can compare different hardware architectures with regard to the
trade-off between energy consumption, complexity, computational speed, and the
specificity of their applicability. The students learn which kinds of hardware
architectures are used for machine learning, understand why a particular
architecture is suitable for a particular application, and can efficiently
implement machine learning algorithms in hardware.

Allocation of credits / grading
Type of Class: Lecture | SWS: 4 | Credits: 6 | Type of Exam: Oral exam
Prerequisite for participation There are no specific prerequisites.
Lecturer / Other Bringmann
Literature

Will be announced in the first lecture. Recommended prior knowledge: foundations of machine learning.

Last offered Summer semester 2022
Planned for Summer semester 2024
Assigned Study Areas INFO-INFO, INFO-PRAK, INFO-TECH, MEDI-APPL, MEDI-INFO, MEDI-MEDI, MEDI-MMT, ML-CS, ML-DIV