Hardware Acceleration for Machine Learning (HAML) - Spring 2020



Overview

The seminar covers recent results in the increasingly important field of hardware acceleration for machine learning, both in dedicated machines and in data centers. It is aimed at students interested in the system aspects of data processing who are willing to bridge the gap across the traditional disciplines of machine learning, databases, systems, and computer architecture. The seminar should be of special interest to students considering a master's thesis or even a doctoral dissertation on related topics.

The seminar will start on 20th February with an overview of the general topics and the intended format of the seminar. Students are expected to present one paper in a 30-minute talk and to complete a report (max. 4 pages, excluding references) covering the main idea of the paper, how it relates to the other papers presented at the seminar, and the discussions around those papers. Presentations will be given during the semester in the allocated time slots. The report is due on 31st May.

Attendance at the seminar is mandatory to complete the credit requirements. Active participation is also expected: students should have read every paper in advance and contribute to the questions and discussion of each paper during the seminar.


News

1) The first introductory class will take place on 20th February 2020 at 15:15 in LEE C 104.

2) Your selection of papers (at most 3) is expected to be ready by 26th February. Please send your preferences to zhe[at]inf.ethz.ch and tpreusser[at]ethz.ch.

3) The deadline for report submission is 31st May. Please send your report to both email addresses: zhe[at]inf.ethz.ch and tpreusser[at]ethz.ch.


Schedule

Speaker | Title | Date
Dr. Thomas Preusser | Neural Network Inference in the Context of Heterogeneous Computing | 20-Feb
Prof. Gustavo Alonso | Accelerating Search in Data Centers and the Cloud | 27-Feb
NO CLASS | | 05-Mar
Federico Pirovano | A Configurable Cloud-Scale DNN Processor for Real-Time AI | 12-Mar
Fanlin Wang | BlueConnect: Decomposing All-Reduce for Deep Learning on Heterogeneous Network Hierarchy | 19-Mar
Maxime Fabre | FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning | 26-Mar
Marco Flowers | Gandiva: Introspective Cluster Scheduling for Deep Learning | 02-Apr
Niels Gleinig | Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters | 02-Apr
Son Do | TensorFlow: A System for Large-Scale Machine Learning | 09-Apr
Marco Zeller | Xilinx Adaptive Compute Acceleration Platform: Versal™ Architecture | 09-Apr
Etienne de Stoutz | FINN: A Framework for Fast, Scalable Binarized Neural Network Inference | 23-Apr
Federico van Swaaij | Towards Federated Learning at Scale: System Design | 23-Apr
Florian Mahlknecht | Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices | 30-Apr



Seminar Hours

Thursday, 15:00-17:00 in LEE C 104


People

Lecturers:

Teaching Assistants: