Hardware Acceleration for Data Processing (HADP) - Fall 2019



The seminar covers recent results in the increasingly important field of hardware acceleration for data science, both in dedicated machines and in data centers. It is aimed at students interested in the system aspects of data processing who are willing to bridge the gap across traditional disciplines: machine learning, databases, systems, and computer architecture. The seminar should be of special interest to students considering a master thesis or even a doctoral dissertation on related topics.

The seminar will start on September 17th with an overview of the general topics and the intended format of the seminar. Students are expected to present one paper in a 30-minute talk and to complete a report (max 4 pages, excluding references) on the main idea of the paper and how it relates to the other papers presented at the seminar and to the discussions around them. Presentations will be given during the semester in the allocated time slots. The report is due on the last day of the semester (20.12.2019).

Attendance at the seminar is mandatory to complete the credit requirements. Active participation is also expected, including reading every paper in advance and contributing to the questions and discussions of each paper during the seminar.


1) The first introductory class will take place on 17th September 2019 at 13:15 in ML J 34.1, followed by the opening talk at 14:15.

2) Your selection of papers (3 papers max) and presentation dates (3 slots max) is due by 24th September 2019. Please send your preferences to user_name[at]inf.ethz.ch, where user_name is amit[dot]kulkarni.

3) The deadline for report submission is 20th December 2019. Please send your report to both email addresses: user_name[at]inf.ethz.ch, where user_name is amit[dot]kulkarni and fabio[dot]maschi.

4) We have received the reports from all students, and all reports arrived intact.


Talks

Speaker | Title | Date
Prof. Gustavo Alonso | Introduction to the seminar | 17 Sep 13:15
Cedric Renggli | SparCML: High-Performance Sparse Communication for Machine Learning | 17 Sep 14:15
Dr. Muhsen Owaida | Lowering the Latency of Data Processing Pipelines Through FPGA based Hardware Acceleration | 24 Sep 13:15
David Dao | Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms | 24 Sep 14:15


Schedule

Name | Paper | Date
Pavllo Dario | Crossbow: Scaling Deep Learning with Small Batch Sizes on Multi-GPU Servers | 1 Oct 13:15
Athanasiadis Ioannis | Accelerating Pattern Matching Queries in Hybrid CPU-FPGA Architectures | 8 Oct 13:15
Aeschbacher Tobias | Fine-Grained, Secure and Efficient Data Provenance on Blockchain Systems | 8 Oct 14:15
Pascal Oberholzer | HetExchange: Encapsulating heterogeneous CPU-GPU parallelism in JIT compiled engines | 22 Oct 13:15
Jiang Tianjian | A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services | 22 Oct 14:15
Breitwieser Lukas | KV-Direct: High-Performance In-Memory Key-Value Store with Programmable NIC | 29 Oct 13:15
Bonaert Gregory | A Cloud-Scale Acceleration Architecture | 29 Oct 14:15
Severin Kistler | Azure Accelerated Networking: SmartNICs in the Public Cloud | 12 Nov 13:15
Sikonja Rok | Efficiently Searching In-Memory Sorted Arrays: Revenge of the Interpolation Search? | 12 Nov 14:15
Jaggi Akshay | Analyzing Efficient Stream Processing on Modern Hardware | 19 Nov 13:15
Onus Viviane | FPGA-based High-Performance Parallel Architecture for Homomorphic Computing on Encrypted Data | 19 Nov 14:15
Kolar Luka | Speculative Distributed CSV Data Parsing for Big Data Analytics | 26 Nov 13:15
Martsenko Kristina | In-RDBMS Hardware Acceleration of Advanced Analytics | 26 Nov 14:15
Pasquale Davide Schiavone | Federated Learning: Challenges, Methods, and Future Directions | 3 Dec 13:15
Alessandro Novello | Orthogonal Security With Cipherbase | 3 Dec 14:15

Seminar Hours

Tuesdays, 13:00-15:00 in ML J 34.1



Teaching Assistants: