The workshop series on embedded machine learning (WEML) is jointly organized by Heidelberg University, the University of Duisburg-Essen, Graz University of Technology, and Materials Center Leoben, and reflects our shared interest in bringing complex machine learning models and methods to resource-constrained platforms such as edge, embedded, and IoT devices. The workshop is informal, without proceedings, and is organized around a set of invited talks on topics related to this interest.
By embedded we mean embedded in the real world, i.e., operating under conditions that include:
Limited resources in terms of compute, memory and power consumption
Noisy and incomplete training data, as well as noisy sensor data
Operation as part of a complex system architecture
Topics of interest include, in general:
Compression of neural networks for inference deployment, including methods for quantization, pruning, knowledge distillation, structural efficiency and neural architecture search
Hardware support for emerging neural architectures beyond the state-of-the-art
Tractable models beyond neural networks
Reasoning about uncertainty
Learning on edge devices, including federated and continuous learning
Trading among prediction quality (accuracy), efficiency of representation (model parameters, data types for arithmetic operations and memory footprint in general), and computational efficiency (complexity of computations)
Automatic code generation from high-level descriptions, including linear algebra and stencil codes, targeting existing and future instruction set extensions
New and emerging applications that require ML on resource-constrained hardware
Security/privacy of embedded ML
New benchmarks suited to edge and embedded devices
In this regard, the workshop aims to gather experts from various domains, from both academia and industry, to stimulate discussion of recent advances in this area.
09:15 - 09:30 Workshop opening
Session 1: Resource-Efficient Probabilistic Modeling
10:30 - 11:00 Coffee break
Session 2: Resource Efficiency by Compilation
12:30 - 13:30 Lunch break
Session 3: Resource Efficiency by Neural Architecture
15:00 - ca. 16:00 Workshop closing with coffee
This is an in-person event, so seating is limited. We will do our best to accommodate as many requests as possible. With regard to attending:
If you would like to attend, please fill out the following form: (REGISTRATION CLOSED - BOOKED OUT)
As we may have more requests than seats, please briefly tell us why you would like to attend (field “motivation”).
To help us with planning, we would appreciate it if you filled out the form as early as possible. We will try to confirm attendance on a rolling basis, as quickly as possible, until we run out of space. In this sense there is no hard deadline (but space is limited).
Once we run out of space, we will close the form linked above.
As said previously: there are no registration fees, no remote presenting or attending, and no formal proceedings, but ample space for interaction and discussion.