ISLPED will be a virtual event on Zoom
Three Days of Exciting Programs on Low Power Design

Day 1: August 10th

Welcome by General and Program Co-Chairs

Keynote Talk 1: Prof. Bill Dally (Nvidia Corporation, USA)
"Low-Power Processing with Domain-Specific Architecture"

Session 1A: Energy-efficient Machine Learning Systems

Session 1B: From CMOS to Quantum Circuits for Sensing, Computation and Security

Session 1C: Smart Power Management and Computing

Day 2: August 11th

Keynote Talk 2: Prof. Marian Verhelst (KUL, Belgium)
"Enabling deep NN at the extreme edge: Co-optimization across circuits, architectures, and algorithmic scheduling"

Session 2A: Tuning the Design Flow for Low Power: From Synthesis to Pin Assignment

Poster Session

Session 2B: Energy Efficient Neural Network Processors: Compression or Go for Near-sensor Analog

Session 2C: Non-ML Low-power Architecture

Day 3: August 12th

Best Paper Announcement

Design Contest

Session 3A: Memory Technology and In-memory Computing

Session 3B: Low Power System and NVM

Session 3C: ML-based Low-Power Architecture

IEEE/ACM member registration is just $75 this year; click here to register now!



Monday Keynote:


Low-Power Processing with Domain-Specific Architecture
Monday, August 10, 10:10 am – 10:55 am (ET)

Prof. Bill Dally (Nvidia Corporation, USA)

Domain-specific architecture (DSA) is one of the most effective methods of reducing power dissipation in information processing systems. Efficiency in these systems comes from specialized functions, specialized memory systems, and reduced overhead. This talk will explore the efficiency gains possible from DSAs, drawing examples from several accelerators.


Bill is Chief Scientist and Senior Vice President of Research at NVIDIA Corporation and a Professor (Research) and former chair of Computer Science at Stanford University. Bill is currently working on developing hardware and software to accelerate demanding applications including machine learning, bioinformatics, and logical inference. He has a history of designing innovative and efficient experimental computing systems. While at Bell Labs, Bill contributed to the BELLMAC32 microprocessor and designed the MARS hardware accelerator. At Caltech he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered wormhole routing and virtual-channel flow control. At the Massachusetts Institute of Technology his group built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanisms from programming models and demonstrated very low overhead synchronization and communication mechanisms. At Stanford University his group developed the Imagine processor, which introduced the concepts of stream processing and partitioned register organizations; the Merrimac supercomputer, which led to GPU computing; and the ELM low-power processor.

Bill is a Member of the National Academy of Engineering, a Fellow of the IEEE, a Fellow of the ACM, and a Fellow of the American Academy of Arts and Sciences. He has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, the ACM Maurice Wilkes Award, the IEEE-CS Charles Babbage Award, and the IPSJ FUNAI Achievement Award. He currently leads projects on computer architecture, network architecture, circuit design, and programming systems. He has published over 250 papers in these areas, holds over 160 issued patents, and is an author of the textbooks Digital Design: A Systems Approach, Digital Systems Engineering, and Principles and Practices of Interconnection Networks.

Tuesday Keynote:


Enabling deep NN at the extreme edge: Co-optimization across circuits, architectures, and algorithmic scheduling
Tuesday, August 11, 10:00 am – 10:45 am (ET)

Prof. Marian Verhelst (KUL, Belgium)

Deep neural network inference comes with significant computational complexity, which until recently made its execution feasible only on power-hungry server or GPU platforms. The recent trend toward embedded neural network processing on edge and extreme-edge devices requires thorough cross-layer optimization. The keynote will discuss how to exploit and jointly optimize NPU/TPU processor architectures, dataflow schedulers, and quantized neural network models for minimum latency and maximum energy efficiency.


Marian Verhelst is an associate professor at the MICAS laboratories of the EE Department of KU Leuven and scientific director at imec. Her research focuses on embedded machine learning, hardware accelerators, HW-algorithm co-design, and low-power edge processing. She received her PhD from KU Leuven in 2008, was a visiting scholar at the BWRC of UC Berkeley in the summer of 2005, and worked as a research scientist at Intel Labs, Hillsboro, OR, from 2008 until 2011. Marian is a member of the DATE and ISSCC executive committees, is TPC co-chair of AICAS 2020 and tinyML 2020, and is a TPC member of DATE and ESSCIRC. She is an SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, has served as an associate editor for TVLSI, TCAS-II, and JSSC, and is a member of the STEM advisory committee to the Flemish Government. Marian currently holds a prestigious ERC Starting Grant from the European Union and was a laureate of the Royal Academy of Belgium in 2016.