Artificial Intelligence (AI), specifically deep learning, is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, today's deep neural networks are growing rapidly in size and consume large amounts of memory, compute, and energy. Moreover, to make AI truly ubiquitous, it must run on the end device within a tight power and thermal budget. One approach to addressing these issues is Bayesian deep learning.

Attendees will learn:

  • Why AI algorithms and hardware need to be energy efficient
  • How Bayesian deep learning is making neural networks more power efficient through model compression and quantization
  • How we are doing fundamental research on AI algorithms and hardware to maximize power efficiency
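To give a flavor of the quantization topic above, here is a minimal sketch of symmetric 8-bit post-training weight quantization. This is a generic illustration of why quantization saves memory and energy (int8 weights take 4x less storage than float32 and enable cheaper integer arithmetic), not the Bayesian compression method the talk covers; the function names are illustrative only.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: represent each float weight
    # as an int8 value plus one shared float scale factor.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights.
    return q.astype(np.float32) * scale

# Toy weight tensor; real networks quantize millions of such weights.
w = np.array([0.12, -0.5, 0.33, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage is 4x smaller than float32, at the cost of a small
# rounding error bounded by half the scale per weight.
print(q.nbytes, w.nbytes)
print(np.max(np.abs(w - w_hat)))
```

The rounding error per weight is at most half the scale factor, which is why aggressive quantization trades a little accuracy for large savings in memory and power.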

Speaker

Max Welling, VP, Technology, Qualcomm