Power consumption limits and the flexibility to support newly developed AI models are two major challenges in implementing AI at endpoints. The large amount of power consumed by AI processing shortens operating time and causes heat problems in the endpoint system. In addition, hardware that lacks the flexibility to support newly developed AI models will soon become obsolete.

These challenges can be solved with Renesas’ original DRP-AI (Dynamically Reconfigurable Processor for AI), an AI accelerator that delivers high-speed AI inference processing while achieving the low power consumption and flexibility needed to easily implement new AI models at endpoints.

In this white paper you will learn:

  • How DRP-AI provides flexibility, high-speed processing, and power efficiency at the same time
  • How to reuse data to reduce the power and communication bandwidth required for data movement
  • How to achieve a lower surface temperature without using a heat sink