Data Plane Processing with Configurable Architectures
With processing demands in the data plane set to outpace Moore’s Law, there is a very real danger of a widening gap between the requirements of new high-throughput algorithms and the ability of design teams to develop efficient system-on-chip (SoC) solutions.
Flexibility and scalability remain important qualities for these devices—qualities that are becoming harder to deliver as software solutions struggle to meet the performance demands of new, portable multimedia applications. While hardwired solutions deliver performance and low power, the traditional design approach is time-consuming, error-prone, and slow to accommodate specification changes.
ARM’s vision for a new approach to embedded data plane processing is based on configurable IP: architectures that deliver the best of both worlds, combining excellent processing performance with a solution that is flexible, easy to implement, and scalable.
ARM has introduced a flexible, new data engine based on a configurable very long instruction word (VLIW) processor. ARM OptimoDE data engines enable systems designers to configure the data plane architecture to suit the exact needs of the application. Combining an ARM RISC microprocessor core with an OptimoDE data engine will yield a flexible, area-efficient solution for many low-power, high-performance applications running next-generation algorithms.
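To make the control-plane/data-plane split concrete, the sketch below shows the kind of regular, multiply-accumulate-heavy kernel (here a simple FIR filter) that the RISC core would dispatch and that a VLIW data engine is designed to execute efficiently. This is a plain-C reference only; the function name, tap count, and coefficients are illustrative assumptions, not part of the OptimoDE toolchain or API.

```c
#include <stdint.h>

#define TAPS 4  /* illustrative tap count */

/* Hypothetical data-plane kernel: a 4-tap FIR filter.
   In a combined SoC, the ARM core handles control-plane work
   (configuration, buffer management), while a tight MAC loop like
   this is the natural candidate to map onto the data engine, where
   the VLIW issue slots can execute the multiply, accumulate, and
   address updates in parallel. */
void fir_filter(const int16_t *in, int16_t *out, int n,
                const int16_t coeff[TAPS])
{
    for (int i = 0; i < n; i++) {
        int32_t acc = 0;                 /* wide accumulator */
        for (int t = 0; t < TAPS; t++) { /* one MAC per tap */
            int idx = i - t;
            if (idx >= 0)
                acc += (int32_t)in[idx] * coeff[t];
        }
        out[i] = (int16_t)(acc >> 8);    /* rescale to 16 bits */
    }
}
```

Because the inner loop has fixed structure and no data-dependent control flow, it exposes exactly the instruction-level parallelism that a configurable VLIW datapath can be sized to exploit.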