Pedestrian detection is relevant to a wide range of
applications, including automotive safety systems, mobile devices, and
industrial automation, a domain collectively referred to as embedded vision.
Embedded vision is about extracting meaning from video or images, and is one of
the fastest-growing application domains in the semiconductor industry.

The most popular
algorithm for pedestrian detection is the Histogram of Oriented Gradients (HOG).
Generally, the HOG algorithm is implemented on a general-purpose processor
(GPP) or a graphics processor (GPU), which either lacks the required performance
or consumes too much power. A dedicated hardware solution can meet power and
performance goals, but lacks the flexibility of software. In this webinar, we
explain the steps we took to map the HOG algorithm to a multicore design
tailored to the application, achieving full programmability, real-time
performance, and low power consumption. We will cover the overall architectural
decisions, why we chose a heterogeneous multicore architecture, and the
importance of the memory system configuration. We will also discuss how to
parallelize the design process among a team of algorithm, hardware, and software
engineers, how to design the individual cores, the value of a virtual prototype,
and the need for an FPGA-based prototyping system.
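To make the workload concrete, the core of the HOG feature can be sketched in a few lines. The following is a minimal, illustrative NumPy implementation of the gradient-and-histogram stage only; the cell size, bin count, and omission of block normalization and the classifier are simplifications for illustration, not a description of the implementation presented in the webinar:

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Per-cell histograms of oriented gradients (unsigned, 0-180 degrees).

    Sketches the compute-intensive inner stage of HOG: gradient
    computation, orientation binning, and cell-wise accumulation.
    """
    img = image.astype(np.float64)

    # 1-D centered gradients ([-1, 0, 1]); borders are left at zero.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]

    magnitude = np.hypot(gx, gy)
    # Unsigned orientation, folded into [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    n_cells_y = img.shape[0] // cell_size
    n_cells_x = img.shape[1] // cell_size
    hist = np.zeros((n_cells_y, n_cells_x, n_bins))
    bin_width = 180.0 / n_bins

    for cy in range(n_cells_y):
        for cx in range(n_cells_x):
            ys = slice(cy * cell_size, (cy + 1) * cell_size)
            xs = slice(cx * cell_size, (cx + 1) * cell_size)
            bins = (orientation[ys, xs] // bin_width).astype(int) % n_bins
            # Each pixel votes into its orientation bin, weighted by
            # gradient magnitude.
            np.add.at(hist[cy, cx], bins.ravel(),
                      magnitude[ys, xs].ravel())
    return hist
```

The per-pixel gradient and voting work is independent across cells, which is what makes the algorithm a natural fit for the kind of parallel, heterogeneous multicore mapping discussed in the webinar.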

Who should attend

  • Processor designers who want to perform architectural analysis, with
    performance, power and area taken into account
  • Project managers who want to understand the alternatives to standard
    processors, and the design flow implications
  • Engineers looking for a prototyping solution for OpenCV-based vision
    applications
  • Computer architects who are working on complex real-time embedded vision
    systems and looking for ways to offload the computation intensive tasks from the
    main application processor

What attendees will learn

  • Why a heterogeneous multicore design can achieve the balance of
    programmability, performance and power efficiency
  • Why application-specific instruction-set processors (ASIPs) are suited for
    embedded vision
  • The steps it takes to move a multicore processor design from concept to an
    FPGA-based prototyping implementation
  • How the Synopsys Embedded Vision Development System featuring Processor
    Designer and the HAPS FPGA-based prototyping solution can accelerate the
    design and prototyping stages


Markus Willems, Product Marketing Manager, Synopsys

Markus Willems is
currently responsible for Synopsys’ system-level solutions, with a focus on
processor development and signal-processing design. He has been with Synopsys
for 14 years in various system-level and functional verification marketing
roles. He has worked in the electronic design automation and computer
industries for more than 20 years in a variety of senior positions, including
marketing, applications engineering, and research. Prior to Synopsys, Markus was
product marketing manager at dSPACE, Paderborn, Germany. Markus received his
Ph.D. (Dr.-Ing.) and M.Sc. (Dipl.-Ing.) in Electrical Engineering from Aachen
University of Technology in 1998 and 1992, respectively. He also holds an MBA
(Dipl.Wirt-Ing) from Hagen University.