Designing and Using FPGAs for Double-Precision Floating-Point Math
Floating-point arithmetic is used extensively across many market segments. Applications that require large numbers of calculations are prevalent in fields such as financial analytics, bioinformatics, molecular dynamics, radar, and seismic imaging. Beyond integer and single-precision (32-bit) floating-point math, many of these applications demand higher precision, requiring double-precision (64-bit) operations.
This white paper demonstrates the double-precision floating-point performance of Altera FPGAs using two approaches. First, a theoretical "paper and pencil" calculation establishes peak performance. This type of calculation is useful for raw comparison between devices, but it is somewhat unrealistic: it assumes data is always available to feed the device and ignores memory interfaces and latencies, place-and-route constraints, and other aspects of an actual FPGA design. Second, the paper presents measured results from a double-precision matrix-multiply core that can easily be extended to a full DGEMM benchmark, and discusses the real-world constraints and challenges of achieving those results.