
FPGA Basics: What is FPGA, and why do you need it?

Date: Oct 10, 2022


Designers are always looking for ways to build system architectures that provide the best computing solution for their application's needs. In many cases, the optimal solution involves field-programmable gate arrays (FPGAs); unfortunately, many designers are unfamiliar with the capabilities of these devices and how to integrate them.

This article will briefly describe design scenarios that can benefit from using FPGAs. Then, some interesting FPGA solutions and development kits will be introduced after describing the basic working principles.

Why use FPGAs?

Computing applications are diverse, and the best approach to meet application requirements may vary from application to application, including off-the-shelf microprocessors (MPUs) and microcontrollers (MCUs), off-the-shelf graphics processing units (GPUs), FPGAs, and custom system-on-a-chip (SoC) devices. Application requirements and considerations must be carefully reviewed to determine which approach to use.

For example, when working on cutting-edge technologies such as 5G base stations, designers must consider that the underlying standards and protocols are still evolving. This means that designers must respond quickly and effectively to any specification changes that are out of their control.

Likewise, they must respond flexibly to future standards and protocol changes after the system is deployed to the field. In addition, they must be able to respond to unexpected errors in system functionality or vulnerabilities in system security by modifying existing features or adding new ones to extend the system's life.

While an SoC typically provides maximum performance, this approach is expensive and time-consuming. In addition, any algorithm implemented in the chip's fabric is essentially "frozen in silicon." Given the considerations above, this inherent inflexibility becomes a problem, and an alternative route is needed to find the optimal balance of high performance and flexibility. FPGAs often provide this route, whether paired with microprocessors/microcontrollers or incorporating hard processor cores as part of the architecture.

What is an FPGA?

This is a difficult question to answer because FPGAs mean different things to different people. Also, there are many types of FPGAs, each with a different combination of capabilities and features.

The programmable fabric is the core of any FPGA (the defining aspect of "FPGA-dom") and is presented in an array of programmable logic blocks. Each logic block is a collection of multiple components, including look-up tables (LUTs), multiplexers, and registers, all of which can be configured (programmed) to perform operations as needed.

Many FPGAs use 4-input LUTs, which can be configured to implement any 4-input logic function. To better support the wide data paths used by some applications, some FPGAs offer 6-, 7-, or even 8-input LUTs. The LUT's output is connected to one of the logic block's outputs and to one input of a multiplexer; the multiplexer's other input is connected directly to a logic block input, and the multiplexer can be configured to select either source.
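The idea that a k-input LUT can implement any k-input function follows from the fact that the LUT is just a 2^k-entry table of configuration bits, indexed by the inputs. The following Python sketch (purely conceptual, not HDL; the name `make_lut` is illustrative) shows how such a table realizes an arbitrary logic function:

```python
def make_lut(truth_table_bits):
    """Build a k-input LUT from its 2**k configuration bits.

    truth_table_bits[i] holds the output for the input combination whose
    binary encoding is i -- which is exactly how an FPGA LUT stores a
    logic function: as a small memory addressed by the inputs.
    """
    bits = list(truth_table_bits)
    k = len(bits).bit_length() - 1
    assert len(bits) == 2 ** k, "table length must be a power of two"

    def lut(*inputs):
        # Pack the input bits into a table index (inputs[0] is the MSB).
        index = 0
        for bit in inputs:
            index = (index << 1) | bit
        return bits[index]

    return lut

# A 4-input AND: output is 1 only for input combination 1111 (index 15).
and4 = make_lut([0] * 15 + [1])
```

Reprogramming the FPGA amounts to rewriting those 16 configuration bits, which turns the same physical LUT into XOR, NAND, a majority vote, or any other 4-input function.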

The output of the multiplexer feeds the register's input. Each register can be configured as an edge-triggered flip-flop or a level-sensitive latch (although using asynchronous logic in the form of latches inside an FPGA is not recommended). Each register's clock (or enable) signal can be configured to be active-high or active-low, as can the active level of the set/reset input.
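The behavioral difference between the two register configurations is worth making concrete. In this minimal Python sketch (conceptual only; the class names are illustrative), the flip-flop samples its input only on a rising clock edge, while the latch is transparent whenever its enable is high:

```python
class DFlipFlop:
    """Edge-triggered register: output updates only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, clk, d):
        if clk == 1 and self._prev_clk == 0:  # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q


class DLatch:
    """Level-sensitive latch: output follows the input while enable is high."""
    def __init__(self):
        self.q = 0

    def tick(self, en, d):
        if en == 1:  # transparent while enabled
            self.q = d
        return self.q
```

The latch's transparency is precisely why it is discouraged inside FPGAs: any glitch on `d` while `en` is high propagates straight to `q`, which complicates timing analysis.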

These logic blocks can be considered "islands of programmable logic" floating in a "sea of programmable interconnects." This interconnection can be configured to connect any output of any logic block to any input of any other logic block. Similarly, the primary input of the FPGA can be connected to any logic block's input, and any logic block's output can be used to drive the primary output of the device.

The primary general-purpose inputs/outputs (GPIOs) are presented in groups, each of which can be configured to support a different interface standard such as LVCMOS, LVDS, LVTTL, HSTL, or SSTL. In addition, the input termination impedance is configurable, as is the output slew rate.

Further extensions to the FPGA architecture can include things like SRAM blocks (called block RAM (BRAM)), phase-locked loops (PLLs), and clock managers. In addition, digital signal processing (DSP) blocks (DSP slices) can be added. They contain configurable multipliers and adders that can perform multiply-accumulate (MAC) operations.
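The multiply-accumulate (MAC) operation mentioned above is the workhorse of DSP slices: each slice multiplies a pair of values and adds the product to a running total. A minimal Python sketch of the arithmetic (the function name `mac` is illustrative; a real DSP slice does this in fixed-width hardware, one pair per clock):

```python
def mac(coeffs, samples, acc=0):
    """Multiply-accumulate: acc += coeff * sample for each pair.

    This is the core operation of an FPGA DSP slice, and the inner loop
    of FIR filters, dot products, and matrix multiplication.
    """
    for c, x in zip(coeffs, samples):
        acc += c * x
    return acc
```

Chaining many DSP slices lets an FPGA evaluate an entire filter tap-set in parallel rather than iterating as this loop does.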

High-speed SERDES blocks are another common feature of FPGAs and can support gigabit serial interfaces. It is important to note that not all FPGAs support all of these features. Different FPGAs offer different sets of features for different markets and applications.

The programmable fabric in an FPGA can be used to implement any required logic function or set of functions, up to and including one or more processor cores. Cores implemented in the programmable fabric are referred to as "soft cores". By contrast, some FPGAs (often called SoC FPGAs) contain one or more "hard-core" processors implemented directly in the silicon. These hard processor cores may include floating-point units (FPUs) and L1/L2 caches.

Similarly, peripheral interface functions such as CAN, I2C, SPI, UART, and USB can be implemented as soft cores in a programmable fabric. Still, many FPGAs implement them in silicon as hard cores. Communication between the processor core, interface functions, and the programmable fabric is typically implemented using high-speed buses such as AMBA and AXI.

The first FPGA, introduced to the market by Xilinx in 1985, contained only an 8 x 8 array of programmable logic blocks (no RAM blocks, DSP blocks, etc.). In comparison, today's high-end FPGAs can contain thousands of logic blocks and DSP blocks, plus megabits (Mb) of RAM. They can contain billions of transistors, equivalent to tens of millions of logic gates.

Alternative Configuration Techniques

Configuration cells determine the functionality of the logic blocks and the routing of the interconnect; conceptually, each cell is a 0/1 (off/on) switch. These cells also configure the GPIO interface standards, input impedance, output slew rate, and so on. Depending on the specific FPGA, the configuration cells are implemented using one of the following three technologies.
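The "0/1 switch" view can be made concrete: a routing connection exists exactly when its configuration cell holds a 1. A tiny Python sketch (the signal names and the `connected` helper are purely illustrative):

```python
# Each configuration cell is a single 0/1 value; together they determine
# which logic-block output drives which logic-block input via the
# programmable interconnect.
config_cells = {
    ("blockA.out", "blockB.in0"): 1,  # switch closed: connection made
    ("blockA.out", "blockC.in1"): 0,  # switch open: no connection
}

def connected(src, dst, cells):
    """A route exists iff the corresponding configuration cell is 1."""
    return cells.get((src, dst), 0) == 1
```

The full set of such cells, serialized, is essentially what the configuration file ("bitstream") loaded into the device contains.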

Antifuse: These configuration cells are one-time programmable (OTP), meaning that once the device has been programmed, it cannot be changed. Such devices tend to be limited to space and high-security applications; they are sold in small quantities and, because of their high price, can be an expensive design choice.

Flash memory: Like antifuse-based configuration cells, flash-based cells are non-volatile. Unlike antifuse cells, however, flash cells can be reprogrammed as needed. Flash configuration cells are also radiation tolerant, making these devices suitable for space applications (with modifications to the upper metallization layers and packaging).

SRAM: These configuration cells are volatile, which means the configuration data must be stored in external memory and loaded into the FPGA each time it is powered up (or, in the case of dynamic configuration, on demand).

The advantage of FPGAs with antifuse- or flash-based configuration cells is that they are "instant-on" and consume very little power. The disadvantage is that these technologies require additional processing steps beyond the underlying CMOS process used to create the rest of the chip.

For FPGAs whose configuration cells are based on SRAM technology, the advantage is that they are manufactured using the same CMOS process as the rest of the chip, and they offer higher performance because they are typically one or two process generations ahead of antifuse and flash technologies. The main drawbacks are that SRAM configuration cells consume more power than antifuse or flash cells (at the same technology node) and are prone to radiation-induced single-event upsets (SEUs).

The latter drawback has long led to SRAM-based FPGAs being considered unsuitable for aerospace applications. Recently, the industry has adopted special mitigation strategies that have resulted in SRAM-based FPGAs appearing alongside flash-based FPGAs in systems such as the Curiosity Mars Rover.

Providing Flexibility with FPGAs

FPGAs are suitable for a wide variety of applications, particularly for implementing intelligent interface functions, motor control, algorithm acceleration and high-performance computing (HPC), image and video processing, machine vision, artificial intelligence (AI), machine learning (ML), deep learning (DL), radar, beamforming, base stations, and communications.

A simple example is providing an intelligent interface between devices that use different interface standards or communication protocols. Consider an existing system in which an application processor uses legacy interfaces to connect to a camera sensor and a display device.

Suppose the system's creator wants to upgrade the camera sensor and display to modern products that are lighter, cheaper, and consume less power. The only problem is that one or both of the new peripherals may use a modern interface standard that the original application processor (AP) cannot support, or a completely different communication protocol, such as the Mobile Industry Processor Interface (MIPI). In this case, an FPGA that supports multiple I/O standards, together with soft MIPI IP cores, provides a fast, low-cost, low-risk upgrade path.

As another example, consider computationally intensive tasks such as the signal processing required by radar systems or by beamforming in a communications base station. Conventional processors with von Neumann or Harvard architectures are well suited to some tasks, but not to tasks that require the same sequence of operations to be performed over and over again, because a single processor core running a single thread can only execute one instruction at a time.

In contrast, an FPGA can execute multiple functions simultaneously and can support a series of operations in a pipelined fashion, enabling much greater throughput. Similarly, instead of sequentially performing, say, 1,000 additions on 1,000 pairs of data values as a processor would, an FPGA can instantiate 1,000 adders in the programmable fabric and perform the same computation in a massively parallel manner in a single clock cycle.
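The throughput argument above can be sketched with a simple cycle-count model (conceptual Python, not a hardware simulation; the function names are illustrative). Both approaches compute identical results, but the modeled cycle counts differ by a factor of 1,000:

```python
def processor_adds(a, b):
    """Single-core model: one adder, one addition per clock cycle."""
    results, cycles = [], 0
    for x, y in zip(a, b):
        results.append(x + y)
        cycles += 1  # each addition consumes a cycle
    return results, cycles

def fpga_adds(a, b):
    """FPGA model: one adder instantiated per data pair, so all
    additions complete together in a single clock cycle."""
    results = [x + y for x, y in zip(a, b)]  # conceptually simultaneous
    return results, 1
```

The trade, of course, is area: the FPGA spends 1,000 adders' worth of fabric to buy that single-cycle latency, which is exactly the kind of area-for-time exchange the programmable fabric makes possible.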

Which vendors make FPGAs?

It's an evolving picture. Two major manufacturers of high-end devices with the highest capacity and performance are Intel (which acquired Altera) and Xilinx.

Intel and Xilinx offer products ranging from low-end FPGAs to high-end SoC FPGAs. Another supplier that focuses almost entirely on FPGAs is Lattice Semiconductor, which targets low- and mid-range applications. Last but not least is Microchip Technology (through its acquisitions of Actel, Atmel, and Microsemi), which now offers several families of small- to medium-sized FPGAs and low-end SoC FPGA-like products.

With many product families offering different resources, performance levels, capacities, and package styles, choosing the best device for the task can be tricky.

How do I design with FPGAs?

The traditional approach to FPGA design is for the engineer to capture the design intent in a hardware description language such as Verilog or VHDL. This description can first be simulated to verify that it behaves as intended, and then passed to a synthesis tool, which generates the configuration file used to configure (program) the FPGA.

Each FPGA vendor either has its own internally developed toolchain or offers a customized tool version from a specialized vendor. Either way, the tools are available from the FPGA vendor's website. Alternatively, mature tool suites may be available in free or low-cost versions.

To make FPGAs more accessible to software developers, some FPGA vendors now offer high-level synthesis (HLS) tools. These tools take algorithmic descriptions of the desired behavior, captured at a high level of abstraction in C, C++, or OpenCL, and generate the input for the lower-level synthesis engines.

Several development and evaluation boards are available for designers wishing to get started, each offering different functionality and features. Three examples: the DFR0600 development kit from DFRobot, featuring Xilinx's Zynq-7000 SoC FPGA; Terasic's DE10-Nano, featuring Intel's Cyclone V SoC FPGA; and the ICE40HX1K-STICK-EVN evaluation board, featuring Lattice Semiconductor's low-power iCE40 FPGA.

Designers planning to use FPGA-based PCIe daughtercards to accelerate applications running on x86 motherboards can look at products such as Xilinx's Alveo PCIe daughtercards.

Final words

FPGAs often provide the optimal design solution, whether on their own, in combination with processors, or as devices with hard processor cores as part of the architecture.

FPGAs have evolved rapidly over the years to meet a wide range of design needs in terms of flexibility, processing speed, and power consumption and are suitable for a wide range of applications.
