Just a few days ago, Intel quietly announced a major upgrade to its chip line-up: the chip giant will soon introduce a new processor that integrates a CPU and an FPGA on the same package. The new combination will plug into the standard Xeon E5 LGA2011 socket, and the integrated FPGA will let each chip be customized for a specific workload. The move is clearly intended to make the Intel x86 architecture a better computing platform for the specialized needs of enterprise and data-center workloads, while also competing for customers who might otherwise buy GPUs from vendors such as NVIDIA.
The new CPU + FPGA pairing also raises a fresh debate: will Intel consider bringing FPGAs to its consumer-grade Core chips? On the surface it seems unlikely, but it is easy to imagine next-generation games and applications accelerating their core routines on an FPGA for better results. We will return to this at the end of the article.
What exactly is an FPGA?
Let us start with the field-programmable gate array (FPGA). As the name implies, an FPGA is essentially a blank chip that can be reprogrammed many times after fabrication. With a few exceptions, almost every chip in our computing equipment is hard-coded (that is, fixed at manufacture) to perform a single set of functions. Our CPUs can only carry out the processing tasks that Intel or AMD designed them for; you can't turn your CPU into a GPU. An FPGA, by contrast, can be programmed to perform one set of functions (say, graphics processing) and later reprogrammed to handle an entirely different kind of workload (say, database operations).
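That reprogrammable fabric is built from many small lookup tables (LUTs) plus configurable routing. The toy Python class below is purely illustrative (real FPGA configuration involves hardware description languages and vendor toolchains, not Python), but it captures why "reprogramming" is possible: loading a new truth table turns the same hardware into a different logic gate.

```python
# Toy model of a single 2-input FPGA lookup table (LUT).
# A real FPGA contains thousands of wider LUTs plus routing fabric;
# this sketch only demonstrates the reprogramming principle.

class Lut2:
    def __init__(self, truth_table):
        # truth_table[i] is the output for inputs (a, b), where i = a*2 + b
        self.table = list(truth_table)

    def reprogram(self, truth_table):
        # "Flashing" a new configuration changes the gate's function
        # without changing the underlying hardware.
        self.table = list(truth_table)

    def __call__(self, a, b):
        return self.table[a * 2 + b]

lut = Lut2([0, 0, 0, 1])       # configured as an AND gate
print(lut(1, 1))               # -> 1
lut.reprogram([0, 1, 1, 0])    # same "hardware", now an XOR gate
print(lut(1, 1))               # -> 0
```

The same principle, scaled up to millions of gates, is what lets one FPGA serve as a graphics accelerator today and a database engine tomorrow.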
Beyond this customizability, the other main advantage of FPGAs is performance. ASICs remain the fastest and most efficient processing solution for a given workload (which is why they dominate Bitcoin mining), and FPGAs cannot quite match them on either speed or efficiency. But by sacrificing some of that raw performance, you gain the unique ability to reprogram the chip (again, an ASIC's workload is fixed at manufacture).
Why does Intel want to combine Xeon CPUs with FPGAs?
Over the past few years, more and more processing has shifted from local machines to the cloud, and Intel's dominance of the server market has come under pressure from several directions. The Xeon is an excellent general-purpose solution, but some workloads are better served by more specialized chips. Web servers, for instance, don't need an eye-wateringly expensive top-end processor, so cheaper, lower-power chips suffice; meanwhile, GPGPU acceleration delivers massive parallel throughput (and has therefore earned a place in supercomputers). For these specific workloads Intel has fielded the lightweight Atom processor, the 50-core Xeon Phi coprocessor for supercomputers, and now this Xeon + FPGA part (which does not yet appear to have an official name).
Intel also notes that it shipped "15 customized products" in the past year to meet the needs of large customers such as Facebook and eBay (probably standard parts with tweaks to cache size and core count), and that it will deliver more than 30 custom designs this year.
So what is the goal of the newly announced Xeon CPU + FPGA product? According to Intel: "FPGAs can provide our customers with programmable, high-performance acceleration that significantly improves the efficiency of their critical algorithms." Intel expects Xeon + FPGA to deliver up to a 20x performance improvement (that figure applies only to the code that executes on the FPGA rather than on the traditional x86 cores, of course, though removing that bottleneck should still yield a considerable overall speedup). The other big advantage is resilience to changing workloads: if you adjust your key algorithms, or your entire business focus shifts, the FPGA can simply be reprogrammed to keep up, so you don't have to buy a new hardware platform.
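The gap between a 20x kernel speedup and the overall application speedup follows from Amdahl's law: only the offloaded fraction of the runtime gets faster. A quick back-of-the-envelope calculation (the 60% figure below is an assumed example, not anything Intel has stated):

```python
# Amdahl's law: overall speedup when only a fraction f of the runtime
# is accelerated by a factor s.  S = 1 / ((1 - f) + f / s)

def amdahl(f, s):
    """f: fraction of runtime offloaded; s: speedup of that fraction."""
    return 1.0 / ((1.0 - f) + f / s)

# Hypothetical example: 60% of runtime spent in a critical algorithm
# that the FPGA runs 20x faster. The overall gain is well under 20x.
print(round(amdahl(0.6, 20.0), 2))   # -> 2.33
```

In other words, the headline 20x only translates into large end-to-end gains when the accelerated algorithm dominates the application's runtime, which is precisely the kind of workload Intel is targeting.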
As for concrete technical details, we can only make limited educated guesses. Altera is likely responsible for manufacturing the FPGA portion; it has maintained a close working relationship with Intel (it was one of the first companies licensed to use Intel's tightly guarded foundry facilities). FPGAs are generally physically large (the necessary price of programmable gates), and there is obviously not much free space on the LGA2011 Xeon E5 package substrate, so we suspect both parties will try to shrink the FPGA. In addition, if the FPGA can share the Xeon's cache and other low-level resources, that may improve space efficiency to some extent. (Intel mentions a "low-latency coherent interface" between the Xeon and the FPGA but does not elaborate.)
The chip giant gave no specific price or launch date for the product, but it is clearly not going to be cheap (today's top-end Xeon processors already cost thousands of dollars). Intel hopes this Xeon + FPGA solution will convince enterprise customers that the x86 architecture still has a future, keeping them from moving to competitors' architectures (such as NVIDIA's Tesla GPGPUs). After all, rewriting a small amount of critical code to run on an FPGA is far simpler than rewriting an entire application in OpenCL.
If this little Xeon + FPGA experiment succeeds, when might a Core + FPGA chip arrive? Or even a GPU + FPGA product? Imagine offloading the core parts of a game engine, such as the physics simulation or enemy AI, onto an FPGA, or accelerating core parts of a PC operating system on one. It is exciting to think about.