FPGAs, Deep Learning, Software Defined Networks and the Cloud: A Love Story Part 1

Introduction to FPGAs

CPU, GPU, FPGA, ASIC…What is the Difference?

Figure 1: Flexibility ←→ Efficiency Scale for CPUs, GPUs, FPGAs and ASICs
  1. Relative Bests. The best chip here is relative to the applications or workloads you are running and your technical ability. For example, if you are using the deep learning library TensorFlow to run machine learning workloads on FPGAs, you’ll have the ability to optimize to your heart’s content. If you don’t have in-house knowledge of programming FPGAs, then using an ASIC, like Google Cloud’s Tensor Processing Unit, which was specifically built for machine learning and tailored to TensorFlow, can make more sense.
  2. Overlap. There can also be overlap among CPUs, GPUs, FPGAs and ASICs when it comes to processing. As discussed earlier, ASICs and FPGAs may offer similar processing ability and differ only in one’s ability to reprogram the logic. Another example is CPUs that are capable of handling light graphics processing normally handled by a GPU.
  3. Tag-Team Computing. As each type of chip has its own value proposition, there are also efforts to combine chips for more seamless integration. Intel’s upcoming Programmable Acceleration Cards, which pair an FPGA with a CPU through an UltraPath Interconnect, are a good example. Baidu is also taking this approach with the announcement of its 256-core cloud/AI “XPU,” which combines elements of FPGA, GPU and CPU architectures. Even in Amazon Web Services’ F1 instance type you’ll find an FPGA paired with CPUs, with the bulk of the application running on the CPU and the FPGA used to accelerate specific computations. Surprisingly, even a GPU-CPU combo from competitors AMD and Intel exists, which makes me look a bit more closely at pigs to see if they are growing wings and will start flying soon.

Benefits of FPGAs Finding a New Home in the Cloud

  1. Speed. This comes from a simple truth: purpose-built hardware almost always beats general-purpose hardware. As I’ll discuss later, you will see tremendous gains in speed across machine learning, networking and other areas, allowing for better performance with fewer resources, which helps justify FPGA adoption in the data center.
  2. Efficiency & Scale. Both are equally relevant here, as the efficiency of purpose-built hardware is directly tied to the ability to scale out the cloud. As an analogy, picture a train that seats 500 people and makes a round trip from Los Angeles to San Francisco in 12 hours. That train can only make two round trips a day, servicing 2,000 people (500 going and 500 coming back per round trip, two round trips per 24-hour day). If you could increase the efficiency of the train so that it makes the round trip in 4 hours, it could make six round trips a day, servicing 6,000 people, without having to add train cars or track. With the exact same transit system you can now service 3x as many customers. Cloud service providers use the same efficiency-plus-scale formula to their benefit: if their customers complete the same workloads in less time, that frees up hardware, allowing cloud providers to service more customers’ needs without increasing their data center footprint.
  3. Cost. I know cost is a benefit that mysteriously shows up on almost every list of benefits regardless of the technology, but in the case of FPGAs it is actually true. By improving speed, efficiency and scale, cloud providers decrease their cost per user transaction and can pass those savings on to their customers. As cloud providers continue to slash prices in a race to $0, their profit margins grow increasingly thin and they rely on scale to hit revenue targets. The improved efficiency lets them take on more customers without adding servers, which increases their annual revenue. Cloud providers also save money on energy, as the cost of processing per watt is typically lower on an FPGA than on CPUs or GPUs. A recent demo of image processing on AlexNet showed FPGAs processing 32 images per watt of power versus 14.2 images per watt for GPUs, with CPUs managing even fewer.
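The throughput math behind the train analogy, and the images-per-watt figures above, are easy to sanity-check. A minimal sketch, using only the numbers stated in this section:

```python
# Throughput of the train analogy: fixed fleet, varying round-trip time.
HOURS_PER_DAY = 24
SEATS = 500  # passengers carried in each direction

def passengers_per_day(round_trip_hours):
    """People served per day: each round trip carries SEATS out and SEATS back."""
    round_trips = HOURS_PER_DAY // round_trip_hours
    return round_trips * SEATS * 2

slow = passengers_per_day(12)  # 2 round trips -> 2,000 people
fast = passengers_per_day(4)   # 6 round trips -> 6,000 people
print(slow, fast, fast // slow)  # 3x the customers on the same hardware

# Perf-per-watt figures from the AlexNet demo cited above.
fpga_img_per_watt = 32.0
gpu_img_per_watt = 14.2
print(round(fpga_img_per_watt / gpu_img_per_watt, 2))  # FPGA roughly 2.25x GPU
```

The same arithmetic is what makes the cost argument work: faster completion per fixed unit of hardware (or per watt) translates directly into more customers served per dollar of data center.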

Software Defined Networks in the Cloud with FPGAs

Figure 2: Microsoft’s SmartNIC with FPGA
Figure 3: Typical Virtual Network and Switch (LEFT). Virtual network and Switch with FPGA (RIGHT)

FPGA-based network function virtualization can provide the flexibility of a general-purpose processor, like a CPU, while sustaining the throughput that CPUs cannot.
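Conceptually, the virtual-switch work being offloaded to a SmartNIC FPGA is a per-packet match-action lookup: known flows are handled in hardware, while unmatched packets are punted to the CPU slow path. A toy Python sketch of that idea (flow keys, actions and addresses are all hypothetical, not any vendor’s API):

```python
# Toy match-action flow table, the kind of per-packet work a virtual switch
# does in software and a SmartNIC FPGA can do in hardware.
# All names and addresses here are hypothetical illustrations.
flow_table = {
    # (src_ip, dst_ip, dst_port) -> action
    ("10.0.0.1", "10.0.0.2", 80): "forward:vm2",
    ("10.0.0.1", "10.0.0.3", 443): "forward:vm3",
}

def process_packet(src_ip, dst_ip, dst_port):
    """Look up the flow; unknown flows are punted to the slow path (CPU)."""
    return flow_table.get((src_ip, dst_ip, dst_port), "punt:cpu")

print(process_packet("10.0.0.1", "10.0.0.2", 80))  # matched: forward:vm2
print(process_packet("10.0.0.9", "10.0.0.2", 80))  # unmatched: punt:cpu
```

The design point is the split: the CPU installs flow entries once, and the repetitive per-packet lookups, which a CPU cannot sustain at line rate, run in the FPGA.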

The Wrap Up


