As traditional microprocessors struggle to effectively process information from these demanding workloads, data center GPUs move in to fill the void.

Graphics processing units, which have been around since the '70s, were initially used to offload video and graphics-heavy processing tasks from central processors. These chips have a different foundation from typical CPUs, which were built to maximize throughput on a single-stream, high-speed pipeline. CPUs were also designed to support rapid handoffs and to move information quickly from place to place, such as from main memory to a storage system. GPUs have a different structure: They rely on parallel processing and support multiple high-speed connections. These microprocessors have multiple data paths that process large volumes of data simultaneously, a design that fits well with graphics applications.
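
To see the difference in miniature, consider a job as simple as scaling an array of numbers. The following is a minimal CUDA sketch, not production code, with arbitrary sample values: the CPU version walks the data one element at a time on a single stream, while the GPU version spreads the same work across thousands of threads.

#include <cstdio>
#include <cuda_runtime.h>

// CPU version: one stream of execution walks the array element by element.
void scale_cpu(const float *in, float *out, float factor, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = in[i] * factor;
}

// GPU version: the same work is split across many threads, each handling
// one element -- the data-parallel model described above.
__global__ void scale_gpu(const float *in, float *out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main() {
    const int n = 1 << 20;  // 1M elements, an arbitrary sample size
    size_t bytes = n * sizeof(float);

    float *h_in = (float *)malloc(bytes), *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    scale_gpu<<<blocks, threads>>>(d_in, d_out, 2.0f, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("h_out[123] = %f\n", h_out[123]);  // expect 246.0
    cudaFree(d_in); cudaFree(d_out); free(h_in); free(h_out);
    return 0;
}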

Extending the reach of data center GPU use

GPUs have long excelled at a narrow set of tasks, but their reach has gradually expanded. Nvidia, in particular, turned to data center GPUs to differentiate itself from other semiconductor suppliers and to open new markets for its products.

First, these products wormed their way into the high-performance computing arena. More recently, GPU vendors have designed and built cards specifically for data center servers. The server-optimized GPUs use high-bandwidth memory and are offered either as modules for integration into a dedicated server design or as Peripheral Component Interconnect Express (PCIe) add-in cards. Unlike gaming cards, however, these cards provide no graphics interfaces.

Server vendors couple GPUs with CPUs to play to each processor's strengths: CPU performance improves when the CPU isn't bogged down with data-intensive tasks that the GPU can handle instead.

Big data, machine learning and AI applications have high processing needs and work with massive amounts of information and different data types. These characteristics mesh well with GPU design.

AI and machine learning vendors use GPUs to support the processing of the vast amounts of data necessary to train neural networks. In this market, the availability of PCs with GPUs enables software developers to build their algorithms on desktops before transferring the programs to higher-performance server-based GPUs, according to Alan Priestley, analyst at Gartner.
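
The portability Priestley describes falls out of the programming model: code written against the CUDA runtime API runs unchanged whether it finds a desktop gaming card or a server module. The sketch below simply enumerates whatever GPUs are present; only the reported names and specifications would differ between a developer's desktop and a production server.

#include <cstdio>
#include <cuda_runtime.h>

// List every visible GPU -- a desktop card during development or a
// server-class module in production -- without changing the code.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, %d SMs, %.1f GB memory, compute %d.%d\n",
               d, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / 1073741824.0, prop.major, prop.minor);
    }
    return 0;
}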

GPUs arrive in the data center

Data center GPU use will likely increase in the future. GPUs are becoming important infrastructure components for mission-critical workloads. IT organizations can procure GPUs off the shelf and use standard libraries that they can easily incorporate into applications, Priestley said.
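
To make that concrete, here is a minimal sketch using Nvidia's cuBLAS, one such off-the-shelf library, to run a basic vector operation (y = alpha * x + y) on the GPU; the application never writes a GPU kernel itself, and the sample values are arbitrary.

#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>  // Nvidia's off-the-shelf BLAS library for GPUs

int main() {
    const int n = 4;
    float h_x[n] = {1, 2, 3, 4}, h_y[n] = {10, 20, 30, 40}, alpha = 2.0f;

    // Stage the input vectors in GPU memory.
    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, n * sizeof(float), cudaMemcpyHostToDevice);

    // The library call does the parallel work; no hand-written kernel.
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
    cublasDestroy(handle);

    cudaMemcpy(h_y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("%.0f %.0f %.0f %.0f\n", h_y[0], h_y[1], h_y[2], h_y[3]);  // 12 24 36 48
    cudaFree(d_x); cudaFree(d_y);
    return 0;
}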

As a result, server vendors offer either dedicated servers that integrate GPU modules or products that support GPU add-in cards. Server-optimized GPU cards and modules using the highest-performing processors typically cost between $1,000 and $5,000, according to Gartner.

The established vendors are beginning to incorporate these add-ons in their product lines.

Dell supports the FirePro series of GPUs from Advanced Micro Devices, as well as GPUs from Nvidia, which are designed for virtual desktop infrastructure and compute applications and provide up to 1,792 GPU cores. Hewlett Packard Enterprise's (HPE) ProLiant systems work with Nvidia Tesla, Nvidia GRID and Nvidia Quadro GPUs. The HPE Insight Cluster Management Utility installs and provisions the GPU drivers and monitors GPU health metrics, such as temperature.
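
HPE's utility is proprietary, but the kind of health telemetry it surfaces is available through standard interfaces. As a rough illustration, the sketch below uses Nvidia's NVML library, which ships with the GPU driver, to read each card's core temperature; real management tools poll a much wider set of metrics.

#include <stdio.h>
#include <nvml.h>  // Nvidia Management Library

// Poll each GPU's core temperature -- the same kind of health telemetry
// that cluster management tools report to administrators.
int main(void) {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int temp = 0;
        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        printf("GPU %u (%s): %u C\n", i, name, temp);
    }

    nvmlShutdown();
    return 0;
}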

To prepare for further data center GPU use, administrators need to gain expertise in managing these processors. Finding individuals familiar with the technology isn't easy: GPU architecture differs from traditional microprocessor design, and little formal education is available for it, although Nvidia offers some training materials.


