
Leap Ahead of the Competition with GPU-Accelerated Computing

Get faster time-to-results without the traditional equipment headaches

Get Ready for the Future with More Powerful, More Efficient Computing

Penguin Computing™ delivers targeted, modular, and complementary AI & Analytics architectures for AI/ML and high-performance data analytics pipelines. Our solutions shorten time to insight and discovery by removing the complexities involved in designing, deploying, and supporting customers’ AI & Analytics infrastructure.

Our GPU-accelerated compute delivers best-of-breed solutions that power our Technology Practices, especially HPC as well as AI & Analytics. Our infrastructure offering includes both 19″ EIA and 21″ Tundra (Open Compute) platforms, enabling higher density and alternative non-air cooling for more compute per rack.

The Penguin Computing team (2019 NVIDIA® HPC Preferred OEM Partner of the Year) is experienced in building both CPU- and GPU-based systems, as well as the storage subsystems required for this level of data analytics. The outcome of moving to a GPU-accelerated strategy is superior performance by every measure: faster compute times and reduced hardware requirements.

GPU-Accelerated Servers

19″ EIA Servers

2U
Model | Processor | PCIe Slots | GPU(s) Supported
Altus XE2214GT | AMD EPYC™ 7002/7003 Series | 4x PCIe Gen4 x16 FHFL, 2x PCIe Gen4 x16 HHHL | NVIDIA A100 PCIe, NVIDIA V100/V100S PCIe, NVIDIA T4, NVIDIA RTX
Altus XE2318GT | AMD EPYC™ 9004 Series | 8x PCIe Gen5 x16 FHFL, 2x PCIe Gen5 x8 LP | NVIDIA H100 NVL, L40, L40S
Relion XE2318GT | 4th Gen Intel® Xeon® Scalable Processors | 8x PCIe Gen5 x16 FHFL, 2x PCIe Gen5 x8 LP | NVIDIA H100 NVL, L40, L40S

3U
Model | Processor | PCIe Slots | GPU(s) Supported
Altus XE3314GTS | AMD EPYC™ 9004 Series | 6x PCIe Gen5 x16 LP | NVIDIA HGX H100 80GB HBM3 SXM5 x4; NVIDIA HGX H200 141GB HBM3e SXM5 x4
Relion XE3314GTS | 4th Gen Intel® Xeon® Scalable Processors | 6x PCIe Gen5 x16 LP | NVIDIA HGX H100 80GB HBM3 SXM5 x4; NVIDIA HGX H200 141GB HBM3e SXM5 x4

4U
Model | Processor | PCIe Slots | GPU(s) Supported
Altus XE4318GT | AMD EPYC™ 9004 Series | 8x PCIe Gen5 x16 FHFL, 2x PCIe Gen5 x8 LP | NVIDIA H100 NVL, L40, L40S
Relion XE4318GT | 4th Gen Intel® Xeon® Scalable Processors | 8x PCIe Gen5 x16 FHFL, 10x PCIe Gen5 x16 LP | NVIDIA H100 NVL, L40, L40S

5U
Model | Processor | PCIe Slots | GPU(s) Supported
Altus XE5318GTS | AMD EPYC™ 9004 Series | 12x PCIe Gen5 x16 | NVIDIA HGX H100 80GB HBM3 SXM5 x8; NVIDIA HGX H200 141GB HBM3e SXM5 x8
Relion XE5318GTS | 4th Gen Intel® Xeon® Scalable Processors | 12x PCIe Gen5 x16 LP, 1x PCIe Gen4 x16 LP | NVIDIA HGX H100 80GB HBM3 SXM5 x8; NVIDIA HGX H200 141GB HBM3e SXM5 x8
Altus XE5318GTO | AMD EPYC™ 9004 Series | 12x PCIe Gen5 x16 | AMD Instinct MI300X 192GB HBM3 OAM x8
Relion XE5318GTO | 4th Gen Intel® Xeon® Scalable Processors | 12x PCIe Gen5 x16 | AMD Instinct MI300X 192GB HBM3 OAM x8

21″ OCP Servers

1OU
Model | Processor | PCIe Slots | GPU(s) Supported
Altus XO1314GT | AMD EPYC™ 9004 Series | 4x PCIe Gen4 x16 FHFL (GPU), 2x PCIe Gen4 x16 LP | NVIDIA L40
Relion XO1314GT | 4th Gen Intel® Xeon® Scalable Processors | 4x PCIe Gen4 x16 FHFL (GPU), 2x PCIe Gen4 x16 LP | NVIDIA L40
Altus XO1214GT | AMD EPYC™ 7002/7003 Series | 4x PCIe Gen4 x16 FHFL, 2x PCIe Gen4 x16 LP | NVIDIA A100 PCIe

3OU
Model | Processor | PCIe Slots | GPU(s) Supported
Altus XO3218GTS | AMD EPYC™ 7002/7003 Series | 10x PCIe Gen4 HHHL | NVIDIA HGX A100 SXM4 x8

Selected applications supported by NVIDIA-based Penguin Computing GPU servers:

  • Amber
  • ANSYS Fluent
  • Gaussian
  • Gromacs
  • LS-DYNA
  • NAMD
  • OpenFOAM
  • Simulia Abaqus
  • VASP
  • WRF

Selected deep learning frameworks supported by NVIDIA-based Penguin Computing GPU servers:

  • Caffe2
  • Microsoft Cognitive Toolkit
  • MXNET
  • Pytorch
  • TensorFlow
  • Theano
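
For illustration, here is a minimal sketch of how one of the frameworks listed above (PyTorch, chosen only as an example) places work on an NVIDIA GPU. It assumes a CUDA-enabled PyTorch build on a GPU-equipped server and falls back to the CPU otherwise; the sizes and operations shown are arbitrary.

    import torch

    # Use the GPU if PyTorch was built with CUDA support and a device is present.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A small matrix multiply; the same code runs on the accelerator simply by
    # allocating the tensors on the CUDA device.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b

    print(f"Ran on {device}; result shape: {tuple(c.shape)}")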

Benefits of GPU-Accelerated Computing

  • Computing Power/Speed: A single GPU can offer the performance of hundreds of CPUs for certain workloads. In fact, NVIDIA, a leading GPU developer, predicts that GPUs will help provide a 1000X acceleration in compute performance by 2025.
  • Efficiency/Cost: Adding a single GPU-accelerated server costs much less in upfront capital expense and, because less equipment is required, reduces footprint and operational costs. Using GPU-accelerated libraries also allows organizations to adopt GPU acceleration without in-depth knowledge of GPU programming, reducing the time required to achieve results (see the sketch after this list).
  • Flexibility: The inherently flexible nature of GPU programmability allows new algorithms to be developed and deployed quickly across a variety of industries. According to Intersect360 Research, 70% of the most popular HPC applications, including 10 of the top 10, have built-in support for GPUs.
  • Long-Term Benefits: Adding GPU-accelerated computing now prepares you for the artificial intelligence (AI) revolution, which also relies on GPU-accelerated computing. This inevitable increase in reliance on GPUs means that early adopters will not only enjoy greater computing power over time, but will also maintain a widening advantage over competitors who do not migrate to GPU-accelerated computing.
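
As an illustration of the library-driven approach mentioned under Efficiency/Cost, here is a minimal sketch assuming the CuPy library (a NumPy-compatible GPU array package) is installed on a CUDA-capable server. CuPy is one example of such a library, not a Penguin-specific tool, and the data and operations shown are arbitrary.

    import numpy as np
    import cupy as cp  # NumPy-compatible GPU array library

    # Host-side data, as it might arrive from an existing CPU pipeline.
    x_cpu = np.random.rand(1_000_000).astype(np.float32)

    # Copy to GPU memory; subsequent array operations execute as GPU kernels.
    x_gpu = cp.asarray(x_cpu)
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0  # element-wise math, no hand-written CUDA

    # Bring the result back to host memory for use by CPU-side code.
    y_cpu = cp.asnumpy(y_gpu)
    print(y_cpu[:5])

Drop-in libraries of this kind let existing NumPy-style code run on the GPU without writing CUDA kernels, which is the point of the Efficiency/Cost benefit above.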

Learn More About GPU Accelerators