Scaling Cloud and AI: MSI Highlights ORv3, DC-MHS, and MGX Solutions at 2025 OCP Global Summit

SAN JOSE, Calif., Oct. 15, 2025 /PRNewswire/ — At the 2025 OCP Global Summit (Booth #A55), MSI, a leading global provider of high-performance server solutions, highlights the ORv3 21″ 44OU rack, OCP DC-MHS platforms, and GPU servers built on the NVIDIA MGX modular architecture, accelerated by the latest NVIDIA Hopper GPUs and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. These solutions target hyperscale, colocation, and AI deployments, delivering the scalability and efficiency required for next-generation data centers. On October 16, MSI Product Marketing Manager Chris Andrada will also present an Expo Hall session titled “Pioneering the Modern Datacenter with DC-MHS Architecture.”

MSI Scales up Data Centers for Cloud and AI with ORv3, DC-MHS and MGX Solutions

“Our focus is on helping datacenter operators bridge the gap between rapidly advancing compute technologies and real-world deployment at scale. By integrating rack-level design with open standards and GPU acceleration, we aim to simplify adoption, reduce complexity, and give the industry a stronger foundation to support the next wave of AI and data-driven applications,” said Danny Hsu, General Manager of MSI’s Enterprise Platform Solutions.

ORv3 Rack-Scale Integration

MSI’s ORv3 21″ 44OU Rack comes fully validated with integrated power, thermal, and networking, reducing engineering effort and deployment time for hyperscale environments. With 16 dual-node servers, centralized 48V power shelves, and all front-facing I/O, operators gain more space for CPUs, memory, and storage while keeping airflow clear for efficient cooling.

The CD281-S4051-X2 2OU 2-node DC-MHS server supports one AMD EPYC™ 9005 CPU (up to 500W TDP) per node, with each node providing 12 DDR5 DIMM slots, 12 front E3.S PCIe 5.0 NVMe drive bays, and 2 PCIe 5.0 x16 slots for balanced compute, storage, and expansion. This combination provides dense performance for cloud and analytics workloads in a rack system that can be deployed faster and serviced entirely from the cold aisle.

Standardization with OCP DC-MHS Servers & Motherboards

MSI’s DC-MHS portfolio offers standardized server and HPM designs across Intel® Xeon® 6 and AMD EPYC 9005 processors for CSPs and hyperscale data centers. With standardized DC-SCM modules, these platforms reduce firmware effort and enable cross-vendor interoperability. Available in M-FLW, DNO-2, and DNO-4 form factors, they provide a consistent path to deploy next-gen CPUs without redesigning entire systems.

With support for high-bandwidth DDR5 memory, PCIe 5.0 for accelerators and I/O, and front-service NVMe bays, the DC-MHS lineup includes options such as the CX270-S5062 2U Intel Xeon 6 platform and modular HPMs, letting customers align CPU power, memory density, and drive configurations with workload needs, from cloud clusters to hyperscale data centers. Intel HPMs include the D3071 (DNO-2 single-socket, 12 DIMM slots), the D3061 (DNO-2 single-socket, 16 DIMM slots), and the D3066 (DNO-4 single-socket, 16 DIMM slots). AMD HPMs include the D4051 (DNO-2 single-socket, 12 DIMM slots) and the D4056 (DNO-4 single-socket, 24 DIMM slots for higher capacity).

GPU Density with NVIDIA MGX

Built on the NVIDIA MGX modular architecture, MSI’s GPU servers accelerate AI workloads across training, inference, and simulation with support for the latest NVIDIA Hopper GPUs and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

  • The CG481-S6053 (4U) integrates dual AMD EPYC 9005 CPUs, 8 FHFL PCIe 6.0 GPU slots, 24 DDR5 DIMM slots, and 8×400G Ethernet networking via NVIDIA ConnectX-8 SuperNICs, ideal for large-scale AI training clusters requiring maximum GPU density and bandwidth.
  • The CG290-S3063 (2U) features a single Intel Xeon 6 CPU, 4 FHFL PCIe 5.0 GPU slots, and 16 DDR5 DIMM slots, providing a compact, efficient system optimized for AI inference and fine-tuning in space-sensitive environments.

Supporting Resources: