With the rapid development of artificial intelligence, the power demand of each data center rack is expected to climb from roughly 100 kW today to more than 1 MW. Traditional racks mostly use 48 V/54 V power distribution, whose limitations are becoming increasingly apparent: powering a 1 MW cabinet from a 48 V system would require nearly 200 kilograms of copper. From a physical standpoint, traditional solutions simply cannot scale to meet long-term computing needs.
To help next-generation AI data centers distribute power efficiently, scale to the megawatt (MW) level, and overcome space constraints and busbar scalability limits, the industry is accelerating its exploration of high-voltage direct current (HVDC) architectures. Giants such as Google, Microsoft, Amazon, and Meta have begun drawing on technologies originally developed for electric vehicles (EVs), taking a more cautious path toward ±400 V DC distribution. NVIDIA, together with power supply manufacturers, is developing and promoting another route with higher device stress requirements: an 800 V HVDC architecture, planned for mass production in 2027 alongside the NVIDIA Kyber rack system.
Semiconductor Manufacturers and NVIDIA Jointly Promote Efficient Power Distribution for AI Infrastructure
Recently, Texas Instruments (TI) announced that it has jointly developed power management and sensing technologies with NVIDIA for 800 V high-voltage direct current (HVDC) power distribution systems in data center servers, laying the foundation for the next generation of more scalable, more reliable AI data centers.

Beyond TI, semiconductor manufacturers such as Infineon, MPS, STMicroelectronics, ROHM, and Navitas are also participating in the effort.
NVIDIA says it will lead the transition to 800 V HVDC data center power infrastructure to support IT racks of 1 MW and above. To accelerate adoption, NVIDIA is working with key partners across the data center electrical ecosystem.
On May 20, Infineon announced a collaboration with NVIDIA to develop solutions for next-generation power systems based on a new architecture with centralized 800 V HVDC power distribution.
The new system architecture significantly improves power distribution efficiency across the entire data center and supports power conversion directly at the AI chip (the GPU) on the server motherboard. Infineon has deep expertise in power conversion solutions across all relevant semiconductor materials, including silicon (Si), silicon carbide (SiC), and gallium nitride (GaN), covering the entire chain from grid to chip (Grid-to-Core). Building on this strength, Infineon is accelerating its roadmap toward a full-scale HVDC architecture.
On May 21, Navitas announced that it has joined NVIDIA's 800 V HVDC architecture development program. A leader in GaN- and SiC-based AI data center solutions, Navitas has achieved numerous world firsts in the field and brought innovations to markets such as AI data centers and electric vehicles. With its extensive product portfolio, Navitas can support NVIDIA's 800 V HVDC infrastructure from the power grid to the GPU. Delta and Vertiv, meanwhile, have also announced that they will deploy 800 V HVDC power architectures for next-generation AI data centers.
Future AI Data Center Needs 800V HVDC Power Architecture
In the automotive field, the 800 V architecture has already become mainstream, and there are even plans to push the high-voltage bus toward 1000 V. The main motivations for raising the voltage are shorter charging times and lower system losses: at the same power, an 800 V bus carries half the current of a 400 V bus, and since conduction loss scales with the square of current (I²R), halving the current cuts resistive heating to a quarter. Raising the voltage therefore not only saves energy but also keeps temperature rise under control and improves overall energy efficiency.
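The scaling above can be sketched with a few lines of arithmetic. The busbar resistance below is a hypothetical round number chosen purely for illustration, not a measured figure:

```python
# Compare bus current and resistive (I^2 * R) loss at 400 V vs 800 V
# for the same delivered power. R_BUS is an assumed busbar resistance
# used only to make the scaling visible.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current drawn on the DC bus at a given power and voltage."""
    return power_w / voltage_v

def conduction_loss(current_a: float, resistance_ohm: float) -> float:
    """Heat dissipated in the busbar: I^2 * R."""
    return current_a ** 2 * resistance_ohm

POWER_W = 1_000_000   # a 1 MW rack
R_BUS = 0.0001        # assumed busbar resistance, ohms

for v in (400, 800):
    i = bus_current(POWER_W, v)
    print(f"{v} V bus: {i:.0f} A, loss {conduction_loss(i, R_BUS):.0f} W")
```

Doubling the voltage halves the current, and because loss grows with the square of current, conduction loss falls to a quarter.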
Driven by surging AI computing demand, data center power requirements have jumped to the gigawatt level. Today's AI-factory racks rely on 54 V DC power distribution, with power carried from the rack-mounted power shelf to the compute trays over copper busbars. Once rack power exceeds 200 kW, this traditional architecture runs into multiple physical limitations.
Take the NVIDIA GB200/GB300 NVL72 as an example: its rack needs up to eight power shelves to feed the MGX compute and switch trays. If 54 V DC distribution were retained, the power shelves alone could occupy up to 64U of a Kyber rack, severely squeezing compute space. At GTC 2025, NVIDIA demonstrated an 800 V sidecar solution that can power 576 Rubin Ultra GPUs in a single Kyber rack; the traditional approach would require a separate power rack for each compute rack, wasting even more space.
In addition, a single 1 MW rack on 54 V DC distribution would consume up to 200 kg of copper busbar. For just one 1 GW data center (about 1,000 such 1 MW racks), rack busbar copper alone would reach 200 tons, which is unsustainable. The repeated AC/DC conversions in the traditional architecture not only waste energy but also add failure points, leaving the overall power chain inefficient.
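A back-of-the-envelope estimate shows roughly where the 200 kg figure comes from and how higher voltage changes it. The current-density limit and busbar length below are illustrative assumptions, not NVIDIA or industry specifications:

```python
# Rough copper-mass estimate for a 1 MW rack busbar at 54 V vs 800 V.
# For a fixed allowed current density, the required cross-section (and
# hence copper mass) is proportional to bus current.

COPPER_DENSITY_G_CM3 = 8.96   # density of copper
CURRENT_DENSITY = 2.0         # A/mm^2, assumed design limit
LENGTH_M = 2.4                # assumed total busbar run per rack

def busbar_mass_kg(power_w: float, voltage_v: float) -> float:
    """Copper mass needed to carry power_w at voltage_v within the limit."""
    current_a = power_w / voltage_v
    cross_section_mm2 = current_a / CURRENT_DENSITY
    volume_cm3 = (cross_section_mm2 / 100) * (LENGTH_M * 100)
    return volume_cm3 * COPPER_DENSITY_G_CM3 / 1000

for v in (54, 800):
    print(f"{v} V: {busbar_mass_kg(1_000_000, v):.0f} kg of copper")
```

Under these assumed numbers, the 54 V case lands near the 200 kg cited above, while 800 V needs only a small fraction of that copper.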
The 800 V HVDC architecture breaks through these limitations with a centralized power delivery model. Solid-state transformers (SSTs) and industrial-grade rectifiers at the edge of the data center convert 13.8 kV AC grid power directly to 800 V HVDC, eliminating multiple AC/DC and DC/DC conversion stages and improving end-to-end power efficiency by about 5%. At the same time, the higher voltage lets the same power be delivered at lower current, so the copper cross-sectional area can shrink by up to 45%, fundamentally reducing copper consumption, conduction losses, and heat load.
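The efficiency gain can be pictured as a product of per-stage efficiencies: removing conversion stages multiplies out to a higher end-to-end figure. Every number below is an illustrative assumption, not vendor data, and the stage names are generic:

```python
# Sketch: end-to-end efficiency as the product of conversion stages.
# Stage efficiencies are assumed values chosen for illustration.
from math import prod

traditional = {                      # multi-stage AC path (assumed)
    "UPS (AC/DC/AC)": 0.96,
    "rack PSU (AC/DC)": 0.955,
    "intermediate DC/DC": 0.975,
    "board VRM (DC/DC)": 0.94,
}
hvdc = {                             # centralized 800 V path (assumed)
    "SST (13.8 kV AC -> 800 V DC)": 0.985,
    "rack DC/DC (800 V -> bus)": 0.975,
    "board VRM (DC/DC)": 0.94,
}

for name, chain in (("traditional", traditional), ("800 V HVDC", hvdc)):
    print(f"{name}: {prod(chain.values()):.1%} end-to-end")
```

With these assumed values, the shorter HVDC chain comes out several percentage points ahead, in the same ballpark as the ~5% improvement cited above.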

Traditional data center power architecture
The new architecture delivers 800 V HVDC power directly to the IT rack, where DC/DC converters step it down to the low voltages the GPUs require. This simplifies the power chain and, through centralized design, frees up valuable space in the server rack. It also reduces the number of fan-cooled power supply units (PSUs) in the chain; fewer PSUs and fans mean higher system reliability, lower cooling requirements, and better energy efficiency, making HVDC distribution a compelling solution for modern data centers while cutting the total component count.

NVIDIA 800V HVDC Architecture Minimizes Energy Conversion
Delta also demonstrated related solutions at GTC 2025. Delta pointed out that as processor power consumption keeps rising, racks no longer have room for upgraded traditional power shelves, battery backup units (BBUs), supercapacitors, power conversion systems (PCS), and other components. The next-generation architecture therefore consolidates these power-related components into a centralized power supply unit, effectively addressing the power transmission and distribution challenges of data centers.
Challenges of 800V HVDC Technology
HVDC is not a new technology; the data center industry trialed it as early as the 2000s. Google deployed 380 V HVDC in its Oregon data center, claiming a 15% saving in energy costs, while Alibaba and Baidu tested 240 V to 336 V solutions. Yet the technology never saw wide adoption, chiefly because of high costs, difficult retrofits, and inconsistent standards.
With AI computing power rising rapidly, power density is being pushed to extremes, energy-efficiency needs are urgent, and data center operators face strict environmental requirements. Today, advances in power supply technology and the industrial base established by electric vehicle charging standards are helping HVDC break through its old bottlenecks. NVIDIA and its partners are studying the cost and safety of both traditional-transformer and solid-state-transformer approaches. Even so, deploying 800 V HVDC remains challenging.
At the technical level, reliability requirements for power devices such as IGBTs and SiC/GaN transistors are higher, and power supply design becomes significantly more complex. The industry also lacks unified standards, and 800 V HVDC needs broad ecosystem support. High-voltage systems further raise the bar for overcurrent protection and for the safety training of maintenance personnel. Traditional uninterruptible power supplies (UPS), mature and inexpensive, still dominate the small and mid-sized data center market; HVDC will likely need at least 5 to 10 years to displace UPS meaningfully, and its products will take time to be fully validated and mass-produced.
Meanwhile, the energy problem has grown increasingly acute. Over the past decade, data center rack density climbed steadily from 2~4 kW per rack to 8~12 kW. In just the past two years, AI demand has pushed it above 50 kW per rack, with some installations exceeding 100 kW.
AI workloads rely on high-density GPUs whose thermal design power (TDP) typically reaches 700 W to 1000 W, far above the 300 W to 500 W of CPUs. Gartner forecasts that by 2027, 40% of existing AI data centers will be power-constrained. AI workloads also demand that GPUs be deployed as densely as possible, forming clusters of more than 100,000 GPUs and concentrating up to 30 MW in a small footprint. The result is not just a sharp rise in compute power consumption but an exponential rise in cooling energy, which is why liquid cooling is fast becoming mainstream.
Currently, there are three candidate solutions: the 400 V architecture, the 800 V architecture, and the ±400 V (bipolar) architecture, each with its own advantages and drawbacks. Of the three, 800 V arguably offers the most thorough solution. Despite the many challenges, NVIDIA's partners bring considerable strength: TI's expertise in power conversion and sensing, Infineon's Grid-to-Core portfolio and system knowledge for powering AI, combined with NVIDIA's world-leading position in accelerated computing, may well reshape the future of the AI power industry.