Tech Giants Unite to Develop Next-Gen AI Chip Components

Intel, Google, Microsoft, Meta, and other leading tech companies are forming a new industry group called the Ultra Accelerator Link (UALink) Promoter Group. This initiative aims to guide the development of components that connect AI accelerator chips in data centers.
Announced on Thursday, the UALink Promoter Group includes prominent members such as AMD, Hewlett Packard Enterprise, Broadcom, and Cisco, although Arm is not yet part of the group. The goal is to propose a new industry standard for connecting the AI accelerator chips found in a growing number of servers. These accelerators range from GPUs to custom-designed solutions, all intended to speed up the training, fine-tuning, and deployment of AI models.

Forrest Norrod, AMD’s General Manager of Data Center Solutions, emphasized the need for an open standard that can advance quickly and allow multiple companies to contribute to the ecosystem. “The industry needs a standard that allows innovation to proceed at a rapid clip unfettered by any single company,” Norrod stated during a briefing.

The initial version of the proposed standard, UALink 1.0, aims to connect up to 1,024 AI accelerators, specifically GPUs, within a single computing "pod," defined as one or several server racks. UALink 1.0 is based on open standards, including AMD's Infinity Fabric, and will enable direct memory loads and stores between AI accelerators. This design is expected to boost speed and reduce data-transfer latency significantly compared with existing interconnect specifications. The creation of UALink 1.0 represents a collaborative effort to establish a robust, open standard for linking AI accelerator chips, facilitating rapid innovation and improving the performance of AI models across data centers.

Understanding the New Concept of UALink

What is UALink?

UALink is an open industry standard interconnect designed specifically for GPU-to-GPU communication.

What are the specifications of UALink?

This year, members will have access to UALink’s 1.0 specification. Within an AI pod, the standard will allow up to 1,024 accelerators to connect at up to 200Gbps per lane. It is expected that UALink could connect up to 128 Nvidia HGX-style servers in a pod, each of which would include eight AI accelerators.
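The figures above fit together arithmetically: 128 eight-accelerator servers give exactly the 1,024-accelerator pod ceiling. A short sketch checks the numbers; the lane count used for the bandwidth line is a hypothetical illustration, not part of the published specification:

```python
# Sanity-check the UALink 1.0 pod topology described in the article.
MAX_ACCELERATORS_PER_POD = 1024   # UALink 1.0 ceiling per pod
ACCELERATORS_PER_SERVER = 8       # one HGX-style server
PER_LANE_GBPS = 200               # per-lane signaling rate

servers_per_pod = MAX_ACCELERATORS_PER_POD // ACCELERATORS_PER_SERVER
print(servers_per_pod)  # 128, matching the article's estimate

# Per-accelerator bandwidth scales with lane count, which the article
# does not specify; a hypothetical 4-lane link would carry:
lanes = 4
print(lanes * PER_LANE_GBPS)  # 800 Gbps under this assumption
```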

The UALink Promoter Group plans to establish the UALink Consortium in Q3 2024 to oversee the ongoing development of the UALink specification. Around this time, UALink 1.0 will be made available to companies that join the consortium. A higher-bandwidth update, UALink 1.1, is expected to be released in Q4 2024.

Norrod said the first UALink products are anticipated to launch "in the next couple of years." This timeline indicates a strategic push to standardize AI accelerator interconnects and accelerate their deployment in data centers.
Notably absent from the consortium's membership is Nvidia, the dominant producer of AI accelerators, commanding an estimated 80% to 95% of the market. Nvidia declined to comment on its absence from the group, which suggests strategic disagreements or competitive interests that may not align with the consortium's objectives.

According to a recent Gartner report, the value of AI accelerators used in servers is expected to reach $21 billion this year, growing to $33 billion by 2028. Revenue from AI chips is projected to hit $33.4 billion by 2025.
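The Gartner figures above imply a steady growth rate. Treating "this year" as 2024 and "by 2028" as a four-year horizon (an assumption for illustration), the implied compound annual growth rate works out to roughly 12%:

```python
# Implied CAGR for the Gartner server-accelerator figures cited above:
# $21B in the base year growing to $33B four years later (assumed span).
start_billions, end_billions, years = 21.0, 33.0, 4
cagr = (end_billions / start_billions) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~12% per year
```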

Google, for instance, has developed custom chips such as TPUs and Axion for training and running AI models. Amazon boasts several AI chip families, while Microsoft has introduced Maia and Cobalt. Meta is also refining its own line of accelerators.


Furthermore, Microsoft and its partner OpenAI are reportedly planning to invest at least $100 billion in a supercomputer for training AI models, equipped with future versions of Cobalt and Maia chips.


Nvidia’s absence is significant given its leading role in the AI accelerator industry. The formation of UALink, without Nvidia’s involvement, highlights a potential shift in the competitive landscape. Other major tech companies are evidently seeking to establish an open standard that can drive innovation and reduce dependency on a single vendor’s proprietary technology.

The consortium’s aim is to create a unified and open standard that promotes rapid development and innovation in AI accelerator technology.

By facilitating collaboration among multiple industry leaders, the UALink Consortium intends to foster a diverse ecosystem of AI hardware solutions that can enhance the performance and scalability of data center operations. This initiative underscores the growing need for standardized interconnects to support the increasing complexity and demands of AI workloads.

Nvidia’s absence from the UALink Promoter Group is notable, given the company’s dominant position in the AI accelerator market. Nvidia currently offers its own proprietary interconnect technology for linking GPUs within data center servers and is unlikely to support a specification based on rival technologies. The company’s immense influence and robust market position further diminish its need to adopt the UALink standard.

In Nvidia’s most recent fiscal quarter (Q1 2025), the company’s data center sales, including AI chip sales, surged by over 400% compared to the same period last year. This remarkable growth trajectory positions Nvidia to potentially surpass Apple as the world’s second-most valuable firm by the end of the year.

Consequently, Nvidia has little incentive to align with the UALink initiative. Amazon Web Services (AWS), another significant player not participating in UALink, may be adopting a cautious approach, observing the consortium’s developments while advancing its in-house AI accelerator hardware efforts.

AWS, which commands a substantial share of the cloud services market, relies heavily on Nvidia GPUs for its customer offerings and might not find it strategically beneficial to oppose Nvidia at this stage.

The primary beneficiaries of UALink, aside from AMD and Intel, appear to be Microsoft, Meta, and Google. These tech giants have collectively invested billions of dollars in Nvidia GPUs to support their cloud infrastructure and AI model training. By supporting UALink, they aim to reduce their reliance on Nvidia, whose dominance in the AI hardware ecosystem is seen as concerning.

Conclusion

The formation of the Ultra Accelerator Link (UALink) Promoter Group by leading tech companies such as Intel, Google, Microsoft, and Meta represents a significant step towards standardizing AI accelerator interconnects in data centers. By developing an open and unified standard, the consortium aims to foster rapid innovation, enhance performance, and reduce latency in AI workloads.

This initiative addresses the growing need for efficient and scalable AI hardware solutions, potentially transforming the competitive landscape by reducing dependency on proprietary technologies like those from Nvidia. As the consortium prepares to launch UALink 1.0 and its subsequent versions, it promises to significantly advance the capabilities of AI applications and infrastructure.

Unlock the future of AI with Arcitech. Explore our cutting-edge solutions designed to enhance your business operations with advanced AI technologies. Partner with us to leverage the power of standardized AI accelerator interconnects and stay ahead.
