Alibaba affiliate Ant Group is reportedly using a mix of U.S.- and Chinese-made semiconductors to make its artificial intelligence (AI) development more efficient. The company says the combination will help reduce the time and cost of training AI models while limiting reliance on any single supplier, such as Nvidia.
According to CNBC, Ant is following an industry trend of tapping multiple smaller networks through a technique known as Mixture of Experts (MoE), which allows models to be trained with far less compute. Instead of one large model handling everything, MoE divides the work among smaller "expert" models, each designed to handle a specific type of input or task.
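The routing idea behind MoE can be sketched in a few lines. The following is a toy illustration, not Ant's or any production implementation; the `Expert` and `MoELayer` names, the linear "experts," and all parameters are invented for the example. The key point it demonstrates is that a gate scores all experts but only the top-k actually run, which is where the compute savings come from:

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class Expert:
    """A tiny linear 'expert' (y = w . x), a stand-in for a real sub-network."""
    def __init__(self, dim):
        self.w = [random.uniform(-1, 1) for _ in range(dim)]
        self.calls = 0  # counts how often this expert is actually evaluated
    def forward(self, x):
        self.calls += 1
        return sum(wi * xi for wi, xi in zip(self.w, x))

class MoELayer:
    """Mixture-of-Experts layer: a gating network scores every expert for
    each input, but only the top_k highest-scoring experts are evaluated,
    so most parameters stay idle on any one input."""
    def __init__(self, dim, num_experts, top_k=2):
        self.experts = [Expert(dim) for _ in range(num_experts)]
        self.gate = [[random.uniform(-1, 1) for _ in range(dim)]
                     for _ in range(num_experts)]
        self.top_k = top_k
    def forward(self, x):
        scores = [sum(g * xi for g, xi in zip(row, x)) for row in self.gate]
        probs = softmax(scores)
        # Route to the top-k experts only; renormalize their gate weights.
        top = sorted(range(len(probs)), key=lambda i: probs[i],
                     reverse=True)[:self.top_k]
        total = sum(probs[i] for i in top)
        return sum(probs[i] / total * self.experts[i].forward(x) for i in top)

layer = MoELayer(dim=4, num_experts=8, top_k=2)
out = layer.forward([0.5, -1.0, 0.3, 0.9])
# Only 2 of the 8 experts were evaluated for this input.
print(sum(e.calls for e in layer.experts))
```

In a real MoE transformer the experts are feed-forward sub-networks and the gate is learned jointly with them, but the routing logic is the same: total parameter count grows with the number of experts while per-input compute stays roughly constant.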
Earlier, the company said in a paper that it was able to use lower-cost hardware to effectively train its own MoE models, reducing computing costs by 20%.
According to a Bloomberg report, Ant has used chips from Alibaba and Huawei for training AI models. While Ant has also used Nvidia chips in the past, it now relies more on alternatives from Advanced Micro Devices (AMD) and Chinese chips.
This comes as the U.S. restricts Chinese businesses' access to the most advanced semiconductors used for training AI models; Nvidia can still sell its lower-end chips in China.
On Monday, Ant Group announced “major upgrades” to its AI solutions for healthcare, which it said were being used by seven major hospitals and healthcare institutions in Beijing, Shanghai, Hangzhou and Ningbo. The healthcare AI model is built on DeepSeek’s R1 and V3 models, Alibaba’s Qwen and Ant’s own BaiLing. The model is capable of answering questions about medical topics, and can also help improve patient services, according to the company’s statement.
According to a Fortune report, Ant Group plans to follow in DeepSeek's footsteps and scale high-performing AI models "without premium GPUs."