Infinigence Unveils Next-Gen AI Infrastructure Suite, Aims to Lead China’s AI Deployment Revolution
Xia Lixue, Co-founder and CEO of Infinigence
AsianFin -- Infinigence, an AI infrastructure startup backed by Tsinghua University, introduced a sweeping portfolio of performance-optimized computing platforms targeting the full spectrum of AI deployment at this year's World Artificial Intelligence Conference (WAIC 2025).
The company officially launched three flagship products under its integrated solution suite: InfiniCloud, a global-scale AI cloud platform for clusters of up to 100,000 GPUs; InfiniCore, a high-performance intelligent computing platform designed for multi-thousand-GPU clusters; and InfiniEdge, a lean edge computing solution optimized for terminal deployments with as few as one GPU.
Together, the platforms represent what CEO Xia Lixue calls a “software-hardware co-designed infrastructure system for the AI 2.0 era.” Built for compatibility across heterogeneous computing environments, the Infinigence stack offers full lifecycle support—from model scheduling and performance optimization to large-scale application deployment.
“We’re addressing a core bottleneck in China’s AI industry: fragmentation in compute infrastructure,” Xia said. “With InfiniCloud, InfiniCore, and InfiniEdge, we’re enabling AI developers to move seamlessly between different chips, architectures, and workloads—unlocking intelligent performance at scale.”
In a fast-evolving AI landscape dominated by open-source large language models such as DeepSeek, GLM-4.5, and MiniMax M1, Chinese infra startups are racing to build the backbone that powers model deployment and inference.
Earlier, on July 29, Infinigence announced that InfiniCloud now supports Zhipu AI's latest GLM-4.5 and GLM-4.5-Air models, which currently rank third globally in performance. The move signals Infinigence's ambition to anchor the growing synergy between Chinese model developers and domestic chipmakers.
Xia likened the trio of newly launched platforms to “three bundled boxes” that can be matched to AI workloads of any scale. “From a single smartphone to clusters of 100,000 GPUs—our system is designed to ensure resource efficiency and intelligent elasticity,” he said.
Infinigence's platforms are already powering Shanghai ModelSpeed Space, the world's largest AI incubator. The facility handles daily token call volumes exceeding 10 billion, supports over 100 AI use cases, and reaches tens of millions of monthly active users across its applications.
A key challenge for China’s AI infrastructure sector is hardware heterogeneity. With dozens of domestic chip vendors and proprietary architectures, developers often struggle to port models across systems.
Xia emphasized that Infinigence has developed a “universal compute language” that bridges chips with disparate instruction sets. “We treat computing resources like supermarket goods—plug-and-play, interoperable, and composable,” he said.
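The article does not describe how this "universal compute language" is implemented. As a rough illustration of the idea only, the following sketch shows one common way to hide vendor-specific chips behind a single dispatch interface, so the same workload can run on whichever backend is present; all names here (Backend, REGISTRY, run_matmul) are hypothetical and not Infinigence's actual API.

```python
# Hypothetical sketch of a hardware-abstraction layer: register per-vendor
# kernels under one interface and dispatch by availability.
from dataclasses import dataclass
from typing import Callable, Dict, List

Matrix = List[List[float]]


def reference_matmul(a: Matrix, b: Matrix) -> Matrix:
    """Plain-Python matmul standing in for a vendor-specific kernel."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]


@dataclass
class Backend:
    name: str                                   # e.g. a domestic GPU, an NPU, or a CPU fallback
    matmul: Callable[[Matrix, Matrix], Matrix]  # the kernel this chip exposes


REGISTRY: Dict[str, Backend] = {
    "vendor_a_gpu": Backend("vendor_a_gpu", reference_matmul),
    "cpu_fallback": Backend("cpu_fallback", reference_matmul),
}


def run_matmul(a: Matrix, b: Matrix, preferred: str = "vendor_a_gpu") -> Matrix:
    """Dispatch to the preferred chip if registered, else fall back transparently."""
    backend = REGISTRY.get(preferred, REGISTRY["cpu_fallback"])
    return backend.matmul(a, b)


if __name__ == "__main__":
    print(run_matmul([[1.0, 2.0]], [[3.0], [4.0]]))   # -> [[11.0]]
```

In a real stack the registry entries would wrap compiled vendor kernels rather than Python functions, but the plug-and-play composition Xia describes follows the same pattern.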
The company’s infrastructure has already achieved full-stack adaptation for more than a dozen domestic chips, delivering 50%–200% performance gains through algorithm and compiler optimization. It also supports unified scheduling and mixed-precision computing, enabling cost-performance ratios that beat many international offerings.
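To make the mixed-precision point concrete, here is a minimal NumPy-only sketch of the general technique (not a description of Infinigence's implementation): the forward pass runs in float16 to save memory and bandwidth, while a float32 master copy of the weights receives the updates.

```python
# Minimal mixed-precision sketch: low-precision compute, full-precision master weights.
import numpy as np

rng = np.random.default_rng(0)
w_master = rng.standard_normal((256, 256)).astype(np.float32)  # float32 master copy
x = rng.standard_normal((32, 256)).astype(np.float32)

# Forward pass in reduced precision (half the memory traffic of float32)
y_fp16 = x.astype(np.float16) @ w_master.astype(np.float16)

# Updates are applied to the float32 master copy (dummy gradient for illustration)
grad = np.ones_like(w_master)
w_master -= 1e-3 * grad

print(y_fp16.dtype, w_master.dtype)   # float16 float32
```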
“What’s missing in China’s ecosystem is a feedback loop,” Xia said. “In the U.S., NVIDIA and OpenAI form a tight cycle: model developers know what chips are coming, and chipmakers know what models are being built. We’re building that loop domestically.”
Infinigence is also targeting AI democratization with a first-of-its-kind cross-regional federated reinforcement learning system. The system links idle GPU resources from different regional AIDC centers into a unified compute cluster—allowing SMEs to build and fine-tune domain-specific inference models using consumer-grade cards.
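The announcement does not spell out the aggregation algorithm, but federated schemes of this kind typically alternate local updates at each site with a global merge step. The sketch below shows plain federated averaging under that assumption; RegionalCluster and fed_average are illustrative names, not Infinigence components.

```python
# Hedged sketch of federated averaging across regional clusters: each AIDC site
# fine-tunes locally, then the coordinator averages the resulting weights.
import numpy as np


class RegionalCluster:
    """Stands in for one AIDC site with local data and idle GPUs."""

    def __init__(self, seed: int):
        self.rng = np.random.default_rng(seed)

    def local_update(self, global_weights: np.ndarray) -> np.ndarray:
        # Simulate a local fine-tuning step; real sites would train on local data.
        return global_weights + 0.01 * self.rng.standard_normal(global_weights.shape)


def fed_average(updates: list[np.ndarray]) -> np.ndarray:
    """Merge per-region weights into a new global model (simple mean)."""
    return np.mean(updates, axis=0)


global_w = np.zeros(8)
clusters = [RegionalCluster(seed) for seed in range(3)]

for _ in range(5):                                   # a few synchronization rounds
    updates = [c.local_update(global_w) for c in clusters]
    global_w = fed_average(updates)

print(global_w.round(4))
```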
To support this, Infinigence launched the “AIDC Joint Operations Innovation Ecosystem Initiative” in partnership with China’s three major telecom providers and 20+ AIDC institutions.
Xia noted that while training still depends heavily on NVIDIA hardware, inference workloads are rapidly migrating to domestic accelerators. “Users often start with international chips on our platform, but we help them transition to Chinese cards—many of which now deliver strong commercial value,” he said.
Infinigence has also rolled out a series of on-device and edge inference engines under its Infini-Ask line. These include:
- Infini-Megrez2.0, co-developed with the Shanghai Institute of Creative Intelligence, the world's first on-device intrinsic model.
- Infini-Mizar2.0, built with Lenovo, which enables heterogeneous computing across AI PCs, boosting local model capacity from 7B to 30B parameters.
- A low-cost FPGA-based large model inference engine, jointly developed with Suzhou Yige Technology.
Founded in May 2023, Infinigence has raised more than RMB 1 billion in just two years, including a record-setting RMB 500 million Series A round in 2024—the largest to date in China’s AI infrastructure sector.
Its product portfolio now spans everything from model hosting and cloud management to edge optimization and model migration—serving clients across intelligent computing centers, model providers, and industrial sectors.
The company’s broader mission, Xia said, is to balance scale, performance, and resource availability. “Our vision is to deliver ‘boundless intelligence and flawless computing’—wherever there's compute, we want Infinigence to be the intelligence that flows through it.”
IEEE Fellow and Tsinghua professor Wang Yu, also a co-founder of Infinigence, argued that the future of China’s AI economy depends on interdisciplinary collaboration. “We need people who understand chips, models, commercialization, and investment,” Wang said. “Only then can we solve the ‘last mile’ problem—connecting AI research with real-world deployment.”
As China looks to decouple from foreign hardware dependence while competing globally in next-gen AI, Infinigence is positioning itself as a vital enabler—fusing chip-level control with cloud-scale ambition.
“Every AI system runs on two forces: models and compute,” Xia said. “They cannot evolve in silos—they must move forward in sync.”