Chinese artificial intelligence (AI) lab Z.ai announced the release of two new open-source general language models (GLM) on Monday. Dubbed GLM-4.5 and GLM-4.5-Air, the AI firm calls them its latest flagship models. Both are hybrid reasoning models that offer a thinking mode for complex reasoning and tool use, and a non-thinking mode for instant responses. Additionally, the company says these models support agentic capabilities. Notably, the AI firm claims that its latest models outperform all other open-source models worldwide.
In a blog post, the Chinese AI firm announced the release of these models. The idea behind the GLM models is to create a large language model (LLM) that is truly generalist and can perform different types of tasks equally well. The company argues that despite several advancements in generative AI, models from the likes of Google, OpenAI, and Anthropic are not general enough, as they show strong performance in some areas while lagging in others. "GLM-4.5 makes efforts toward the goal of unifying all the different capabilities," the company said.
The GLM-4.5 AI model features a total of 355 billion parameters, with 32 billion active parameters. The Air variant, on the other hand, gets 106 billion total parameters, with 12 billion parameters being active. Both models unify reasoning, coding, and agentic capabilities in a single architecture. Each has a context window of 128,000 tokens and comes with native function calling capability.
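To make the active-versus-total distinction concrete: in a sparse mixture-of-experts model, only a subset of the parameters is used for any given token, which is why the active count is far below the total. A minimal illustrative sketch (our own arithmetic on the reported figures, not Z.ai's code):

```python
# Illustrative sketch: in a mixture-of-experts model, only some expert
# parameters are activated per token, so the "active" count is a small
# fraction of the total. Figures below are the counts reported by Z.ai.
def active_fraction(total_billions: float, active_billions: float) -> float:
    """Fraction of the model's parameters activated per forward pass."""
    return active_billions / total_billions

glm_45 = active_fraction(355, 32)       # ~0.09 for GLM-4.5
glm_45_air = active_fraction(106, 12)   # ~0.11 for GLM-4.5-Air

print(f"GLM-4.5 activates about {glm_45:.0%} of its parameters per token")
print(f"GLM-4.5-Air activates about {glm_45_air:.0%} of its parameters per token")
```

This sparsity is what lets a 355-billion-parameter model run inference at roughly the cost of a 32-billion-parameter dense model.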
Coming to the model architecture, Z.ai opted for a mixture-of-experts (MoE) design to improve the compute efficiency of both training and inference. Instead of increasing the width (hidden dimension and number of experts) of the MoE layers as DeepSeek-V3 does, the GLM-4.5 series reduces the width while increasing the height (number of layers). This was done because the company believed that deeper models displayed improved reasoning capability.
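A rough way to see this trade-off: under a deliberately simplified parameter-count model (our own illustration with made-up numbers, not Z.ai's actual configuration), a deeper-but-narrower stack can land on the same total parameter budget as a wider-but-shallower one.

```python
# Simplified illustration (assumed toy formula, not the real GLM-4.5 config):
# approximate an MoE model's expert parameters as
#   layers x experts-per-layer x parameters-per-expert.
def moe_params(layers: int, experts: int, params_per_expert: int) -> int:
    return layers * experts * params_per_expert

PER_EXPERT = 25_000_000  # hypothetical size of one expert's weights

# A "wider" design: fewer layers, more experts per layer.
wide = moe_params(layers=60, experts=256, params_per_expert=PER_EXPERT)

# A "deeper" design: more layers, fewer experts per layer.
deep = moe_params(layers=96, experts=160, params_per_expert=PER_EXPERT)

# Same total budget either way; the difference is where it is spent.
print(f"wide: {wide / 1e9:.0f}B params, deep: {deep / 1e9:.0f}B params")
```

At a fixed budget, the designer's choice is purely about shape, and Z.ai's bet is that spending the budget on height (layers) rather than width pays off in reasoning quality.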
The Chinese AI firm also listed the novel techniques used for the pre-training and post-training process in the blog post, to help the developer community understand how the models were built from scratch.
Performance of GLM-4.5 series AI models
Photo Credit: Z.ai
Z.ai claimed to have tested the GLM-4.5 model's performance on 12 benchmarks across agentic, reasoning, and coding tasks. It then compared the model's overall scores against various LLMs from OpenAI, Anthropic, Google, xAI, Alibaba, and more. Based on this internal evaluation, the Chinese AI firm claims that GLM-4.5 ranked third, behind OpenAI's o3 and xAI's Grok 4.
Interested individuals can access the open weights of these models from Z.ai's GitHub and Hugging Face listings. Alternatively, these LLMs can also be accessed via the company's website and application programming interface (API).