As global competition in AI governance intensifies, China has taken a decisive step by standardising AI terminal classification. This framework, developed by the Ministry of Industry and Information Technology (MIIT) and involving tech giants like Baidu and Tencent, aims to unify safety protocols and ethical benchmarks for AI applications. Learn how this strategic move positions China as a rule-maker in the AI tools ecosystem and impacts global markets.
The Strategic Imperative Behind China's AI Classification Framework
In December 2024, China established the National New Generation Artificial Intelligence Governance Standards Committee, comprising executives from Baidu, Alibaba, and Tencent (BAT). The committee focuses on creating standards for large language models, risk assessment, and industrial applications. By 2026, China aims to develop over 50 national AI standards and participate in more than 20 international ones, signalling its ambition to lead in AI tools regulation rather than follow Western frameworks.
Balancing Innovation and Risk Mitigation
In contrast to its hands-off approach to the early internet sectors, China has adopted a proactive stance on AI governance. Recent guidelines mandate strict safety evaluations for generative AI services, covering data security and ethical compliance. For instance, the Generative AI Service Security Basic Requirements (2024) enforce rigorous testing for chatbots like Baidu's Ernie Bot before public release. This contrasts with the EU's risk-tiered AI Act and highlights China's hybrid model of state-led standardisation and corporate collaboration.
Technical Architecture of China's AI Terminal Standards
The framework categorises AI systems based on their application domains and risk levels. Key components include:
1. Safety Classification for Generative AI
Inspired by Anthropic's AI Safety Levels (ASL), China's standards classify systems from low-risk (e.g., industrial robots) to high-risk (e.g., autonomous weapons). The 2025 guidelines require AI tools in healthcare and transportation to undergo mandatory security audits, with failure rates capped at 0.1% for critical infrastructure.
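The tiered scheme described above can be illustrated with a minimal sketch. The tier examples, the audit mandate for healthcare and transportation, and the 0.1% failure cap are taken from the article; every identifier below is hypothetical and does not correspond to any published MIIT specification.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. industrial robots
    HIGH = "high"  # e.g. autonomous weapons, healthcare, transportation

# Hypothetical mapping of application domains to risk tiers.
DOMAIN_TIERS = {
    "industrial_robotics": RiskTier.LOW,
    "healthcare": RiskTier.HIGH,
    "transportation": RiskTier.HIGH,
}

CRITICAL_FAILURE_CAP = 0.001  # the 0.1% cap cited for critical infrastructure

@dataclass
class AuditResult:
    domain: str
    failure_rate: float  # observed failure rate during testing

    def requires_security_audit(self) -> bool:
        # Per the 2025 guidelines, high-risk domains face mandatory audits.
        return DOMAIN_TIERS.get(self.domain) is RiskTier.HIGH

    def passes(self) -> bool:
        if not self.requires_security_audit():
            return True
        return self.failure_rate <= CRITICAL_FAILURE_CAP

print(AuditResult("healthcare", 0.0005).passes())     # True: within the 0.1% cap
print(AuditResult("transportation", 0.002).passes())  # False: exceeds the cap
```

The point of the sketch is that classification is domain-driven: the audit requirement follows from where a system is deployed, not from how it is built.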
2. Industry-Specific Benchmarking
For sectors like manufacturing, standards such as Industrial Large Model Technical Requirements (2025) define parameters for predictive maintenance and quality control. Baidu's Apollo autonomous driving platform, for example, now adheres to MIIT's real-time data processing benchmarks, reducing latency by 40%.
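A latency benchmark of the kind the article attributes to MIIT could be checked with a sketch like the following. The 100 ms threshold, the function names, and the sample values are illustrative assumptions, not published figures; only the 40% latency reduction is cited in the article.

```python
def p99_latency_ms(samples):
    """Return the 99th-percentile latency (ms) from a list of samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[idx]

def meets_benchmark(samples, threshold_ms=100.0):
    # Hypothetical pass/fail rule: p99 latency must not exceed the threshold.
    return p99_latency_ms(samples) <= threshold_ms

# A 40% latency reduction (as cited for Apollo) shifts the whole distribution:
baseline = [80.0, 90.0, 100.0, 120.0]
improved = [x * 0.6 for x in baseline]  # 40% lower: [48, 54, 60, 72]

print(meets_benchmark(baseline))  # False: p99 of 120 ms breaches the threshold
print(meets_benchmark(improved))  # True: p99 of 72 ms clears it
```

Real-time requirements are usually expressed on tail percentiles rather than averages, since a platform like autonomous driving fails on its worst-case responses, not its typical ones.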
Global Implications and Industry Responses
Stanford's 2025 AI Index Report notes that China's AI models now trail US counterparts by less than 2% in key benchmarks like MMLU. Standardisation could further narrow this gap by streamlining R&D. However, foreign firms face hurdles: compliance with China's data-localisation rules adds 15–20% to operational costs, per the EU Chamber of Commerce.
Corporate Adaptation Strategies
Companies like DeepSeek have aligned their models with national standards. DeepSeek-R1, optimised for MIIT's computational efficiency metrics, has cut training costs roughly 280-fold since 2022. Meanwhile, Tencent's WeChat AI now integrates state-approved content moderation algorithms, blocking 94% of policy-sensitive queries.
Key Takeaways
- 50+ national AI standards to be developed by 2026
- Mandatory security audits for high-risk AI tools
- Under 2% performance gap between Chinese and US models
- Roughly 280-fold reduction in AI training costs since 2022
- 94% policy compliance rate in Tencent's AI systems