Event Reports
November 6, 2025
RockAI CEO Liu Fanping Took the Stage at RTE 2025, Sharing Insights on the Architectural Innovation and Applications of On-Device Large Models.
Recently, the Convo AI & RTE 2025 Real-Time Internet Conference and Conversational AI Forum, jointly hosted by Agora and the RTE Developer Community, was held in Beijing. RockAI CEO Liu Fanping took the stage at the On-Device and Hardware Technology Session, delivering a keynote titled "Architectural Innovation and Applications of On-Device Large Models." From the perspective of large model evolution, Liu shared insights on the rise and challenges of on-device large models, their architecture design, and real-world applications. When discussing innovations beyond the Transformer architecture, he emphasized that "autonomous learning and memory capability are the true core."


"Autonomous Learning and Memory Capability are the True Core."
Focusing on the convergence of on-device intelligence and hardware, Liu noted that intelligence will redefine hardware itself. In the future, a device's value will no longer be fixed at the time of production; instead, intelligent hardware will continuously evolve through user interaction and become increasingly valuable over time.

The most critical capability of on-device models lies in autonomous learning and memory. RockAI has independently developed Yan, China's first non-Transformer, non-Attention architecture large model, which features native memory and selective activation mechanisms to enable true memory capabilities. Yan 2.0 Preview has demonstrated this capability: unlike external memory systems, Yan's memory module is built directly into the model architecture, synchronizing training and inference so the model can accumulate and refine knowledge in real time through interaction, achieving true learn-as-you-use capability.
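The talk does not disclose how Yan's memory module is implemented; the following is a minimal, purely illustrative sketch of the general idea described above, a memory component with selectively activated slots that are updated during inference so the layer "learns as it is used." Every name and hyperparameter here (ToyNativeMemoryLayer, num_slots, top_k, lr) is hypothetical and not taken from RockAI's architecture.

```python
# Purely illustrative: a toy memory layer whose slots are selectively
# activated (top-k) and nudged during inference, so repeated interaction
# refines what the layer recalls. Not RockAI's actual Yan implementation.
import numpy as np

class ToyNativeMemoryLayer:
    def __init__(self, num_slots=16, dim=8, top_k=2, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.normal(size=(num_slots, dim))   # addresses for each memory slot
        self.values = np.zeros((num_slots, dim))        # stored content, refined over time
        self.top_k = top_k
        self.lr = lr

    def __call__(self, x, learn=True):
        # Selective activation: score every slot, keep only the top-k matches.
        scores = self.keys @ x
        active = np.argsort(scores)[-self.top_k:]
        weights = np.exp(scores[active] - scores[active].max())
        weights /= weights.sum()

        # Read: blend the activated slots' values into the output.
        read = weights @ self.values[active]

        # Write ("learn as you use"): nudge the activated slots toward the
        # current input during inference, not only during offline training.
        if learn:
            self.values[active] += self.lr * weights[:, None] * (x - self.values[active])
        return x + read  # residual-style mix of input and recalled memory

if __name__ == "__main__":
    layer = ToyNativeMemoryLayer()
    x = np.ones(8)
    before = layer(x, learn=False)
    for _ in range(20):           # repeated interaction with similar inputs
        layer(x, learn=True)
    after = layer(x, learn=False)
    print(np.linalg.norm(before - x), np.linalg.norm(after - x))
```

Running the script shows the output drifting away from the raw input as the activated slots accumulate content from repeated interaction, a miniature version of the learn-as-you-use behavior described above.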

"Make Every Device Its Own Intelligence ."
How can intelligence be truly integrated into hardware? RockAI's answer is offline intelligence, enabling large models to run locally on devices. Upholding its mission to "Make Every Device Its Own Intelligence," RockAI has successfully deployed its large models across robots, dexterous hands, smartphones, PCs, tablets, and even Raspberry Pi devices. The company has also achieved mass production of its AIPC, bringing intelligent capabilities into more people's everyday lives.

Non-Transformer architecture, native memory, and offline intelligence together define RockAI’s technological framework. As on-device large models and hardware continue to deeply converge, RockAI is driving every device toward true intelligence through continuous iteration of its Yan architecture large model.