Tencent Will Launch Hunyuan T1 Inference Model on March 21
Tencent's large language model (LLM) division has announced the imminent launch of its T1 AI inference model. The Chinese technology giant's Hunyuan social media accounts revealed a grand arrival scheduled for Friday (March 21), with a friendly reminder issued to interested parties regarding the upcoming broadcast/showcase: "please set aside your valuable time. Let's step into T1 together." Earlier in the week, the Tencent AI team began teasing its "first ultra-large Mamba-powered reasoning model." Local news reports have highlighted Hunyuan's claim that the Mamba architecture has been applied losslessly to a super-large Mixture of Experts (MoE) model.
Late last month, the company released its Hunyuan Turbo S AI model—advertised as offering faster replies than DeepSeek's R1 system. Tencent's plucky solution has quickly climbed the Chatbot Arena LLM Leaderboard. The Hunyuan team was in a boastful mood earlier today, loudly proclaiming that its proprietary Turbo S model had charted in fifteenth place. At the time of writing, DeepSeek R1 is ranked seventh on the leaderboard. As explained by ITHome, the community-driven platform is powered by user interaction: "with multiple models anonymously, voting to decide which model is better, and then generating a ranking list based on the scores. This kind of evaluation is also seen as an arena for big models to compete directly, which is simple and direct."
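The ranking mechanism ITHome describes—anonymous head-to-head votes aggregated into a leaderboard—can be sketched with a standard Elo update rule. This is an illustrative approximation only; the model names, the K-factor, and the update formula here are assumptions, and Chatbot Arena's actual scoring methodology may differ.

```python
# Sketch of a Chatbot Arena-style leaderboard built from anonymous
# pairwise votes, using a conventional Elo rating update.
# Model names and parameters below are hypothetical.

def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Return updated (winner, loser) ratings after one vote."""
    # Expected win probability for the winner under the Elo model.
    expected_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_win)
    return r_winner + delta, r_loser - delta

# Every model starts from the same baseline rating.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}

# Simulated anonymous votes: each tuple is (winner, loser).
votes = [("model_a", "model_b"), ("model_a", "model_c"), ("model_b", "model_c")]
for winner, loser in votes:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

# Sort by rating, descending, to produce the leaderboard order.
leaderboard = sorted(ratings, key=ratings.get, reverse=True)
```

The appeal of this scheme, as the quote notes, is its directness: no curated benchmark is needed, since rankings emerge purely from which responses users prefer.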