Open-Source Large Language Models (LLMs)
-
01.
DeepSeek-V3.2 — a reasoning-first model built for
agents, now available on web, app, and API.
-
02.
MiMo-V2-Flash is a powerful, efficient, and
ultra-fast foundation language model that
particularly excels in reasoning, coding, and
agentic scenarios.
-
03.
Kimi-K2.5 from Moonshot AI is a
trillion-parameter MoE model (32B active
parameters) that takes a unique approach: it
integrates vision natively from the beginning of
pretraining. The model was trained on
approximately 15 trillion mixed vision and text
tokens, with a constant vision-text mixing ratio
throughout.
-
04.
GLM-4.7-Flash is the strongest model in the
30B class, offering a new option for lightweight
deployment that balances performance and
efficiency.
-
05.
GPT-OSS-120b is an open-weight model that fits
on a single H100 GPU (117B total parameters,
5.1B active per token).
-
06.
Qwen3-235B-A22B-Instruct-2507 packs 235B total
parameters (22B active per token across 128
experts) and delivers state-of-the-art
performance on instruction-following, coding,
math, and science benchmarks.
-
07.
Meta's Llama 4 series introduces natively
multimodal models (Llama 4 Scout and Llama 4
Maverick) capable of processing both text and
images from the ground up.
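
Several of the models above (Kimi-K2.5, GPT-OSS-120b, Qwen3-235B-A22B) are mixture-of-experts (MoE) models, where "active parameters" means only a small routed subset of the weights runs for each token. A minimal sketch of top-k MoE routing is below; the layer sizes and expert counts are toy values for illustration, not any of these models' real configurations.

```python
import numpy as np

# Toy MoE layer: 128 experts, 8 active per token (illustrative only).
rng = np.random.default_rng(0)
n_experts, k, d = 128, 8, 64
experts = rng.standard_normal((n_experts, d, d)) * 0.02  # toy expert FFNs
router_w = rng.standard_normal((d, n_experts)) * 0.02    # router projection

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                       # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]  # indices of k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        gates = logits[t, topk[t]]
        gates = np.exp(gates - gates.max())
        gates /= gates.sum()                    # softmax over selected experts
        for g, e in zip(gates, topk[t]):
            out[t] += g * (x[t] @ experts[e])   # only k of 128 experts run
    return out

tokens = rng.standard_normal((4, d))
y = moe_layer(tokens)
print(y.shape)  # (4, 64)

# Only k/n_experts of the expert weights execute per token, which is why
# a model's "active" parameter count can be far below its total count.
active_frac = k / n_experts
print(active_frac)  # 0.0625
```

The compute cost per token scales with the active fraction (here 8/128), while total capacity scales with all experts, which is how a 235B-total model can run at roughly 22B-active cost per token.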