fireworks/models/deepseek-v3p1

Common Name: DeepSeek V3.1

Fireworks
Released on Oct 16, 12:00 AM
Supported: Tool Invocation

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.
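For context, the multipliers quoted above imply the original V3 extension phases were each roughly 63B tokens; a quick check of that arithmetic (figures taken from the paragraph above):

```python
# Implied pre-extension phase sizes, derived from the multipliers quoted above.
phase_32k_tokens = 630e9   # V3.1 32K extension phase (10x the original)
phase_128k_tokens = 209e9  # V3.1 128K extension phase (3.3x the original)

print(f"Original 32K phase:  ~{phase_32k_tokens / 10 / 1e9:.0f}B tokens")
print(f"Original 128K phase: ~{phase_128k_tokens / 3.3 / 1e9:.0f}B tokens")
# -> roughly 63B tokens each
```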

Specifications

Context: 160K
Input: text
Output: text
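As a usage illustration, here is a minimal sketch of a text-in/text-out call to this model through Fireworks' OpenAI-compatible chat completions endpoint. The base URL and the fully qualified model identifier are assumptions based on the slug above and Fireworks' published API conventions, so verify them against the current docs:

```python
# Minimal sketch: chat completion against Fireworks' OpenAI-compatible API.
# Assumption: the base URL and "accounts/fireworks/models/deepseek-v3p1" follow Fireworks' usual naming.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed Fireworks inference endpoint
    api_key="FIREWORKS_API_KEY",                        # replace with your key
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/deepseek-v3p1",   # slug from this page, fully qualified
    messages=[{"role": "user", "content": "Summarize the DeepSeek-V3.1 long-context changes."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```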

Performance (7-day Average)

Collecting…

Pricing

Input: $0.62 / M tokens
Output: $1.85 / M tokens
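As a worked example of how these rates add up, the cost of a hypothetical request (the token counts below are illustrative, not from this page):

```python
# Illustrative cost arithmetic at the listed rates; token counts are hypothetical.
INPUT_RATE = 0.62   # USD per 1M input tokens
OUTPUT_RATE = 1.85  # USD per 1M output tokens

input_tokens, output_tokens = 10_000, 2_000  # hypothetical request
cost = input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE
print(f"${cost:.4f}")  # -> $0.0099
```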

Availability Trend (24h)

Performance Metrics (24h)

Similar Models

$0.62 in / $1.85 out per M tokens, 160K context

DeepSeek-V3.1-Terminus is an updated version of DeepSeek-V3.1 with enhanced language consistency, reduced mixed Chinese-English text, and optimized Code Agent and Search Agent performance.

$0.66 in / $2.75 out per M tokens, 256K context

Kimi K2 0905 is an updated version of Kimi K2, a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Kimi K2 0905 has improved coding abilities, enhanced agentic tool use, and a longer (262K) context window.

$0.50 in / $1.98 out per M tokens, 256K context

Qwen3's most agentic code model to date

$0.66 in / $2.75 out per M tokens, 128K context

Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capabilities.