xAI launches API for Grok 3

Despite the recent countersuit from OpenAI, xAI, the company founded by Elon Musk, isn’t slowing down. Today, it was revealed that xAI’s flagship Grok 3 is now available to developers via API – along with a lighter version, Grok 3 Mini.
First introduced a few months ago, Grok 3 is xAI’s answer to next-generation models like OpenAI’s GPT-4o and Google’s Gemini. The model can analyze images and answer questions, and it already powers some features on the X social network, which was acquired by xAI in March 2025.
Through the API, xAI offers two model variants: Grok 3 and Grok 3 Mini, both with reasoning capabilities. Pricing depends on the volume of tokens processed and generated. Grok 3 costs $3 per million input tokens and $15 per million output tokens, while an accelerated version of the model costs more – $5 per million input tokens and $25 per million output tokens.
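For developers, a request to the new endpoint looks much like a call to any OpenAI-compatible chat API. Below is a minimal sketch using the openai Python SDK pointed at xAI; the base URL and the model identifiers (“grok-3”, “grok-3-mini”) are assumptions for illustration, not details confirmed here.

```python
# Minimal sketch of a chat request to Grok 3 via an OpenAI-compatible client.
# The base URL and model names below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_XAI_API_KEY",       # key issued through the xAI developer console
    base_url="https://api.x.ai/v1",   # assumed xAI endpoint
)

response = client.chat.completions.create(
    model="grok-3",                   # or "grok-3-mini" for the cheaper, lighter model
    messages=[
        {"role": "user", "content": "Summarize today's AI news in one sentence."}
    ],
)
print(response.choices[0].message.content)
```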

For those looking for a more affordable option, there’s Grok 3 Mini: $0.30 per million input tokens and $0.50 per million output tokens. A faster Mini variant costs $0.60 and $4, respectively.
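To put the per-token rates in perspective, here is a rough back-of-the-envelope estimate of what a single request would cost at the prices listed above; the token counts are arbitrary example values.

```python
# Rough request-cost estimate from the quoted per-million-token prices.
PRICES = {
    "grok-3":      (3.00, 15.00),   # ($ per 1M input tokens, $ per 1M output tokens)
    "grok-3-mini": (0.30,  0.50),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# Example: 10,000 input tokens and 2,000 output tokens.
print(f"Grok 3:      ${estimate_cost('grok-3', 10_000, 2_000):.4f}")       # $0.0600
print(f"Grok 3 Mini: ${estimate_cost('grok-3-mini', 10_000, 2_000):.4f}")  # $0.0040
```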
Nevertheless, compared with its competitors, xAI’s pricing can’t be called budget-friendly. Grok 3 is priced comparably to Anthropic’s Claude 3.7 Sonnet, another model focused on “reasoning,” but it compares unfavorably with Google’s Gemini 2.5 Pro, which performs better on most AI benchmarks. What’s more, xAI has already been accused of publishing misleading Grok 3 benchmark comparisons.
X users have also noticed that the API offers less context than previously promised. While in February xAI talked about supporting a context window of up to 1 million tokens, the API currently maxes out at 131,072 tokens (roughly 97,500 words).
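That 131,072-token ceiling corresponds to roughly 0.75 words per token, which is where the ~97,500-word figure comes from. The sketch below uses that heuristic to flag oversized prompts; it is an approximation only, and a real application would count tokens with the provider’s own tokenizer.

```python
# Heuristic check against the current 131,072-token API context limit.
CONTEXT_LIMIT_TOKENS = 131_072
WORDS_PER_TOKEN = 0.75   # rough ratio; actual tokenization varies by text and model

def estimated_tokens(text: str) -> int:
    return int(len(text.split()) / WORDS_PER_TOKEN)

prompt = open("long_document.txt").read()   # hypothetical input file
if estimated_tokens(prompt) > CONTEXT_LIMIT_TOKENS:
    print("Prompt likely exceeds the Grok 3 API context window; split or summarize it first.")
```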
Musk initially positioned Grok as a “bold and unfiltered” model, ready to answer questions that other AIs avoid. Grok 1 and Grok 2 were indeed willing to be rude and use foul language, but on political issues they often avoided direct answers. One study even found the models tended toward left-wing views on topics such as equality and inclusion.