# Large Language Models
Starburst can use several different large language models to generate responses. By default, the following models are available in Starburst:
| Model | Avg cost per 100 queries | Average response time | Notes |
|---|---|---|---|
| OpenAI GPT-4.1 nano | US$0.05 | 2 seconds | The lowest-cost and fastest of these options. It can provide accurate, helpful responses in a wide variety of simple applications, but it will be noticeably less accurate on more complex or less common topics and may be less fluent in languages other than English. |
| OpenAI GPT-4.1 mini | US$0.20 | 6 seconds | (Recommended) The best OpenAI model for most cases. It is generally as capable as GPT-4.1 but at a lower cost. |
| OpenAI GPT-4.1 | US$1 | 8 seconds | The most capable of these OpenAI models, but not generally recommended over GPT-4.1 mini unless the modest accuracy improvement is worth the increased cost. |
| Google Gemini 3 Flash Preview | US$0.50 | 8 seconds | (Recommended) The best Google model for most cases, and likely to give the most accurate and correct responses of all the models listed here. |
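The per-100-query figures above make it straightforward to estimate your expected spend. The sketch below does that arithmetic; the price table is copied from this page, while the dictionary keys and the monthly query volume are hypothetical values you would replace with your own.

```python
# Rough cost estimator using the "Avg cost per 100 queries" figures from the
# table above. The model keys here are informal labels, not official API names.
COST_PER_100_QUERIES = {
    "gpt-4.1-nano": 0.05,
    "gpt-4.1-mini": 0.20,
    "gpt-4.1": 1.00,
    "gemini-3-flash-preview": 0.50,
}

def monthly_cost(model: str, queries_per_month: int) -> float:
    """Estimated monthly cost in US dollars for a given query volume."""
    return COST_PER_100_QUERIES[model] * queries_per_month / 100

# Example: 10,000 queries per month on the recommended GPT-4.1 mini.
print(f"US${monthly_cost('gpt-4.1-mini', 10_000):.2f}")  # US$20.00
```

At typical volumes the gap is substantial: the same 10,000 queries would cost about US$5 on GPT-4.1 nano but US$100 on GPT-4.1.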
## Not recommended
For comparison, here are a few OpenAI and Google models that produce high-quality responses but whose cost and speed make them poor choices.
While these models will produce some of the highest-quality responses available, the improvement over the recommended models above is relatively small, especially given their substantially slower generation and higher costs.
They are not provided as options by default, but you can always configure one as a custom model for your own classes if you want to try it.
| Model | Avg cost per 100 queries | Average response time |
|---|---|---|
| OpenAI GPT-5 mini | US$0.50 | 20 seconds |
| OpenAI GPT-5.2 | US$2 | very slow |
| Google Gemini 3 Pro Preview | US$2 | very slow |