API Configuration

Through “Model Settings” → “Add API”, you can connect any compatible model service to NextAI.
Both custom APIs and common providers such as Qwen, Gemini, Anthropic, and Ollama are supported.

1. Creating a New API Provider

  1. Open NextAI → Tap the top floating island → Open the menu → Go to Settings → API Settings.
  2. Tap the “+” icon at the top right to create a new API.

You’ll see a form with the following fields:

Field | Description
Enable | When turned on, this provider will be available in chat.
Type | Select the model type: Custom API / Qwen / Gemini / Anthropic / Ollama.
Display Name | Custom label for the provider (e.g., OpenAI, DeepSeek, or My Gateway).
API Address (Base URL) | The endpoint for model requests, e.g., https://api.openai.com/v1.
API Key | Copy your API key from the corresponding provider’s console.

💡 Tip:

  • “Custom API” is compatible with most OpenAI-style APIs (see the request example at the end of this section).
  • Selecting a specific provider (like Qwen or Gemini) may auto-fill the default address.

After completing the form, tap 💾 Save at the top right.
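
For reference, “Custom API” providers are expected to speak the OpenAI-style chat completions protocol: requests go to <Base URL>/chat/completions with the API Key sent as a Bearer token. A minimal sketch, using placeholder values for the URL, key, and model:

bash
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Any custom gateway that accepts this request shape can be added with the “Custom API” type; the "model" field corresponds to the Model ID you configure later under “Add and Enable Models”.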

2. Common Provider Examples

Provider | Base URL Example
OpenAI | https://api.openai.com/v1
DeepSeek | https://api.deepseek.com
Qwen (Tongyi) | https://dashscope.aliyuncs.com/compatible-mode/v1
Moonshot | https://api.moonshot.cn/v1
Zhipu (GLM) | https://open.bigmodel.cn/api/paas/v4
Ollama | Local deployment, e.g. http://127.0.0.1:11434/api
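
For a local Ollama deployment, you can confirm the server is running and see which models are installed before adding it (this assumes Ollama’s default port 11434):

bash
curl http://127.0.0.1:11434/api/tags

A JSON list of installed models means the address is reachable; if the call fails, start Ollama and pull a model first.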

3. Save and Use

  1. Tap Save, then return to the chat page.
  2. You can now select the newly added provider when starting a new conversation.
    If prompted with “Please add a model provider first,” check that:
    • The provider is enabled;
    • The Base URL and API Key are correct;
    • The network can reach the provider’s domain (some networks block external APIs; see the quick check below).
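
For the last point, a quick reachability check (substitute your provider’s Base URL for the placeholder):

bash
curl -I https://api.openai.com/v1/models

Any HTTP response, even 401 Unauthorized, means the domain is reachable; a timeout or DNS error suggests the network is blocking the API.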

4. Security Recommendations

  • API Keys are stored locally on your device and never uploaded to any server.
  • Use separate keys for each device and rotate them regularly.
  • Double-check for spaces or newlines when pasting keys.
  • Avoid putting proxy URLs directly into Base URL fields.

5. Test Connectivity (Optional)

You can test your API connection with curl (replace the URL with your provider’s Base URL and YOUR_API_KEY with your key):

bash
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"

If you receive a JSON model list, your connection works.
If you get 401 / 403 / 404 / 5xx errors, refer to the troubleshooting guide.

6. Add and Enable Models

After adding an API, you must configure one or more models under it.

  1. Go to Settings → API Settings.
  2. Under the target provider (e.g., Qwen), tap “+ Add Model”.
  3. Fill in the form as follows:
Field | Description
Enable | Enable to make the model available for chatting.
Multimodal | Turn on if the model supports image or voice input.
Display Name | Shown in chat (e.g., plus, turbo, glm-4).
Max Token Count | Controls context length (default: 2048).
Model ID | Required: the model’s unique ID (e.g., gpt-4-turbo, qwen-turbo, gemini-pro).

Tap 💾 Save when finished.

Common Model IDs

Provider | Model ID | Description
OpenAI | gpt-4-turbo, gpt-3.5-turbo | Supports multimodal inputs, strong general performance
DeepSeek | deepseek-chat, deepseek-coder | Optimized for Chinese & coding
Qwen | qwen-turbo, qwen-plus | Tongyi Qwen family models
Gemini | gemini-pro | Google multimodal model
Anthropic | claude-3-opus, claude-3-haiku | Strong English reasoning
Ollama | llama3, mistral, phi3 | Local models, offline support
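
If your provider or exact model ID isn’t listed above, most OpenAI-compatible services also expose a model list you can query. A sketch assuming such an endpoint and the jq tool, with placeholder URL and key:

bash
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY" | jq -r '.data[].id'

Copy the exact ID string into the Model ID field; providers are strict about spelling and version suffixes.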

Tips

  • Each API can have multiple models for quick switching.
  • Each model belongs to exactly one API; deleting an API also deletes its models.
  • You can configure multiple APIs for the same provider (e.g., with different gateways).
  • For multimodal models (like GPT-4 Vision or Gemini Pro), enable the Multimodal toggle (see the sketch below).
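
As a reference for the Multimodal toggle, OpenAI-style vision models accept messages whose content mixes text and image parts. Field names vary by provider, so treat this as a sketch of the OpenAI shape only; the URLs and model ID are placeholders:

bash
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}}
      ]
    }]
  }'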

Proxy & Network Tips

  • If you use a local proxy (e.g. 127.0.0.1:7890), ensure your system settings allow NextAI to access it.
  • Corporate or campus networks may restrict outbound API requests — try a home or mobile network.
  • Don’t put proxy addresses in Base URL fields — always use the provider’s official API domain.
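
To check whether requests succeed through a local proxy without changing the Base URL, you can route curl through the proxy explicitly (the proxy port and target URL are placeholders):

bash
curl --proxy http://127.0.0.1:7890 https://api.openai.com/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

If this works but NextAI cannot connect, the issue is the proxy or system network settings rather than the Base URL or key.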