Custom Providers
Connect Cyberstrike to any OpenAI-compatible API endpoint or custom provider.
📸 SCREENSHOT: custom-provider-config.png
Custom provider configuration
OpenAI-Compatible APIs
Many services provide OpenAI-compatible endpoints:
| Service | Base URL |
|---|---|
| Together AI | https://api.together.xyz/v1 |
| Anyscale | https://api.endpoints.anyscale.com/v1 |
| Fireworks | https://api.fireworks.ai/inference/v1 |
| Groq | https://api.groq.com/openai/v1 |
| DeepInfra | https://api.deepinfra.com/v1/openai |
| Perplexity | https://api.perplexity.ai |
Basic Configuration
Generic OpenAI-Compatible
{ "provider": { "openai-compatible": { "name": "Custom Provider", "options": { "baseURL": "https://api.example.com/v1", "apiKey": "{env:CUSTOM_API_KEY}" } } }}Command Line
cyberstrike --model openai-compatible/model-nameProvider Examples
Groq
Fast inference with Groq:
{ "provider": { "groq": { "name": "Groq", "options": { "baseURL": "https://api.groq.com/openai/v1", "apiKey": "{env:GROQ_API_KEY}" } } }, "model": "groq/llama-3.3-70b-versatile"}Available models:
- llama-3.3-70b-versatile
- llama-3.1-8b-instant
- mixtral-8x7b-32768
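For example, with the configuration above in place, you can start a session against one of the listed models directly from the command line:

```bash
# Uses the groq provider defined in the config above
cyberstrike --model groq/llama-3.3-70b-versatile
```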
Together AI
{ "provider": { "together": { "name": "Together AI", "options": { "baseURL": "https://api.together.xyz/v1", "apiKey": "{env:TOGETHER_API_KEY}" } } }, "model": "together/meta-llama/Llama-3.3-70B-Instruct-Turbo"}Fireworks AI
{ "provider": { "fireworks": { "name": "Fireworks", "options": { "baseURL": "https://api.fireworks.ai/inference/v1", "apiKey": "{env:FIREWORKS_API_KEY}" } } }, "model": "fireworks/accounts/fireworks/models/llama-v3p3-70b-instruct"}DeepInfra
{ "provider": { "deepinfra": { "name": "DeepInfra", "options": { "baseURL": "https://api.deepinfra.com/v1/openai", "apiKey": "{env:DEEPINFRA_API_KEY}" } } }, "model": "deepinfra/meta-llama/Llama-3.3-70B-Instruct"}OpenRouter
Access multiple providers through OpenRouter:
{ "provider": { "openrouter": { "options": { "apiKey": "{env:OPENROUTER_API_KEY}" } } }, "model": "openrouter/anthropic/claude-sonnet-4"}Available Models
OpenRouter provides access to:
- Anthropic (Claude)
- OpenAI (GPT-4)
- Google (Gemini)
- Meta (Llama)
- Mistral
- And many more
Model Selection
```bash
cyberstrike --model openrouter/openai/gpt-4o
cyberstrike --model openrouter/anthropic/claude-sonnet-4
cyberstrike --model openrouter/google/gemini-pro
```

Self-Hosted Solutions
vLLM
{ "provider": { "vllm": { "name": "vLLM Server", "options": { "baseURL": "http://localhost:8000/v1", "apiKey": "dummy" } } }}Start vLLM:
```bash
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-3.3-70B-Instruct \
  --port 8000
```
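Before pointing Cyberstrike at the server, it can help to confirm the OpenAI-compatible API is up. A minimal check against the standard /v1/models endpoint; the placeholder "dummy" key matches the configuration above:

```bash
# Should return a JSON list that includes the model passed to --model above
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer dummy"
```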
{ "provider": { "textgen": { "name": "Text Generation WebUI", "options": { "baseURL": "http://localhost:5000/v1", "apiKey": "dummy" } } }}LM Studio
{ "provider": { "lmstudio": { "name": "LM Studio", "options": { "baseURL": "http://localhost:1234/v1", "apiKey": "lm-studio" } } }}Azure OpenAI
{ "provider": { "azure": { "name": "Azure OpenAI", "options": { "baseURL": "https://your-resource.openai.azure.com/openai/deployments/your-deployment", "apiKey": "{env:AZURE_API_KEY}", "apiVersion": "2024-02-15-preview" } } }}Environment Variables
```bash
export AZURE_API_KEY="..."
export AZURE_RESOURCE_NAME="your-resource"
export AZURE_DEPLOYMENT_NAME="your-deployment"
```

Advanced Configuration
Custom Headers
{ "provider": { "custom": { "options": { "baseURL": "https://api.example.com/v1", "apiKey": "your-key", "headers": { "X-Custom-Header": "value" } } } }}Timeout and Retries
{ "provider": { "custom": { "options": { "baseURL": "https://api.example.com/v1", "timeout": 120000, "maxRetries": 3 } } }}Proxy Configuration
{ "provider": { "custom": { "options": { "baseURL": "https://api.example.com/v1", "httpAgent": { "proxy": "http://proxy.example.com:8080" } } } }}Model Mapping
Map custom model names:
{ "provider": { "custom": { "options": { "baseURL": "https://api.example.com/v1", "modelMapping": { "gpt-4": "custom-gpt-4-equivalent", "claude-sonnet": "custom-claude-equivalent" } } } }}Testing Custom Providers
Testing Custom Providers

Verify Connection
```bash
cyberstrike --model custom/model-name
```

Test Message
> Hello, can you confirm you're working?Check Model List
Some providers support listing models:
```bash
curl https://api.example.com/v1/models \
  -H "Authorization: Bearer $API_KEY"
```
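If the provider follows the OpenAI response schema (a top-level data array of objects with an id field), the model IDs can be pulled out directly; this assumes jq is installed:

```bash
# Extract just the model IDs from an OpenAI-style /v1/models response
curl -s https://api.example.com/v1/models \
  -H "Authorization: Bearer $API_KEY" | jq -r '.data[].id'
```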
Troubleshooting

Authentication Failed
```
Error: 401 Unauthorized
```

Verify the following; a standalone check is shown below the list:
- API key is correct
- Key has required permissions
- Authorization header format
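A quick way to test the key and header format outside of Cyberstrike is to call the models endpoint directly and inspect the status code (adjust the URL to your provider's base URL):

```bash
# Prints only the HTTP status code: 200 means the key is accepted, 401 points back to the key
curl -s -o /dev/null -w "%{http_code}\n" \
  https://api.example.com/v1/models \
  -H "Authorization: Bearer $API_KEY"
```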
Model Not Found
```
Error: Model not found
```

Check:
- Model ID matches provider’s format
- Model is available on the provider
- Correct provider is selected
Connection Timeout
```
Error: Request timeout
```

Solutions:
- Increase timeout value
- Check network connectivity
- Verify base URL is correct
Invalid Response Format
```
Error: Unexpected response format
```

The provider may not be fully OpenAI-compatible. Check the following (a reference response shape follows the list):
- API documentation
- Response structure
- Required headers
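For comparison, a chat completions response from a fully OpenAI-compatible endpoint looks roughly like the following (values are illustrative); a provider that deviates substantially from this shape is a likely cause of the error:

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "model-name",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello!" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12 }
}
```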
Caution
Not all OpenAI-compatible APIs support all features. Tool calling and streaming may vary by provider.
Related Documentation
- Providers Overview - All providers
- Ollama - Local models
- Configuration - Full options