Currently, the Swiftor AI features (like the AI Agent) are primarily designed to work with the models provided through the Swiftor AI Gateway. These models are accessed using the `swiftor/` prefix (e.g., `swiftor/gemini-2.5-pro`).
This ensures seamless integration, optimized performance, and access control based on your subscription tier.
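For illustration, selecting a gateway model usually comes down to passing the prefixed model name in a request. This is only a sketch: the gateway URL, authentication header, and OpenAI-style request shape below are assumptions for the example, not Swiftor's documented API.

```python
import requests

# Hypothetical gateway URL and auth scheme -- not Swiftor's documented API.
GATEWAY_URL = "https://gateway.swiftor.example/v1/chat/completions"

response = requests.post(
    GATEWAY_URL,
    headers={"Authorization": "Bearer <your-swiftor-token>"},
    json={
        # The swiftor/ prefix selects a model provided by the Swiftor AI Gateway.
        "model": "swiftor/gemini-2.5-pro",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```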
While support for custom endpoints is planned for advanced users and specific use cases, we highly recommend leveraging the integrated Swiftor AI models provided through our subscription plans for the optimal experience. Here's why:
Using the built-in models ensures you benefit directly from our ongoing efforts to optimize the AI experience within Swiftor.
We are planning to introduce support for connecting the Swiftor AI Agent and potentially other AI features to your own custom model endpoints or local models.
This will likely involve configuring specific secrets within your Swiftor Workspace deployment (the special VM automatically created for managing your environment).
To enable custom model connections, you will likely need to set the following secrets for your Workspace deployment (a sketch of how a workspace process might consume them follows the list):

- `AI_BASE_URL`: The base URL of your custom AI model endpoint (e.g., `http://localhost:11434/v1` for a local Ollama instance exposed within the workspace, or your self-hosted API endpoint).
- `AI_MODEL`: The specific model identifier expected by your custom endpoint (e.g., `llama3:instruct`, `mistral`).

Note: An `AI_API_KEY` secret might also be required depending on the authentication needs of your custom endpoint.
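As a minimal sketch of how these three secrets fit together, the snippet below assumes the secrets surface as environment variables inside the Workspace VM (an assumption; the actual mechanism isn't specified here) and that the endpoint speaks the OpenAI-compatible chat-completions format, as Ollama's `/v1` route does.

```python
import os
import requests

# Assumption: Workspace secrets are exposed as environment variables in the VM.
base_url = os.environ["AI_BASE_URL"]    # e.g. http://localhost:11434/v1
model = os.environ["AI_MODEL"]          # e.g. llama3:instruct
api_key = os.environ.get("AI_API_KEY")  # optional, depends on your endpoint

headers = {}
if api_key:
    headers["Authorization"] = f"Bearer {api_key}"

# Ollama's /v1 route implements the OpenAI-compatible chat-completions API.
resp = requests.post(
    f"{base_url}/chat/completions",
    headers=headers,
    json={"model": model, "messages": [{"role": "user", "content": "ping"}]},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```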
Once available, you will set the `AI_BASE_URL` and `AI_MODEL` (and potentially `AI_API_KEY`) secrets for your Workspace VM. You can manage VM secrets via the Virtual Machines API, or potentially through the dashboard in the future.
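Purely as a hypothetical sketch of what setting these secrets through the Virtual Machines API could look like (every URL, route, and field name below is invented for illustration; the real API is not documented here):

```python
import requests

# All endpoints and payload fields here are hypothetical -- consult the
# Virtual Machines API documentation once custom endpoints ship.
SWIFTOR_API = "https://api.swiftor.example"
vm_id = "<your-workspace-vm-id>"

secrets = {
    "AI_BASE_URL": "http://localhost:11434/v1",
    "AI_MODEL": "llama3:instruct",
    # "AI_API_KEY": "<token>",  # only if your endpoint requires authentication
}

for name, value in secrets.items():
    requests.put(
        f"{SWIFTOR_API}/vms/{vm_id}/secrets/{name}",
        headers={"Authorization": "Bearer <your-swiftor-token>"},
        json={"value": value},
        timeout=30,
    ).raise_for_status()
```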
**Stay Tuned!**

This feature is under development. Keep an eye on our announcements for updates on its availability and specific implementation details.