Using Local or Custom AI Models

Current Support

Swiftor's AI features (such as the AI Agent) are currently designed to work with the models provided through the Swiftor AI Gateway. These models are accessed using the swiftor/ prefix (e.g., swiftor/gemini-2.5-pro).

This ensures seamless integration, optimized performance, and access control based on your subscription tier.
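
As a rough sketch, selecting one of these models from your own tooling might look like the following. The swiftor/ model identifier comes from this page; the gateway URL, the SWIFTOR_API_KEY variable name, and the assumption of an OpenAI-compatible chat completions interface are illustrative only:

```typescript
import OpenAI from "openai";

// Hedged sketch only: the "swiftor/" model naming is documented above, but
// the base URL and API key variable below are hypothetical placeholders.
const client = new OpenAI({
  baseURL: "https://gateway.swiftor.example/v1", // hypothetical gateway URL
  apiKey: process.env.SWIFTOR_API_KEY,           // hypothetical key name
});

const completion = await client.chat.completions.create({
  model: "swiftor/gemini-2.5-pro", // tier-gated model from the gateway
  messages: [{ role: "user", content: "Explain this stack trace." }],
});

console.log(completion.choices[0].message.content);
```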

Benefits of Using Swiftor AI Models

While support for custom endpoints is planned for advanced users and specific use cases, we strongly recommend the integrated Swiftor AI models available through our subscription plans for the best experience. Here's why:

  • Curated Model Access: Your subscription tier grants access to a wide range of standard and premium models (23+, depending on the tier), pre-configured and ready to use with no setup hassle.
  • Optimized System Prompts: We invest heavily in prompt engineering for the models available through our gateway. These system prompts are carefully crafted to enhance the AI's capabilities within the Swiftor environment, particularly for tasks involving tool use, security analysis, and interaction with deployments.
  • Managed Infrastructure: Using our gateway means you don't need to worry about hosting, managing, or scaling your own AI model inference endpoints.
  • Predictable Usage: Rate limits and daily spending budgets associated with your tier provide clear usage guidelines and cost control.
  • All-in-One Interface: The true value lies in the seamless integration within the Swiftor platform, offering a unified interface for development, deployment, testing, and AI assistance.

Using the built-in models ensures you benefit directly from our ongoing efforts to optimize the AI experience within Swiftor.

Future Plans: Custom Model Endpoints

We are planning to introduce support for connecting the Swiftor AI Agent and potentially other AI features to your own custom model endpoints or local models.

This will likely involve configuring specific secrets within your Swiftor Workspace deployment (the special VM automatically created for managing your environment).

Planned Configuration Secrets

To enable custom model connections, you will likely need to set the following secrets for your Workspace deployment:

  • AI_BASE_URL: The base URL of your custom AI model endpoint (e.g., http://localhost:11434/v1 for a local Ollama instance exposed within the workspace, or your self-hosted API endpoint).
  • AI_MODEL: The specific model identifier expected by your custom endpoint (e.g., llama3:instruct, mistral).

Note: An AI_API_KEY secret might also be required, depending on the authentication needs of your custom endpoint (see the example values below).
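
For illustration, a workspace pointing at a local Ollama instance might set values like these. The secret names come from the list above; the values are placeholders:

```
AI_BASE_URL=http://localhost:11434/v1
AI_MODEL=llama3:instruct
AI_API_KEY=<only-if-your-endpoint-requires-auth>
```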

How it Will Work (Anticipated)

  1. Deploy Your Model: Host your desired LLM using a tool like Ollama, vLLM, or a custom API server, ensuring it is reachable from within your Swiftor Workspace network environment (see the sketch after this list).
  2. Configure Secrets: Set the AI_BASE_URL and AI_MODEL (and, if needed, AI_API_KEY) secrets for your Workspace VM. You can manage VM secrets via the Virtual Machines API or, in the future, potentially through the dashboard.
  3. Select Custom Endpoint: The AI Agent interface will be updated to allow selecting your configured custom endpoint alongside the standard Swiftor models.
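
For step 1, a quick sanity check is to send a request directly to your endpoint before wiring it into Swiftor. Ollama, for example, exposes an OpenAI-compatible API under /v1 on port 11434, so a minimal test matching the example secret values above could look like this:

```typescript
// Sanity-check a local Ollama endpoint with the same OpenAI-compatible
// request shape an agent would send. The model name must match AI_MODEL.
const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3:instruct",
    messages: [{ role: "user", content: "Reply with OK if you can hear me." }],
  }),
});

const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```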

Stay Tuned!

This feature is under development. Keep an eye on our announcements for updates on its availability and specific implementation details.