OpenAI-compatible API, first-party SDKs for Python and TypeScript, a CLI tool, and a VS Code extension. Everything you need to migrate from OpenAI and start building in minutes.
Alveare's inference API follows the OpenAI chat completions format. If your application currently calls the OpenAI API, you can switch to Alveare by changing two values: the base URL and the API key. Your request and response formats remain identical.
This is not a partial compatibility layer. We support the full chat completions interface including system messages, multi-turn conversations, streaming responses, temperature and top-p parameters, stop sequences, max tokens, and JSON mode. The goal is that your existing code works without modification.
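To make the "two values" claim concrete, here is a stdlib-only sketch of the only provider-specific pieces of a chat completions call. The `/v1` path segment is an assumption based on the OpenAI convention; the Alveare host comes from the migration guide below.

```python
def endpoint_config(provider: str, api_key: str) -> tuple:
    """Return the URL and headers for a chat completions request.
    Only these two values differ between providers; the request body
    and response format are identical."""
    bases = {
        "openai": "https://api.openai.com/v1",
        "alveare": "https://api.alveare.ai/v1",  # /v1 path is an assumption
    }
    url = f"{bases[provider]}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return url, headers
```

Everything else in the request — the messages array, sampling parameters, stop sequences, JSON mode — is sent unchanged.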
The Alveare Python SDK provides a typed, ergonomic interface on top of the API. It handles authentication, retries, streaming, and error handling. Install it with pip and start making requests in three lines.
The Python SDK supports Python 3.9+ and has zero required dependencies beyond the standard library.
Optional dependencies for async support (httpx) and streaming (sseclient) are installed with
pip install "alveare[async,stream]" (the quotes prevent shells like zsh from expanding the brackets).
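To illustrate what the SDK wraps, here is a minimal stdlib-only client sketch. The class and method names are illustrative, not the SDK's actual surface, and the real SDK adds retries, typed responses, streaming, and error handling on top.

```python
import json
import urllib.request


class AlveareClient:
    """Minimal sketch of an Alveare chat client; names and the /v1
    endpoint path are assumptions, not the official SDK interface."""

    def __init__(self, api_key: str, base_url: str = "https://api.alveare.ai/v1"):
        self.base_url = base_url.rstrip("/")
        self.headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        }

    def chat(self, model: str, messages: list, **params) -> dict:
        """POST an OpenAI-format chat completions request and return
        the parsed JSON response body."""
        body = {"model": model, "messages": messages, **params}
        req = urllib.request.Request(
            f"{self.base_url}/chat/completions",
            data=json.dumps(body).encode("utf-8"),
            headers=self.headers,
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
```

With the real SDK, usage collapses to the three lines mentioned above: import the client, construct it with your API key, and call the chat method.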
The TypeScript SDK provides full type safety, native Promise support, and works in Node.js, Deno, and Bun. It exports both ESM and CommonJS builds. Install with npm, yarn, or pnpm.
The Alveare CLI lets you manage specialists, make inference requests, and monitor usage from the terminal. It is useful for testing prompts, scripting batch jobs, and CI/CD integration.
The Alveare VS Code extension brings inference testing into your editor. Test prompts, preview specialist outputs, and monitor your hive without leaving your development environment.
Select text in any file, right-click, and send it to a specialist. The response appears in a side panel with latency and token count.
View all your configured specialists in the sidebar. Edit system prompts, generation parameters, and validation rules directly in VS Code.
A status bar item shows your current request count and remaining allocation. Click it to open a detailed usage view with per-specialist breakdowns.
Compare responses from different specialists or different parameter configurations side by side. Useful for prompt engineering and quality testing.
Install from the VS Code Marketplace: search for "Alveare" or run
ext install alveare.alveare-vscode
from the command palette.
Alveare sends webhook notifications for billing and usage events. Configure webhook endpoints in the dashboard and receive real-time notifications when usage thresholds are hit, billing events occur, or specialists change status.
All webhook payloads include an HMAC-SHA256 signature for verification. The signing secret is generated when you configure the webhook endpoint and can be rotated at any time.
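Verification amounts to recomputing the HMAC over the raw payload bytes and comparing in constant time. The hex encoding and the example secret below are assumptions; check your webhook endpoint settings for the exact signature format and header name.

```python
import hashlib
import hmac


def verify_webhook(secret: str, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    to the received signature. hmac.compare_digest runs in constant
    time, which avoids timing side channels."""
    expected = hmac.new(secret.encode("utf-8"), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)


# Example: a valid signature verifies; a tampered payload does not.
secret = "whsec_example"  # hypothetical signing secret
payload = b'{"event": "usage.threshold", "value": 0.9}'
sig = hmac.new(secret.encode("utf-8"), payload, hashlib.sha256).hexdigest()
assert verify_webhook(secret, payload, sig)
assert not verify_webhook(secret, b'{"event": "tampered"}', sig)
```

Always verify against the raw bytes of the request body, not a re-serialized copy, since JSON re-encoding can change key order or whitespace and break the signature.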
This is a summary of the migration process. The full step-by-step guide is available in the documentation.
Create an Alveare account and choose a plan. Your hive is provisioned automatically and you receive an API key within 60 seconds.
Define specialists that match your current OpenAI use cases. For each specialist, set a system prompt, temperature, max tokens, and output format. Use the dashboard or CLI.
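As an illustration of the settings listed above, a specialist definition might look like the following. The field names and the specialist name are hypothetical, chosen only to show the shape of the configuration; consult the dashboard or CLI reference for the actual schema.

```python
# Hypothetical specialist configuration: field names are illustrative.
summarizer = {
    "name": "support-summarizer",
    "system_prompt": "Summarize the customer ticket in two sentences.",
    "temperature": 0.3,      # low temperature for consistent summaries
    "max_tokens": 200,
    "output_format": "json", # JSON mode, per the compatibility list above
}
```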
Change api.openai.com to api.alveare.ai and replace your OpenAI key with your Alveare key. If you use the OpenAI Python or Node SDK, change the base_url parameter.
Run your existing integration tests. Because Alveare uses the same request/response format, most tests pass without modification. Adjust any tests that assert on specific model names.
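A typical portable test asserts on the chat completions response shape rather than a provider-specific model name. The sketch below shows that kind of check against the standard response structure; the sample values are illustrative.

```python
def check_chat_response(resp: dict) -> None:
    """Assert the OpenAI chat completions response shape without
    pinning a provider-specific model name."""
    assert isinstance(resp["id"], str)
    assert resp["object"] == "chat.completion"
    choice = resp["choices"][0]
    assert choice["message"]["role"] == "assistant"
    assert isinstance(choice["message"]["content"], str)
    assert "total_tokens" in resp["usage"]


# Illustrative response in the standard format; field values are made up.
sample = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello!"},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}
check_chat_response(sample)
```

Tests written this way pass unchanged against either provider's responses.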
Push the URL change to production. Monitor the usage dashboard and specialist latency for the first hour. If latency or quality does not meet expectations, adjust specialist parameters without redeploying.
Total migration time for a typical integration: under 15 minutes. No code rewrite. No SDK change. No downtime. If you run into issues, our support team is available to help with the migration.
Get your API key and make your first request in under 5 minutes. Full SDK documentation included.
Get Started Free