At Scalytics, we’ve always believed that the next frontier of AI lies in open collaboration, transparent infrastructure, and developer-first tooling. Today, we’re thrilled to unveil Scalytics Connect – Community Edition, our full enterprise-grade inference platform released under the Apache 2.0 license. What was once a turnkey, self-hosted engine behind Fortune-500 deployments is now freely available for you to clone, customize, and extend as your own private AI operations stack.
Why Community Matters
Enterprises demand high throughput, sub-second latency, and ironclad security when running large language and vision models. Yet developers crave the freedom to innovate—spin up new models, craft custom agents, and integrate AI directly into the tools they use every day. Community Edition bridges that gap:
- Zero Vendor Lock-In: Own your hardware, your data, and your AI roadmap.
- OpenAI-Compatible API: Drop-in support for `/v1/chat/completions` and `/v1/images/generations` lets you map existing tools—VS Code plugins, Python scripts, or webhooks—straight to your self-hosted endpoint.
- Modular Architecture: vLLM-driven inference, vector services, Live Search SSE streams, and GPU monitoring live in separate, easy-to-extend services.
By open-sourcing our entire stack, we’re inviting you to contribute new backends, refine rate-limiting strategies, or prototype novel modalities—audio, video, you name it.
What does it look like?
What’s Inside the Repo
When you clone github.com/scalytics/Scalytics-Community-Edition, you’ll find:
- Admin UI & Orchestration Scripts: Spin up the control plane in minutes with `./start-app.sh`. Manage API keys, toggle rate limits, and assign your preferred local model as “Active.”
- OpenAI-Compatible Endpoints: All chat and image-generation APIs mirror the OpenAI request/response format. Whether you’re streaming token by token or generating base64-encoded images, the integration is seamless.
- Deep Search & Vector Service: Orchestrate multi-step research workflows via `/v1/deepsearch`, leverage embeddings at `/v1/vector/embeddings`, and build secure, precise knowledge-graph applications.
- vLLM Inference Engine: Our high-throughput, memory-efficient serving layer dynamically adapts to each model’s full context window—ensuring you never hit silent truncation or context-size errors.
- Comprehensive Docs & Samples: From integrating with IDE extensions like Cline to powering custom chatbots, our `/docs` folder teaches you best practices for prompt design, temperature tuning, and iterative refinement.
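Token-by-token streaming arrives as Server-Sent Events. The exact event names your endpoint emits aren’t shown here, so this sketch assumes the common OpenAI-style convention of `data:` lines carrying JSON chunks, terminated by `data: [DONE]`:

```python
import json

def parse_sse_events(raw: str):
    """Collect parsed JSON payloads from an SSE response body.

    Assumes OpenAI-style framing: each event is a 'data: ...' line and the
    stream ends with 'data: [DONE]'. Adjust for the events your server emits.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separators, comments, and event-name lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        events.append(json.loads(payload))
    return events

sample = (
    'data: {"choices": [{"delta": {"content": "Hel"}}]}\n\n'
    'data: {"choices": [{"delta": {"content": "lo"}}]}\n\n'
    "data: [DONE]\n"
)
tokens = [e["choices"][0]["delta"].get("content", "") for e in parse_sse_events(sample)]
print("".join(tokens))  # -> Hello
```

In a real client you would feed the function incrementally from the response stream rather than from a complete string, but the framing logic is the same.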
Quickstart in Three Commands
Ready to run? Here’s how to stand up your own instance:
```sh
git clone https://github.com/scalytics/Scalytics-Community-Edition.git
cd Scalytics-Community-Edition
cp .env.example .env   # set SCALYTICS_API_KEY, MODEL_PATHS, etc.
./start-app.sh         # launch admin UI, chat endpoint, GPU monitor
```
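A misconfigured `.env` is the most common first-run stumble. This small stdlib sketch sanity-checks the file before you launch; the only key it requires is `SCALYTICS_API_KEY` from the step above, so extend `REQUIRED` with whatever your deployment needs:

```python
import os

REQUIRED = ("SCALYTICS_API_KEY",)  # extend with the keys your setup needs

def load_env(path=".env"):
    """Parse simple KEY=VALUE lines; comments and blank lines are skipped."""
    env = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"').strip("'")
    return env

def check_env(env):
    """Return the required keys that are missing or empty."""
    return [k for k in REQUIRED if not env.get(k)]

if os.path.exists(".env"):
    missing = check_env(load_env(".env"))
    if missing:
        raise SystemExit(f"Missing in .env: {', '.join(missing)}")
    print(".env looks good")
```

Run it from the repository root after the `cp` step; it fails fast with the names of any unset keys instead of letting the services start half-configured.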
Join the Community
Scalytics Connect – Community Edition isn't just code; it's a community. Fork the repository, raise issues, share custom model bundles, or propose pull requests to improve our vector indexer, add new SSE events, or integrate new LLM backends. We combine the stability of enterprise software with the flexibility of open source, so developers can build the future of private, scalable AI according to their preferences and under their control.
About Scalytics
Built on distributed computing principles and modern virtualization, Scalytics Connect orchestrates resource allocation across heterogeneous hardware configurations, optimizing for throughput and latency. Our platform integrates seamlessly with existing enterprise systems while enforcing strict isolation boundaries, ensuring your proprietary algorithms and data remain entirely within your security perimeter.
With features like autodiscovery and index-based search, Scalytics Connect delivers a forward-looking, transparent framework that supports rapid product iteration, robust scaling, and explainable AI. By combining agents, data flows, and business needs, Scalytics helps organizations overcome traditional limitations and fully take advantage of modern AI opportunities.
If you need professional support from our team of industry-leading experts, you can always reach out to us via Slack or email.