Scalytics Connect
Scalytics is excited to announce the April 2025 update for Scalytics Connect, our privacy-focused AI operating stack. This release introduces significant enhancements designed to boost developer productivity, strengthen security, improve administrative control, and deliver more powerful, reliable AI interactions for your enterprise.
Apache Wayang is at the heart of our products. Apache Wayang (Incubating) is the only cross-platform open-source data processing engine. Application developers specify their applications using the Apache Wayang API.
What's New (April 2025)
- Seamless Integration with Existing Tools:
- OpenAI-Compatible API: We've launched a new API endpoint (/v1/chat/completions) fully compatible with the OpenAI standard. This allows developers to effortlessly integrate Scalytics Connect with the vast ecosystem of tools and libraries built for OpenAI, streamlining development and deployment.
- AI-Enhanced Coding with Private Models: We are officially introducing support for leveraging Scalytics Connect's private models directly within your development workflow for AI-assisted coding, including integration with the Cline VSCode plugin.
- Expanded Model Compatibility: Support for the popular GGUF model format has been added, giving you access to a wider range of modern, efficient language models.
- Flexible Deployment: Enhanced dynamic URL detection and CORS handling provide greater flexibility when deploying Scalytics Connect in diverse network environments.
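Because the new endpoint follows the OpenAI standard, any OpenAI-style client can talk to it. The sketch below builds such a request with only the Python standard library; the base URL, API key, and model name are deployment-specific placeholders, not values shipped with Scalytics Connect:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Assemble an OpenAI-style chat completion request.

    base_url, api_key, and model are placeholders for your own deployment.
    """
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return url, headers, body

url, headers, body = build_chat_request(
    "https://connect.example.com", "YOUR_API_KEY", "local-model",
    [{"role": "user", "content": "Hello"}],
)
# To actually send it (requires a reachable Scalytics Connect deployment):
#   req = urllib.request.Request(url, data=body, headers=headers)
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)["choices"][0]["message"]["content"]
```

Since the wire format is the standard one, existing OpenAI SDKs should also work by pointing their base URL at your Scalytics Connect instance.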
- More Transparent & Controllable AI:
- Smarter Conversations: Advanced context management techniques, including summarization and clear context window indicators, ensure longer, more coherent, and reliable chat sessions.
- Fine-Grained Control: Implement global system prompts and user-specific instructions to tailor model behavior precisely to your needs. New transparency settings offer clearer insights into model operations.
- Refined Outputs: Benefit from enhanced model pre-processing, output sanitization for structured reasoning (ideal for agentic workflows), and model-specific response filtering for more accurate and relevant results.
- Improved User Interaction: Enjoy more robust chat streaming and the ability to gracefully stop ongoing AI responses with the new "Stop" button.
- Collaboration & Continuity: Share private LLM chats securely amongst users and benefit from chat persistence, ensuring conversations remain available even if models are updated or changed.
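The summarization-based context management described above can be sketched as follows. This is a conceptual illustration only: the function names, token counter, and summarizer are our own placeholders, not the product's internal API.

```python
def fit_context(messages, max_tokens, count_tokens, summarize):
    """Keep the newest messages that fit the context window; collapse
    older ones into a single summary turn (illustrative sketch)."""
    kept, used = [], 0
    # Walk backwards from the newest message, keeping what still fits.
    for msg in reversed(messages):
        tokens = count_tokens(msg)
        if used + tokens > max_tokens:
            break
        kept.insert(0, msg)
        used += tokens
    # Messages that no longer fit are condensed into one summary message.
    dropped = messages[: len(messages) - len(kept)]
    if dropped:
        kept.insert(0, {"role": "system", "content": summarize(dropped)})
    return kept

# Example with a toy tokenizer (word count) and a trivial summarizer.
msgs = [{"role": "user", "content": f"msg {i} word"} for i in range(4)]
out = fit_context(
    msgs, 6,
    count_tokens=lambda m: len(m["content"].split()),
    summarize=lambda d: f"summary of {len(d)} messages",
)
```

The trade-off sketched here is the usual one: recent turns stay verbatim for coherence, while older turns survive only as a compressed summary, which keeps long sessions within the model's window.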
- Enhanced Security & Administration:
- Proactive Threat Protection: New middleware automatically filters and blocks common cyberattacks, strengthening your security posture. Access controls via Nginx have also been hardened.
- Simplified User Management: A new search function makes finding and managing users easier.
- Better Resource Visibility: Administrators can now easily view GPU assignments and model file sizes directly in the model list, aiding in resource management and planning. Protected user settings and improved statistics aggregation further enhance administrative control.
- Optimized Performance & Hardware Support:
- GPU Optimization: Automatic detection of the latest GPU hardware enables optimized performance tuning on supported systems.
- Efficient Resource Use: An improved model balancing and queuing system ensures better performance and more efficient utilization of your hardware resources. Hardware monitoring capabilities have also been reinstated and enhanced.
Platform Stability and Fixes
Alongside these new features, this release includes numerous improvements and bug fixes across the platform, enhancing API stability, refining the user interface, addressing core system issues (including Llama.cpp and deployment processes), updating dependencies, and improving overall code quality. We've also updated our documentation to reflect these changes.
This comprehensive update underscores Scalytics' commitment to providing a powerful, secure, and user-friendly AI platform tailored for enterprise needs. We value your feedback and look forward to continuing to enhance Scalytics Connect.
Scalytics Federated AI MCP features:
- The AI-based optimizer automatically selects an optimal configuration of data processing frameworks, such as Java Streams or Apache Spark, on which applications are executed.
- Blossom Core carries out program execution. It abstracts the various platform-specific APIs and coordinates cross-platform communication.
- Applications can run on multiple data processing platforms without changing the native code of the underlying platforms.
- Federated data processing: In-situ processing in different sites without moving raw data outside their origin.
- Build and execute cross platform machine learning pipelines in a unified way.
- NEW: Federated Machine Learning
- Federated analytics by integrating multiple platforms across silos
- Developers: Train ML models using federated learning in a platform-agnostic way
- NEW: Support for unsupervised learning (e.g., K-means) and the Stochastic Gradient Descent (SGD) optimization technique for federated learning across supported data platforms
- NEW: Compliance auditing (who accessed what, and when) and basic training audits
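To make the federated K-means idea concrete, here is a minimal single-process sketch of one training round. It is not Wayang's API: the function names are our own, and the "sites" are plain lists standing in for remote data platforms. The key property it demonstrates is that only per-cluster sums and counts leave each site, never the raw data points.

```python
def nearest(point, centroids):
    # Index of the centroid closest to a 1-D point.
    return min(range(len(centroids)), key=lambda i: (point - centroids[i]) ** 2)

def local_stats(points, centroids):
    # Run locally at each site: per-cluster sums and counts are the only
    # values shared with the coordinator; raw points stay in place.
    sums = [0.0] * len(centroids)
    counts = [0] * len(centroids)
    for p in points:
        i = nearest(p, centroids)
        sums[i] += p
        counts[i] += 1
    return sums, counts

def federated_round(site_data, centroids):
    # Coordinator: merge the per-site aggregates into updated centroids.
    total_sums = [0.0] * len(centroids)
    total_counts = [0] * len(centroids)
    for points in site_data:
        sums, counts = local_stats(points, centroids)
        for i in range(len(centroids)):
            total_sums[i] += sums[i]
            total_counts[i] += counts[i]
    return [
        total_sums[i] / total_counts[i] if total_counts[i] else centroids[i]
        for i in range(len(centroids))
    ]

new_centroids = federated_round([[1.0, 2.0], [9.0, 10.0]], [0.0, 8.0])
```

In the real system the per-site step would execute on whichever platform holds the data (Spark, Postgres, etc.), with only the aggregates crossing site boundaries.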
Data sources:
- PostgreSQL
- Columnar data files (e.g., Parquet, ORC, Iceberg) and flat files (e.g., CSV)
- SQLite (e.g., mobile and embedded devices)
- Local file systems
- Distributed file systems (e.g., HDFS, S3)
- Apache Kafka
- NEW: Remote files over HTTP(S)
- NEW: JDBC-based data sources
Data Processing Platforms:
- Java 8 Streams
- Apache Spark / Databricks
- Postgres
- SQLite
- Apache Flink / Confluent, Decodable
- NEW: Apache Kafka
- NEW: TensorFlow
- NEW: JDBC-based platforms
Programming APIs:
- Java
- Scala
- Basic SQL
- NEW: Python (limited support)
Runtime:
- NEW: Actor-based runtime for building federated applications
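The actor model behind the new runtime can be illustrated with a few lines of standard-library Python. This is a generic sketch of the pattern, not Scalytics' implementation: each actor owns a mailbox drained by a single thread, so its state is mutated by one message at a time without explicit locks.

```python
import queue
import threading

class Actor:
    """Minimal mailbox-based actor: one thread drains the queue, so the
    handler processes messages strictly one at a time (illustrative only)."""

    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # Asynchronous: enqueue and return immediately.
        self._mailbox.put(msg)

    def stop(self):
        self._mailbox.put(None)  # poison pill ends the processing loop
        self._thread.join()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:
                break
            self._handler(msg)

# Usage: an actor that records every message it receives, in order.
seen = []
worker = Actor(seen.append)
for i in (1, 2, 3):
    worker.send(i)
worker.stop()
```

In a federated application, each site-local worker could be modeled as such an actor, with cross-site coordination reduced to asynchronous message passing.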
This release represents our ongoing commitment to delivering a secure, efficient, and privacy-focused AI platform for enterprise environments. We continue to enhance Scalytics Connect based on customer feedback and emerging industry requirements, ensuring our solution remains at the forefront of enterprise AI technology.