Always-on. Fault-tolerant. Free forever.
An industrial execution substrate with process isolation, real-time
messaging, managed hosting, a full REST API with event streaming,
and an MCP endpoint for AI integration.
ULTRAMEGA S1 is the execution substrate for industrial systems that demand continuous operation. It orchestrates independently developed modules (any language, any runtime) inside a type-safe, message-driven architecture with a full REST API, real-time event streaming, and managed process hosting.
Real-time control loops, high-rate test pipelines, cross-platform integrations: ULTRAMEGA S1 composes dependable systems instead of rebuilding infrastructure. Multi-step workflows with saga orchestration, multi-instance federation, scheduled execution, and persistent journaling are built into the substrate. A built-in MCP endpoint exposes 57 system tools to AI agents. It ships as three NuGet packages: no configuration files, no installers.
Each module runs in its own process with protected memory space. A fault in one component cannot cascade to the rest of the system. The hosting layer manages process lifecycle, port allocation, and automatic restart on failure.
Low-latency IPC bus with guaranteed message ordering. Sub-millisecond delivery for critical control signals. Typed messages with compile-time schema validation prevent integration errors before they reach production.
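The typed-message idea can be made concrete with a minimal sketch in plain Python (illustrative only — the schema table and `validate` helper are hypothetical, not S1's API): a message type declares its fields once, and malformed payloads are rejected at the boundary instead of deep inside a module.

```python
# Each registered message type maps field names to expected types.
SCHEMAS = {
    "TemperatureReading": {"sensor_id": str, "celsius": float},
}

def validate(message_type, payload):
    """Reject payloads whose shape doesn't match the registered schema."""
    schema = SCHEMAS[message_type]
    for name, expected in schema.items():
        if name not in payload:
            raise TypeError(f"missing field: {name}")
        if not isinstance(payload[name], expected):
            raise TypeError(f"{name}: expected {expected.__name__}, "
                            f"got {type(payload[name]).__name__}")
    unknown = set(payload) - set(schema)
    if unknown:
        raise TypeError(f"unknown fields: {sorted(unknown)}")
    return payload
```

In a compiled client the same check happens at build time; the runtime check above is the last line of defense for messages arriving from external modules.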
Full HTTP API across 17 controllers: modules, messages, discovery, config, health, metrics, pipelines, and more. WebSocket streams for real-time events. SignalR hub for module-specific message delivery.
Three-layer resilience: circuit breakers isolate failures at the communication boundary, resilient connections maintain gRPC links through transient faults, and process supervision restarts crashed modules with state recovery. Each layer operates independently so a single fault never escalates into a system-wide outage.
Five health endpoints (liveness, readiness, startup, detailed, aggregate) for integration with orchestrators and load balancers. Metrics snapshots, audit trails, and per-module health tracking across the entire substrate.
Define multi-stage processing pipelines that route data through sequences of modules with typed contracts at every stage boundary. Pipelines are created, inspected, and executed entirely through the API.
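The stage-boundary idea reduces to function composition, sketched here in Python with invented stage names (not S1's API): each stage is a function with a typed contract, and the pipeline runs them in sequence.

```python
def measure(sample):
    """Stage 1: convert a raw ADC count into an engineering value."""
    return {**sample, "value": sample["raw"] * 0.5}

def analyze(sample):
    """Stage 2: apply a pass/fail threshold to the measured value."""
    return {**sample, "passed": sample["value"] < 10}

def run_pipeline(stages, payload):
    """Route the payload through each stage in order."""
    for stage in stages:
        payload = stage(payload)
    return payload
```

In the real substrate each stage is a separate module and the contract at each boundary is a registered message type rather than a bare dict.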
Multi-step workflows with automatic compensation on failure. Each saga tracks step progress, enforces concurrency limits, and guarantees correlation ID uniqueness. If a step fails, previously completed steps are rolled back through registered compensation handlers.
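The compensation pattern itself is simple, sketched below in Python with invented names (S1's saga engine additionally tracks correlation IDs and concurrency limits, which this omits): run steps in order, and on failure run the compensations of completed steps in reverse.

```python
class Saga:
    """Run steps in order; on failure, run compensations in reverse order."""

    def __init__(self):
        self._steps = []  # (action, compensation) pairs

    def add_step(self, action, compensation):
        self._steps.append((action, compensation))

    def run(self):
        completed = []
        try:
            for action, compensation in self._steps:
                action()
                completed.append(compensation)
        except Exception:
            for compensation in reversed(completed):
                compensation()
            return False  # saga failed and was rolled back
        return True

# Example: the second step fails, so the first step's compensation fires.
log = []

def failing_step():
    raise RuntimeError("downstream unavailable")

saga = Saga()
saga.add_step(lambda: log.append("reserve-slot"), lambda: log.append("release-slot"))
saga.add_step(failing_step, lambda: log.append("never-runs"))
```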
Connect multiple S1 instances into a federated mesh. Messages route transparently across instance boundaries with endpoint validation and SSRF protection. Federation health monitoring tracks peer availability and enforces forwarding timeouts.
Cron-based task scheduling with persistent storage. Define recurring jobs that survive restarts. The scheduler integrates with the feature flag system for runtime kill-switches and supports bounded query pagination across all schedule stores.
Route messages based on content fields, not just destination addresses. Define routing rules that match message payloads and direct them to the appropriate modules. Add, remove, and update rules at runtime through the API without module restarts.
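At its core, content-based routing is a rule table consulted per message. A minimal Python sketch (hypothetical field names and targets, not S1's rule format):

```python
# Routing rules live in a table that can be mutated at runtime.
rules = []

def add_rule(field, expected, target):
    rules.append({"field": field, "equals": expected, "target": target})

def remove_rule(field, expected):
    rules[:] = [r for r in rules if (r["field"], r["equals"]) != (field, expected)]

def route(message):
    """Return the first matching target, or a dead-letter fallback."""
    for r in rules:
        if message.get(r["field"]) == r["equals"]:
            return r["target"]
    return "dead-letter"

add_rule("kind", "alarm", "alarm-handler")
add_rule("kind", "telemetry", "historian")
```

Because the table is data rather than code, rules can be added and removed while messages are flowing — the same property the API exposes.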
Persistent message journal with query and replay capabilities. Save and restore module state through the checkpoint API. Journal compaction runs automatically to manage storage growth, with structured metrics for monitoring retention and drop rates.
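The journal's contract — append-only writes, replay from a sequence number, compaction that preserves sequence numbers — can be sketched in-memory in Python (illustrative only; the real journal is persistent and instrumented):

```python
class Journal:
    """Append-only message journal with replay and compaction."""

    def __init__(self):
        self._entries = {}   # sequence number -> message
        self._next_seq = 0

    def append(self, message):
        seq = self._next_seq
        self._entries[seq] = message
        self._next_seq += 1
        return seq

    def replay(self, from_seq=0):
        """Return messages in order, starting at a sequence number."""
        return [m for s, m in sorted(self._entries.items()) if s >= from_seq]

    def compact(self, keep_last):
        """Drop all but the newest keep_last entries; sequences survive."""
        excess = sorted(self._entries)[:max(0, len(self._entries) - keep_last)]
        for seq in excess:
            del self._entries[seq]
```

Keeping sequence numbers stable across compaction is what lets a consumer resume replay after a restart without re-reading compacted history.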
Three independent layers that keep production running through hardware faults, network interruptions, and process crashes
When a downstream module stops responding, the circuit opens to prevent request pileup and cascading timeouts. Calls fail fast with a clear status while the breaker periodically probes for recovery.
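The breaker's open/closed/half-open cycle can be sketched in a few lines of Python (a generic illustration of the pattern, not S1's implementation):

```python
import time

class CircuitBreaker:
    """Fail fast after repeated failures; probe for recovery after a delay."""

    def __init__(self, failure_threshold=3, reset_after=1.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one probe call through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

The key property is the fast failure: while open, callers get an immediate error instead of stacking up timeouts against a dead module.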
Exponential backoff with jitter maintains gRPC channels through transient network faults. Connections re-establish automatically without operator intervention or message loss.
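"Exponential backoff with jitter" has a standard shape — each retry's delay is drawn uniformly between zero and an exponentially growing, capped ceiling — sketched here in Python as a generic illustration (parameter values are arbitrary, not S1 defaults):

```python
import random

def backoff_schedule(base=0.1, cap=30.0, attempts=6, seed=None):
    """Full-jitter backoff: delay_i ~ Uniform(0, min(cap, base * 2**i))."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))
    return delays
```

The jitter matters: without it, every client that lost the same link retries at the same instant, and the reconnect storm itself becomes the next fault.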
The hosting layer detects crashed processes, cleans up stale state, and restarts modules with their last known configuration. PID tracking and orphan detection prevent zombie processes from consuming resources.
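The supervision loop itself is a small piece of the picture, sketched below in Python (illustrative only — it omits the backoff, PID tracking, and orphan cleanup the text describes):

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    """Run cmd; restart it each time it exits nonzero, up to max_restarts."""
    restarts = 0
    while True:
        proc = subprocess.Popen(cmd)
        proc.wait()
        if proc.returncode == 0:
            return restarts           # clean exit: stop supervising
        if restarts >= max_restarts:
            return restarts           # restart budget exhausted: give up
        restarts += 1                 # a real supervisor would back off here

# A child that always crashes exhausts the restart budget:
always_fails = [sys.executable, "-c", "import sys; sys.exit(1)"]
```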
Expose the substrate to AI agents through the Model Context Protocol
The built-in MCP server exposes 57 system tools to AI agents. Manage modules, send messages, execute pipelines, query health, and control the substrate through the same standardized interface used by human operators.
Modules declare capabilities using a resource:action model. RBAC gates which tools each session can invoke. Agents and operators see only the tools they are authorized to use.
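A resource:action check is a small set operation, sketched here in Python (hypothetical grant strings; S1's RBAC model may differ in wildcard semantics):

```python
def allowed(grants, resource, action):
    """Check a 'resource:action' request against a set of grant strings.

    '*' may stand in for either half, e.g. 'modules:*' or '*:read'.
    """
    acceptable = {f"{resource}:{action}", f"{resource}:*", f"*:{action}", "*:*"}
    return bool(grants & acceptable)

# A session granted full module control plus read-only health access:
operator = {"modules:*", "health:read"}
```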
Every MCP session is tracked with identity resolution. Every tool call is audit-logged for compliance. Session broadcasting enables coordination across multiple agents on the same substrate.
Register custom tools with JSON Schema definitions. The schema registry enables agents to discover available operations, understand parameter types, and invoke them correctly without hardcoded knowledge.
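The registry pattern looks roughly like this in Python (an illustrative sketch with an invented tool; the validation shown checks only `required`, a small subset of what JSON Schema expresses):

```python
registry = {}

def register_tool(name, schema, fn):
    """Publish a tool with a JSON-Schema-style description of its arguments."""
    registry[name] = {"schema": schema, "fn": fn}

def invoke(name, args):
    """Validate arguments against the tool's schema, then call it."""
    tool = registry[name]
    missing = [r for r in tool["schema"].get("required", []) if r not in args]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return tool["fn"](**args)

register_tool(
    "module_restart",
    {"type": "object",
     "properties": {"module_id": {"type": "string"}},
     "required": ["module_id"]},
    lambda module_id: f"restarted {module_id}",
)
```

An agent that reads `registry["module_restart"]["schema"]` learns the parameter names and types before calling — the discovery step the text describes.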
Every tool is available to both AI agents and human operators through a single interface
Register, start, stop, restart, and remove modules. Query state and health.
Send, batch, broadcast, and query-with-response. Typed messages with routing.
Read and update substrate config. Toggle feature flags at runtime.
Shutdown, restart, system info, performance stats, and platform metadata.
Node management, leader election, service discovery, and distributed state.
Create multi-stage processing pipelines and execute them with payloads.
Register service endpoints and resolve them by name.
Register message types with schemas and query the type registry.
Add, remove, and list content-based routing rules with predicate matching.
Query message history, replay sequences, and inspect journal statistics.
Save, restore, and list module state checkpoints for crash recovery.
Schedule messages for future or recurring delivery. Cancel and list jobs.
Start durable multi-step workflows with compensation. Track and cancel.
Register remote substrate peers, manage federation links, and check peer health.
Three packages. No installers, no configuration files.
Typed .NET client for the Gateway REST API. Covers authentication, module management, messaging, discovery, config, health, metrics, and feature flags. Includes WebSocket event streaming and SignalR hub integration with auto-reconnect.
Managed process hosting for the S1 platform. Spawns and monitors Gateway and ModuleHost processes, handles port allocation, readiness detection, graceful shutdown, and crash recovery. Two modes: managed (owns processes) or external (connects to existing).
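Two of the hosting layer's jobs, port allocation and readiness detection, have well-known OS-level mechanics; here is an illustrative Python version (not S1's implementation):

```python
import socket

def allocate_port():
    """Ask the OS for a currently free TCP port by binding to port 0."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

def is_ready(host, port, timeout=0.25):
    """Readiness probe: a process is 'ready' once its port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A hosting layer typically allocates the port, passes it to the spawned process, then polls `is_ready` until the process starts listening before routing traffic to it.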
Pre-built, self-contained executables for Gateway, ModuleHost, and McpServer. Content packages that land binaries directly in your output directory. No separate install step. No build-time compilation of S1 internals.
Two paths to integration
Maximum performance with direct SDK integration. Zero-copy message passing and native memory management. Modules declare capabilities and register through the hosting API with full lifecycle management.
Integrate existing systems via gRPC, TCP, or WebSocket. The ModuleHost bridges external processes to native IPC. Register through the REST API and receive real-time events over WebSocket.
Where ULTRAMEGA S1 excels
Orchestrate control logic from multiple vendors. MATLAB models, LabVIEW drivers, Python robotics, and C++ algorithms working together in one system with process isolation and self-healing resilience.
High-rate test pipelines with real-time data acquisition. Coordinate test fixtures, vision systems, and quality databases in unified workflows. Pipeline orchestration routes data through measurement, analysis, and reporting stages.
Wrap existing PLCs and SCADA systems with digital interfaces. Add modern analytics, remote monitoring, and AI capabilities to brownfield installations without disrupting operations.
Hardware-in-the-loop simulation with real-time fidelity. Mix physical hardware with simulated components for testing and validation. Event streaming provides live visibility into every state transition.
Multi-robot coordination with real-time sensor fusion. Path planning, autonomous navigation, and collaborative manipulation with sub-millisecond message delivery between control modules.
Connect AI agents to the substrate through the built-in MCP endpoint. Agents use the same 57-tool interface as human operators to configure instruments, run test sequences, monitor results, and adapt.
17 controllers, full REST coverage
JWT login, token refresh, logout, password change, and token verification.
Full lifecycle: register, start, stop, restart, delete. Plus health, logs, and external module support.
Send single, typed, or batch messages. Query delivery status by message ID.
Register and resolve services. Load-balanced selection with round-robin strategy.
Liveness, readiness, and detailed health endpoints for load balancers and orchestrators.
Read full config or by section. Admin-only write access for runtime changes.
List all feature flags, check individual flags, enable and disable at runtime.
JSON and Prometheus formats. Drain diagnostics, CI build metrics, and security findings.
Create, execute, inspect, and delete multi-stage processing pipelines.
Real-time event and message streams. Connection info endpoint for clients.
Register and query message types, module states, and delivery guarantee enums.
System info and drain progress for debugging message processing bottlenecks.
Everything you need to get started
Architecture overview, performance characteristics, and the design decisions behind the message-driven microkernel.
APIs, data contracts, lifecycle hooks, threading model, resilient connections, and code samples for native module development.
Step-by-step adoption guide for S1.Client, S1.Hosting, and S1.Runtime NuGet packages. Includes migration checklist and DI patterns.
Connecting AI agents to S1. Tool registration, capability scanning, RBAC configuration, and session management.
Practical walkthroughs for PLCs, SCADA, enterprise systems, and external module integration via gRPC and WebSocket.
Complete REST API documentation for all 17 controllers. Request/response schemas, authentication, and WebSocket protocols.
Free forever, for any use, including commercial. Available as standalone runtime or as NuGet packages for .NET applications.