Interpretos Local runs entirely inside your network. Your credentials, your queries, your results — all under your control. There's nothing to trust us with.
SOC 2 certifies vendor infrastructure. Since Interpretos runs on YOUR infrastructure, the security posture is yours to define and audit.
[Comparison: Typical SaaS vendor vs. Interpretos Local. Nothing leaves your network.]
Complete transparency about what stays inside your network, what leaves, and how to keep everything local.
Configurable — choose your provider.
Credentials are NEVER sent.
Use a local Ollama instance (an OpenAI-compatible endpoint) for zero external traffic; see the client sketch below.
No internet connection required. Every byte stays inside your network.
Complete network isolation.
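As a concrete illustration: Ollama exposes an OpenAI-compatible endpoint on port 11434, so any OpenAI-style client can query a local model with zero external traffic. The model name below is an example; use whatever you have pulled locally.

```python
from openai import OpenAI

# Point an OpenAI-compatible client at a local Ollama instance.
# Nothing here crosses the network boundary.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # required by the client, ignored by Ollama
)

resp = client.chat.completions.create(
    model="llama3.1",  # example model; any locally pulled model works
    messages=[{"role": "user", "content": "List open work orders by priority."}],
)
print(resp.choices[0].message.content)
```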
Three independent layers enforce read-only access; even if one layer fails, the other two prevent any data modification. A minimal sketch of the first two layers follows the list below.
Blocks INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE, CREATE, GRANT, and REVOKE before any query executes. Only SELECT statements pass through.
Only GET requests to REST APIs. POST, PUT, DELETE, and PATCH are blocked at the request layer. Read-only by protocol.
Restricted Python environment. No file I/O, no os/subprocess, no network access beyond approved ERP endpoints.
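As promised above, here is a minimal sketch of how the first two layers can be enforced. The function names and messages are illustrative, not Interpretos internals, and a production gateway would use a real SQL parser rather than a regex.

```python
import re

# Layer 1: statement allow-listing. Block write verbs and pass only SELECT.
BLOCKED_SQL = re.compile(
    r"^\s*(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT|REVOKE)\b",
    re.IGNORECASE,
)
SELECT_ONLY = re.compile(r"^\s*SELECT\b", re.IGNORECASE)

def check_sql(statement: str) -> str:
    """Reject anything that is not a plain SELECT before it executes."""
    if BLOCKED_SQL.match(statement) or not SELECT_ONLY.match(statement):
        raise PermissionError(f"Blocked non-SELECT statement: {statement[:40]!r}")
    return statement  # a real gateway would also parse for chained statements

# Layer 2: read-only by protocol. Only GET reaches the ERP's REST API.
def check_http(method: str, url: str) -> None:
    if method.upper() != "GET":
        raise PermissionError(f"Blocked {method.upper()} request to {url}")

check_sql("SELECT wonum, priority FROM workorder")   # passes
try:
    check_http("POST", "https://erp.example.com/api/assets")
except PermissionError as exc:
    print(exc)  # blocked at the request layer, before it leaves the gateway
```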
Credentials live in your Docker volume. Back them up, rotate them, revoke them — all standard operations under your IT policy.
HTTPS to your chosen LLM provider (Gemini, OpenAI, Anthropic, or local Ollama). No inbound ports required.
One port (default 8080). No privileged access. No host network required. Standard Docker deployment.
Telemetry disabled by default. No license server dependency. Runs fully offline with local Ollama.
All stored in your Docker volume. Queryable, exportable, deletable by you.
When enabled, telemetry sends a daily heartbeat with aggregate stats only; an illustrative payload is sketched below.
Never collected: query text, usernames, data records, credentials, IP addresses.
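Purely as an illustration of what "aggregate stats only" means in practice (the field names below are assumptions, not the actual schema):

```python
# Illustrative heartbeat payload. Field names are hypothetical, not the
# real Interpretos schema. Note what is absent: no query text, no
# usernames, no data records, no credentials, no IP addresses.
heartbeat = {
    "instance_id": "anon-7f3a",      # random identifier, not tied to a user or host
    "version": "1.4.2",
    "queries_last_24h": 118,         # a count, never the queries themselves
    "active_connectors": ["maximo", "ebs"],
}
```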
Choose where AI inference happens — from fully managed to fully air-gapped. Interpretos works with any provider.
| Option | What leaves your network | Quality | Est. cost | Best for |
|---|---|---|---|---|
| **Interpretos Cloud** (included free) | Query text + API responses transit our proxy to Gemini. Credentials never leave your network. | Highest | $0 (100 queries/day) | Evaluation & small teams |
| **Bring Your Own Key** (direct to provider) | Query text + API responses go directly to the provider you choose. We never see them. | Highest | $50–300/mo (typical 25-user team) | Production teams |
| **Enterprise AI Platform** (your cloud, your terms) | Query text stays within your cloud tenancy. Covered by your existing enterprise agreement. | Highest | Per your agreement (often already budgeted) | Regulated industries |
| **Air-Gapped** (local Ollama) | Nothing. Zero external network calls. AI runs on your hardware. | Good (depends on model & GPU) | $0 (your GPU hardware) | NERC CIP, defence, air-gapped sites |
Most large organizations already have enterprise AI agreements that include data processing commitments, residency guarantees, and compliance certifications. Interpretos connects to any OpenAI-compatible API endpoint — point it at your existing platform:
**Azure OpenAI Service:** Models run in your Azure tenancy. Data processed under your Microsoft Enterprise Agreement. Supports private endpoints, VNET integration, and managed identity. SOC 2, ISO 27001, HIPAA, FedRAMP certified.
**Amazon Bedrock:** Claude, Llama, Mistral, and others in your AWS account. Data stays in your chosen region, encrypted with your KMS keys. VPC endpoints available. Covered by your AWS BAA and enterprise agreement.
**Google Vertex AI:** Gemini models in your GCP project. Data processing governed by your Cloud Data Processing Agreement. VPC Service Controls, CMEK encryption, data residency controls. EU data boundary option available.
**IBM watsonx:** Granite and open-source models on IBM Cloud or on-prem via Cloud Pak. For Maximo shops already on IBM infrastructure, this keeps the entire stack within one vendor relationship and enterprise agreement.
All enterprise platforms provide contractual guarantees that your data is not used for model training, is encrypted in transit and at rest, and is processed within your specified geography. Interpretos needs only an API endpoint URL and authentication key — the same configuration regardless of provider.
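To make that concrete, here is a hedged sketch of what "endpoint URL plus key" looks like with any OpenAI-compatible client. The dictionary and environment-variable names are illustrative assumptions, and the enterprise endpoint URL is a placeholder you would take from your platform's documentation.

```python
import os
from openai import OpenAI  # any OpenAI-compatible client behaves the same way

# Illustrative provider settings -- names and URLs are placeholders, not
# Interpretos configuration keys. Only base_url and api_key differ.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1",
               "api_key": os.environ.get("OPENAI_API_KEY", "")},
    "enterprise": {"base_url": "https://your-endpoint.example.com/v1",  # from your platform's docs
                   "api_key": os.environ.get("ENTERPRISE_API_KEY", "")},
    "ollama": {"base_url": "http://ollama:11434/v1",  # local, air-gapped
               "api_key": "unused"},                  # Ollama ignores the key
}

cfg = PROVIDERS["enterprise"]  # changing providers changes only this selection
client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
```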
Interpretos never stores, trains on, or retains your query data. We are a query engine, not an AI provider. Your choice of AI provider determines where inference happens, what data processing agreements apply, and what compliance certifications are in effect. You can change providers at any time — it is a single configuration field in the admin panel.
Everything you need for a secure enterprise deployment.
Any Linux, macOS, or Windows machine with Docker installed.
Outbound HTTPS to LLM provider, or local Ollama for air-gapped deployments.
Admin provisions per-user ERP credentials via the Setup Wizard. Encrypted and isolated.
Admin and user roles with per-user credential isolation. Each user sees only what their ERP permissions allow.
Docker volume contains all state. Include it in your standard backup procedures.
Audit logs stored locally for your review. Export or integrate with your SIEM as needed.
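As one hedged sketch of SIEM integration (the log path, JSONL format, and collector URL below are assumptions, not documented Interpretos paths), a small forwarder can read the audit log from the Docker volume and ship each event to an HTTP collector:

```python
import json
import urllib.request

# Hypothetical locations -- confirm the real path and format for your deployment.
AUDIT_LOG = "/var/lib/docker/volumes/interpretos_data/_data/audit.jsonl"
SIEM_URL = "https://siem.example.com/ingest"

with open(AUDIT_LOG) as log:
    for line in log:
        event = json.loads(line)  # assumed: one JSON audit event per line
        req = urllib.request.Request(
            SIEM_URL,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # POST each event to the SIEM collector
```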
Try the live demo with real EBS, PeopleSoft, and Maximo data. Then deploy on your own infrastructure.