Security by Architecture, Not by Promise

Interpretos Local runs entirely inside your network. Your credentials, your queries, your results — all under your control. There's nothing to trust us with.

Why On-Prem Changes the Security Conversation

SOC 2 certifies vendor infrastructure. Since Interpretos runs on YOUR infrastructure, the security posture is yours to define and audit.

Typical SaaS vendor

You trust them with:

  • Your data on their servers
  • Their SOC 2 and pen test results
  • Their data residency policies
  • Their access controls and encryption
  • Their breach notification process
  • Their employee background checks

Interpretos Local

You trust yourself with:

  • Your data on your servers
  • Your network and firewall rules
  • Your encryption and key management
  • Your audit logs and retention
  • Your backup and disaster recovery
  • Your compliance and governance

Nothing leaves your network. There's nothing to trust us with.

What Goes Where

Complete transparency about what stays inside your network, what leaves, and how to keep everything local.

Stays Inside Your Network
  • ERP credentials and API keys
  • Database query results
  • Conversation history
  • User accounts and roles
  • Audit logs
  • All configuration data

Sent to LLM Provider

Configurable — choose your provider:

  • System prompt (query instructions)
  • User's natural language question
  • Conversation context
  • Query results for formatting

Credentials are NEVER sent.

Air-Gap Option

Use a local Ollama instance (OpenAI-compatible endpoint) for zero external traffic.

No internet connection required. Every byte stays inside your network.

Complete network isolation.
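
Pointing Interpretos at Ollama uses the same OpenAI-compatible request shape as any cloud provider. A minimal sketch of that request body, built but not sent (the model name "llama3" and the system prompt text are placeholders; Ollama serves its OpenAI-compatible API at localhost:11434/v1 by default):

```python
import json

# Ollama's OpenAI-compatible endpoint runs on localhost; nothing leaves the host.
OLLAMA_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_chat_request(question: str, model: str = "llama3") -> dict:
    """Build the same request body you would send to any OpenAI-compatible provider."""
    return {
        "model": model,  # use whatever model you have pulled locally
        "messages": [
            {"role": "system", "content": "Translate the user's question into a read-only query."},
            {"role": "user", "content": question},
        ],
    }

body = json.dumps(build_chat_request("Which work orders are overdue?"))
```

Because the request shape is identical, switching between a cloud provider and a local model is a change of endpoint URL, nothing more.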

Three Independent Layers of Protection

Even if one layer fails, the other two prevent any data modification.

1. SQL Validation

Blocks INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE, CREATE, GRANT, and REVOKE before any query executes. Only SELECT statements pass through.
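
The idea can be sketched as a small allow-list validator; this is illustrative only, not the shipped implementation:

```python
import re

# Statements that must never reach the database.
BLOCKED = {"INSERT", "UPDATE", "DELETE", "DROP", "ALTER",
           "TRUNCATE", "CREATE", "GRANT", "REVOKE"}

def validate_sql(sql: str) -> bool:
    """Allow a single SELECT statement; reject everything else."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False  # no stacked statements
    first = re.match(r"\s*(\w+)", statements[0])
    if not first or first.group(1).upper() != "SELECT":
        return False
    # Defence in depth: reject blocked keywords anywhere in the text.
    tokens = {t.upper() for t in re.findall(r"\w+", statements[0])}
    return not (tokens & BLOCKED)
```

A write attempt such as "SELECT 1; DROP TABLE wo" fails twice over: the stacked-statement check and the keyword check each reject it independently.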

2. HTTP Method Restriction

Only GET requests to REST APIs. POST, PUT, DELETE, and PATCH are blocked at the request layer. Read-only by protocol.
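
A sketch of that gate, with names chosen for illustration: every outbound ERP call passes through one function, and anything other than GET is rejected before a connection is opened.

```python
ALLOWED_METHODS = {"GET"}

class MethodNotAllowed(Exception):
    pass

def prepare_request(method: str, url: str) -> dict:
    """Single choke point for outbound ERP calls: read-only by construction."""
    if method.upper() not in ALLOWED_METHODS:
        raise MethodNotAllowed(f"{method} blocked: connector is read-only")
    return {"method": "GET", "url": url}
```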

3. Code Executor Sandbox

Restricted Python environment. No file I/O, no os/subprocess, no network access beyond approved ERP endpoints.
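
In miniature, the restriction works by executing generated code against a stripped-down builtins table, so import machinery and file I/O simply do not exist inside the environment. This is a toy illustration; a production sandbox also needs process-level isolation, resource limits, and network egress controls:

```python
# A deliberately tiny builtins surface: no __import__, no open, no exec/eval.
SAFE_BUILTINS = {"len": len, "sum": sum, "min": min, "max": max,
                 "sorted": sorted, "range": range, "round": round}

def run_restricted(code: str, data: dict):
    """Execute generated code with no import machinery and no file access."""
    env = {"__builtins__": SAFE_BUILTINS, "data": data, "result": None}
    exec(code, env)
    return env["result"]
```

With no __import__ in scope, even "import os" raises ImportError before any os call could run.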

Per-User Credential Isolation

  • Each user's ERP credentials stored separately — never shared
  • Encrypted at rest with AES-128 (Fernet)
  • Injected at query time only — never sent to the LLM
  • Stored in Docker volume under your control
  • Admin provisions credentials via Setup Wizard
  • Users never see raw credentials
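
The encrypt-at-rest step can be sketched with the Fernet recipe from the Python cryptography library (AES-128-CBC plus HMAC-SHA256); key generation and storage here are simplified for illustration:

```python
from cryptography.fernet import Fernet

# One key per deployment, generated at install time and kept in your
# Docker volume (key-management details here are illustrative).
key = Fernet.generate_key()
vault = Fernet(key)

def store_credential(plaintext: str) -> bytes:
    """Encrypt an ERP credential before it touches disk."""
    return vault.encrypt(plaintext.encode())

def load_credential(token: bytes) -> str:
    """Decrypt only at query time; the plaintext is never sent to the LLM."""
    return vault.decrypt(token).decode()
```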

Your Keys, Your Control

Credentials live in your Docker volume. Back them up, rotate them, revoke them — all standard operations under your IT policy.

Minimal Network Footprint

Outbound Only

HTTPS to your chosen LLM provider (Gemini, OpenAI, Anthropic, or local Ollama). No inbound ports required.

Single Container

One port (default 8080). No privileged access. No host network required. Standard Docker deployment.

No Phone-Home

Telemetry disabled by default. No license server dependency. Runs fully offline with local Ollama.

Complete Audit Trail

Logged Locally (Always On)

  • Every natural language query
  • Every SQL statement and API call executed
  • Full conversation history per user
  • User login and session activity

All stored in your Docker volume. Queryable, exportable, deletable by you.

Telemetry (Opt-In, Disabled by Default)

When enabled, sends a daily heartbeat with aggregate stats only:

  • Query count and active user count
  • Uptime and version information

Never collected: query text, usernames, data records, credentials, IP addresses.
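
A sketch of what an aggregate-only heartbeat looks like; the exact field names are illustrative, the point is what is and is not included:

```python
import json

def build_heartbeat(query_count: int, active_users: int,
                    uptime_seconds: int, version: str) -> str:
    payload = {
        "query_count": query_count,      # aggregate only
        "active_users": active_users,    # a count, not usernames
        "uptime_seconds": uptime_seconds,
        "version": version,
    }
    # Never present: query text, usernames, data records, credentials, IPs.
    forbidden = {"query_text", "username", "records", "credentials", "ip"}
    assert not (forbidden & payload.keys())
    return json.dumps(payload)
```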

Your AI, Your Rules

Choose where AI inference happens — from fully managed to fully air-gapped. Interpretos works with any provider.

For each option: what leaves your network, output quality, estimated cost, and who it suits best.

Interpretos Cloud (included free)
  • What leaves your network: Query text and API responses pass through our proxy to Gemini. Credentials never leave your network.
  • Quality: Highest
  • Est. cost: $0 (100 queries/day)
  • Best for: Evaluation and small teams

Bring Your Own Key (direct to provider)
  • What leaves your network: Query text and API responses go directly to the provider you choose. We never see them.
  • Quality: Highest
  • Est. cost: $50–300/mo (typical for a 25-user team)
  • Best for: Production teams

Enterprise AI Platform (your cloud, your terms)
  • What leaves your network: Query text stays within your cloud tenancy, covered by your existing enterprise agreement.
  • Quality: Highest
  • Est. cost: Per your agreement (often already budgeted)
  • Best for: Regulated industries

Air-Gapped (local Ollama)
  • What leaves your network: Nothing. Zero external network calls; AI runs on your hardware.
  • Quality: Good (depends on model and GPU)
  • Est. cost: $0 (your GPU hardware)
  • Best for: NERC CIP, defence, and other air-gapped sites

Enterprise AI Platforms

Most large organizations already have enterprise AI agreements that include data processing commitments, residency guarantees, and compliance certifications. Interpretos connects to any OpenAI-compatible API endpoint — point it at your existing platform:

Azure OpenAI Service

Models run in your Azure tenancy. Data processed under your Microsoft Enterprise Agreement. Supports private endpoints, VNET integration, and managed identity. SOC 2, ISO 27001, HIPAA, FedRAMP certified.

AWS Bedrock

Claude, Llama, Mistral and others in your AWS account. Data stays in your chosen region, encrypted with your KMS keys. VPC endpoints available. Covered by your AWS BAA and enterprise agreement.

Google Cloud Vertex AI

Gemini models in your GCP project. Data processing governed by your Cloud Data Processing Agreement. VPC Service Controls, CMEK encryption, data residency controls. EU data boundary option available.

IBM watsonx.ai

Granite and open-source models on IBM Cloud or on-prem via Cloud Pak. For Maximo shops already on IBM infrastructure, this keeps the entire stack within one vendor relationship and enterprise agreement.

All enterprise platforms provide contractual guarantees that your data is not used for model training, is encrypted in transit and at rest, and is processed within your specified geography. Interpretos needs only an API endpoint URL and authentication key — the same configuration regardless of provider.
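
That "same configuration regardless of provider" claim can be made concrete; the function name and the placeholder URLs below are illustrative, and the Azure URL is only a representative shape:

```python
def provider_config(endpoint: str, api_key: str) -> dict:
    """The entire provider-specific configuration: a URL and a key."""
    return {"endpoint": endpoint, "api_key": api_key}

# Switching providers means swapping two values (placeholder URLs shown).
local = provider_config("http://localhost:11434/v1", "unused-for-ollama")
azure = provider_config("https://YOUR-RESOURCE.openai.azure.com/openai/v1", "<your-key>")
```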

The bottom line

Interpretos never stores, trains on, or retains your query data. We are a query engine, not an AI provider. Your choice of AI provider determines where inference happens, what data processing agreements apply, and what compliance certifications are in effect. You can change providers at any time — it is a single configuration field in the admin panel.

Getting Started

Everything you need for a secure enterprise deployment.

Docker Host

Any Linux, macOS, or Windows machine with Docker installed.

Network Access

Outbound HTTPS to LLM provider, or local Ollama for air-gapped deployments.

Credentials

Admin provisions per-user ERP credentials via the Setup Wizard. Encrypted and isolated.

RBAC

Admin and user roles with per-user credential isolation. Each user sees only what their ERP permissions allow.

Backup

Docker volume contains all state. Include it in your standard backup procedures.

Compliance

Audit logs stored locally for your review. Export or integrate with your SIEM as needed.

See It In Action

Try the live demo with real EBS, PeopleSoft, and Maximo data. Then deploy on your own infrastructure.