TuranDocker Infrastructure Platform
An infrastructure operating system for microservices

(October 2025 - Present)

TuranDocker is a production-grade, self-contained, secure microservice infrastructure platform built entirely on Docker Compose, with Docker Swarm adaptability for future orchestration needs.

It provides a fully containerized environment that integrates databases, messaging systems, gateways, runtimes, observability, monitoring, and application services into a cohesive, reproducible, and operationally safe platform.

Designed not as “just Docker files,” but as a complete infrastructure operating model, TuranDocker enables complex distributed systems to run reliably across multiple business domains (systems) and their internal microservices (apps) with strong isolation, strict security boundaries, deterministic deployment, and zero external infrastructure dependencies beyond Docker itself.

Its primary goals are:

🏗 Core Architectural Philosophy

Most Docker setups are either:

TuranDocker takes a different approach:

TuranDocker follows a System → App → Resource Ownership Model

Instead of treating infrastructure as shared global services, every business domain is treated as an independent system, and every microservice inside that domain is treated as an isolated app.

Example:

    System: bilgi

    Apps:
        - core
        - interaction
        - realtime
        - payment
        - ai

This enables two levels of isolation: System-Level and App-Level.

It also follows these principles:

The result is an infrastructure that behaves predictably across environments—from local development to VPS deployment.

The infrastructure is split into five independent layers:

🧩 System-Level Isolation

Each system owns its own isolated infrastructure namespace.

Example:

    System: bilgi

Resources:

This guarantees:

Adding a new system should never require redesign.

Only the following are required:

    new secrets 
    new validation/initiation scripts
    new DNS
    new app deployment

⚙ App-Level Isolation Inside Each System

Inside every system, each microservice gets its own isolated access boundaries.

Example:

    System: bilgi

    Apps:
        - core
        - interaction
        - realtime

This means:

PostgreSQL / MySQL

System DB + App DB Strategy

    bilgi                → system DB
    bilgi_core           → app DB
    bilgi_payment        → app DB
    bilgi_ai             → app DB

Rules:

This allows:

    core ≠ payment ≠ ai

without permission leakage. This is true app isolation, not just a naming convention.
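The system DB + app DB strategy above can be sketched with plain psql. This is an illustrative provisioning snippet, not the platform's actual init script; the placeholder password and the exact GRANT set are assumptions.

```shell
# Illustrative: create an app role and an app DB owned by it,
# then lock connections down to the owner only.
psql -h localhost -U admin -v ON_ERROR_STOP=1 <<'SQL'
-- App role: login only, created idempotently
DO $$ BEGIN
  IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'bilgi_core') THEN
    CREATE ROLE bilgi_core LOGIN PASSWORD 'changeme';
  END IF;
END $$;

-- App database owned by the app role (idempotent via \gexec)
SELECT 'CREATE DATABASE bilgi_core OWNER bilgi_core'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'bilgi_core') \gexec

-- Only the owner may connect: core cannot reach payment's DB and vice versa
REVOKE CONNECT ON DATABASE bilgi_core FROM PUBLIC;
GRANT  CONNECT ON DATABASE bilgi_core TO bilgi_core;
SQL
```

The `REVOKE ... FROM PUBLIC` step is what turns the naming convention into an enforced boundary.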

MongoDB

Mongo uses the same ownership model.

Example:

    User: bilgi
    DB: bilgi

    User: bilgi_interaction
    DB: bilgi_interaction

    User: bilgi_realtime
    DB: bilgi_realtime

Rules:

As with the relational databases, this is true app isolation, not just a naming convention.
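The same per-user, per-database ownership can be provisioned with mongosh. A hedged sketch; the `readWrite` role choice, placeholder password, and connection details are assumptions, not the platform's documented configuration.

```shell
# Illustrative: create the bilgi_interaction user scoped to its own DB only.
mongosh --username admin --authenticationDatabase admin --quiet --eval '
  const appDb = db.getSiblingDB("bilgi_interaction");
  if (!appDb.getUser("bilgi_interaction")) {
    appDb.createUser({
      user: "bilgi_interaction",
      pwd:  "changeme",
      // Role is bound to a single database: no access to sibling app DBs
      roles: [{ role: "readWrite", db: "bilgi_interaction" }]
    });
  }
'
```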

Redis

Redis uses:

    system:*                  → Global Governance: Global feature flags, infrastructure health logs, and node coordination
    bilgi:system:*            → System Governance: System-wide audit counters or cross-service orchestration locks
    bilgi:core:*              → App Governance
    bilgi:interaction:*       → App Governance
    bilgi:realtime:*          → App Governance
    bilgi:payment:*           → App Governance
    bilgi:ai:*                → App Governance

via key prefixes.

Since Redis has weaker native isolation, application-level namespace enforcement is used.

This keeps global, system-level, and app-level keyspaces cleanly separated.
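Because the namespace is enforced at the application level, services typically route every key through one small helper so the prefix scheme cannot drift. A minimal sketch; the helper names and argument order are hypothetical:

```shell
# Compose Redis keys under the three-tier prefix scheme shown above.
#   app_key bilgi core user:42   -> bilgi:core:user:42      (app governance)
#   sys_key bilgi audit:count    -> bilgi:system:audit:count (system governance)
#   global_key health:db         -> system:health:db         (global governance)
app_key()    { printf '%s:%s:%s\n' "$1" "$2" "$3"; }
sys_key()    { printf '%s:system:%s\n' "$1" "$2"; }
global_key() { printf 'system:%s\n' "$1"; }
```

Funneling all key construction through helpers like these is what makes an honor-system namespace behave like a real boundary.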

RabbitMQ

The RabbitMQ service follows a high-discipline architecture centered on system isolation and predictable topology.

Each system (e.g. bilgi) operates within a dedicated vHost, creating a hard logical boundary that prevents cross-system data leakage.

    vhost: /bilgi

Access is governed by distinct roles—Admin (Cluster), Ops (Topology Owner), and App (Runtime User)—enforced via SCRAM-SHA-256 and regex-based ACLs.

A system-level user (e.g. bilgi) acts as the primary monitoring agent for the vHost. It is granted monitoring tags to oversee the health of the messaging plane without possessing permission to read or write to specific application queues, maintaining a "separation of duties."

Application permissions are restricted via regex (e.g., ^bilgi\.core\..*), structurally preventing services from accessing sibling queues.

    bilgi.core.user.account

Rules:

This prevents accidental queue access across services.
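The vhost, monitoring user, and regex-scoped app user described above can be expressed with rabbitmqctl. An illustrative sketch; passwords are placeholders, and any permission regex beyond the documented ^bilgi\.core\..* is an assumption.

```shell
# Hard logical boundary: a dedicated vhost per system
rabbitmqctl add_vhost /bilgi

# System-level monitoring agent: sees the vhost's health via the
# monitoring tag but holds no configure/write/read permissions
rabbitmqctl add_user bilgi 'changeme'
rabbitmqctl set_user_tags bilgi monitoring
rabbitmqctl set_permissions -p /bilgi bilgi '^$' '^$' '^$'

# App runtime user: configure/write/read confined to its own namespace
rabbitmqctl add_user bilgi_core 'changeme'
rabbitmqctl set_permissions -p /bilgi bilgi_core \
  '^bilgi\.core\..*' '^bilgi\.core\..*' '^bilgi\.core\..*'
```

The empty-match regex `^$` is the standard way to grant a user zero queue access while its tags still allow observation, which is exactly the "separation of duties" described above.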

Kafka

The Kafka service follows the strongest ownership model on the platform.

Example topics:

    bilgi.chat.message
    bilgi.notification.intent
    bilgi.audit.logs

Each topic has a defined owner and explicit read/write rules, keeping producer and consumer responsibilities cleanly separated.

Example:

    Owner:
        bilgi_realtime

Rules:

READ: Only the owner service can consume.

WRITE: Can be:

    OWNER_ONLY
       or
    OPEN

Examples:

    notification.intent → OPEN
    chat.message → OWNER_ONLY

This creates real message ownership boundaries.
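The OWNER_ONLY/OPEN split maps naturally onto Kafka ACLs. A sketch using the stock kafka-acls.sh tool; the broker address and the admin.properties client config file are assumptions:

```shell
# OWNER_ONLY: only bilgi_realtime may read and write chat.message
kafka-acls.sh --bootstrap-server kafka:9093 --command-config admin.properties \
  --add --allow-principal User:bilgi_realtime \
  --operation Read --operation Write \
  --topic bilgi.chat.message

# OPEN (write side): any authenticated producer may publish intents;
# consumption would still be granted only to the owning service
kafka-acls.sh --bootstrap-server kafka:9093 --command-config admin.properties \
  --add --allow-principal 'User:*' \
  --operation Write \
  --topic bilgi.notification.intent
```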

The initialization script implements this with:

This is significantly stronger than standard shared Kafka setups.

🔐 Security Model

Security is not optional; it is enforced by default. Internal services are never exposed publicly, and where host access is required, ports are bound to localhost only.

TLS Everywhere

TLS is enforced across all services. All critical services use encrypted internal transport:

Examples:

    ssl=on
    requireTLS
    require-secure-transport=ON
    keystore + truststore
    CA verification

Insecure production traffic is never assumed to be acceptable. Security is treated as a first-class concern, not an afterthought. Kafka, for example, uses SCRAM authentication together with keystore/truststore validation.
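The CA-verification step listed above can be exercised end to end with nothing but openssl, which is useful for validating a trust chain before wiring it into a service. A self-contained sketch with a throwaway CA; this is an illustration, not the platform's actual certificate pipeline:

```shell
set -eu
dir=$(mktemp -d)

# 1. Self-signed throwaway CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=turan-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# 2. Server key + certificate signing request (hostname is illustrative)
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=postgres.internal" \
  -keyout "$dir/server.key" -out "$dir/server.csr" 2>/dev/null

# 3. CA signs the server certificate
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" \
  -CAkey "$dir/ca.key" -CAcreateserial -days 1 \
  -out "$dir/server.pem" 2>/dev/null

# 4. CA verification: prints "<path>: OK" or fails fast on a broken chain
openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem"
```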

Docker Secrets

Passwords are never stored inside Compose YAML or environment files. Instead, secrets are stored outside the project tree (/opt/turan/secrets), and strict file-permission checks are applied (e.g., 0600, 0640).

The Docker secrets feature is used for:

This ensures least exposure, auditable secret ownership, and secure VPS operations.
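The permission checks can be reproduced with a few lines of shell. This sketch uses a temporary directory as a stand-in for /opt/turan/secrets and assumes GNU stat (i.e., a Linux host):

```shell
set -eu
secrets_dir=$(mktemp -d)          # stand-in for /opt/turan/secrets
umask 077                         # new files default to 0600

printf '%s' 's3cr3t' > "$secrets_dir/postgres_password"
chmod 0600 "$secrets_dir/postgres_password"

# Fail fast if a secret file is readable by group or other
perms=$(stat -c '%a' "$secrets_dir/postgres_password")
case "$perms" in
  600|640) echo "ok: $perms" ;;
  *)       echo "insecure permissions: $perms" >&2; exit 1 ;;
esac
```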

Least Privilege Access

Every user receives only the permissions it requires; admin accounts are never used for convenience. The minimum-privilege policy is enforced across the databases, Kafka, RabbitMQ, and the observability tools.

🗄️ Database Layer (docker-compose.db.yml)

The Database Layer provides the persistent storage foundation of the platform. It is designed for strong isolation, secure access control, deterministic initialization, and production-grade operational safety. Supported services include:

Key Platform Guarantees

Across all database services:

This makes the database layer secure, reproducible, and operationally reliable.

📨 Messaging Layer (docker-compose.msg.yml)

The Messaging Layer provides asynchronous communication, event delivery, background processing, and service decoupling across the platform. It is designed for reliability, strict isolation, deterministic initialization, and secure event-driven architecture. Supported services include:

Both systems are initialized with strict access control, TLS enforcement, and repeatable provisioning before application services are allowed to start.

🐇 RabbitMQ

RabbitMQ is used for:

RabbitMQ security model:

🏛 Apache Kafka

Kafka is used for:

Kafka is one of the most advanced parts of TuranDocker. It is configured with:

This prevents the most common production failures caused by weak Kafka defaults.

🛠️ RabbitMQ Topology & Routing Standard

Every application component is provisioned with a standardized, self-healing exchange-queue structure:

This configuration ensures that messaging structures are idempotent, self-verifying, and strictly isolated before the first application starts.
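The text does not spell out the exact exchange-queue standard, but one common "self-healing" shape is a durable topic exchange plus a dead-letter pair, declared idempotently before applications start. A hypothetical sketch using rabbitmqadmin; every name and argument here is an assumption:

```shell
# Hypothetical per-app topology bootstrap inside the /bilgi vhost.
rabbitmqadmin -V /bilgi declare exchange name=bilgi.core.events type=topic durable=true
rabbitmqadmin -V /bilgi declare exchange name=bilgi.core.dlx    type=fanout durable=true
rabbitmqadmin -V /bilgi declare queue    name=bilgi.core.dlq    durable=true

# Work queue dead-letters into the app's own DLX
rabbitmqadmin -V /bilgi declare queue name=bilgi.core.user.account durable=true \
  arguments='{"x-dead-letter-exchange":"bilgi.core.dlx"}'

rabbitmqadmin -V /bilgi declare binding source=bilgi.core.dlx \
  destination=bilgi.core.dlq
rabbitmqadmin -V /bilgi declare binding source=bilgi.core.events \
  destination=bilgi.core.user.account routing_key=user.account.#
```

Because `declare` is idempotent for an identical definition, re-running the bootstrap verifies the topology rather than breaking it.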

🛠️ Kafka Engineering Highlights

The Kafka setup is one of the most carefully engineered parts of the project:

This eliminates common Kafka pitfalls such as:

🌐 Edge + Traffic Layer (docker-compose.inf.yml)

TuranDocker supports both Modern Runtime Patterns and Classic Runtime Patterns, allowing the platform to run both modern microservices and legacy production workloads within the same infrastructure model. This eliminates the need for separate operational strategies for old and new systems.

Modern Runtime Pattern

    Traefik → Service → App Container

It is used for:

This model is optimized for horizontal scalability, container-native deployment, service isolation, and high-performance API workloads.

Classic Runtime Pattern

    Traefik → Nginx → PHP-FPM

It is used for:

This preserves compatibility with existing production systems without forcing premature rewrites.

Traefik (Primary Edge Router)

Traefik acts as the primary ingress layer and secure front door of the platform. It handles:

Examples:

    ms.turanbilgi.com
    core.turanbilgi.com
    interaction.turanbilgi.com
    realtime.turanbilgi.com

Traefik ensures that public traffic enters the platform through a secure, centralized, and observable edge layer.

Nginx

It is used for:

Example: ms.turanbilgi.com

Nginx provides operational flexibility for workloads that benefit from traditional web server behavior while remaining fully integrated into the platform architecture.

Optional API Gateways

Supported but profile-disabled:

This allows future evolution toward advanced gateway patterns such as:

without forcing unnecessary complexity during early infrastructure stages.

📊 Observability Layer (docker-compose.obs.yml)

Production systems fail. TuranDocker assumes that failure is normal—not exceptional. The Observability Layer exists to make failures visible, diagnosable, and recoverable. It provides monitoring, logs, tracing, alerting, and operational introspection across the entire platform.

📈 Metrics

Prometheus + Grafana

Prometheus and Grafana are used for:

This provides real-time awareness of platform behavior.

📝 Logs

Loki + Logstash + Elasticsearch + Kibana

Loki, Logstash, Elasticsearch, and Kibana are used for:

This removes the need to debug production issues by manually entering containers.

🔍 Distributed Tracing

Jaeger

Jaeger is used for:

This is especially critical for event-driven microservice systems.

🐞 Error Tracking

GlitchTip

GlitchTip is used for:

This ensures failures are surfaced immediately rather than discovered later.

🔄 Queue Monitoring

Bull Board

Bull Board is used for:

This is especially important for worker-heavy systems.

🧩 Application Layer (docker-compose.app.yml)

The Application Layer contains the actual business services of the platform. Microservices connect to the shared infrastructure via a unified network (turan-net).

Examples include:

These services are intentionally lightweight because infrastructure concerns are centralized in shared platform layers.

🔗 Unified Connectivity Model (turan-net)

All application services connect through a shared internal Docker network: turan-net. This provides:

Applications do not manage infrastructure independently. They consume platform services through controlled access models.
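A sketch of how such a shared network is typically wired up. The external-network pattern shown in the comment is standard Docker Compose practice, though the platform's actual wiring may differ:

```shell
# Idempotent creation of the shared platform network (name from the text):
# inspect succeeds if it exists; otherwise create it.
docker network inspect turan-net >/dev/null 2>&1 \
  || docker network create --driver bridge turan-net

# Each compose file can then attach to it as a pre-existing network:
#   networks:
#     turan-net:
#       external: true
```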

🚀 Deterministic Initialization + Verification

This is one of TuranDocker’s strongest design decisions. Infrastructure is not considered “ready” merely because a container started. It is only ready after:

The initialization scripts create, update, verify, and fail fast. This eliminates the “it started but doesn’t work” failure mode, the most common class of production failure. This is enterprise-grade operational behavior.

⚙ Orchestration System

Instead of relying solely on Docker Compose startup order, TuranDocker introduces a script-driven orchestration layer that controls the full infrastructure lifecycle. This ensures deterministic deployment, predictable recovery, and repeatable production operations.

run.sh — Entry Point

The entire platform lifecycle is managed through a centralized execution entry point. This controls:

This creates a single operational source of truth for infrastructure execution.

🔄 Execution Pipeline

Each infrastructure layer follows a strict and repeatable lifecycle:

    Pre → Start → Init → Verify → Post

This ensures consistent deployment across all services.
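The lifecycle can be sketched as a small driver loop. The hook locations and the run_hook/deploy_layer function names here are hypothetical, and the compose command is echoed rather than executed so the sketch stays side-effect free:

```shell
set -eu

run_hook() {                      # run a lifecycle hook if it exists
  [ -x "$1" ] && "$1" || true
}

deploy_layer() {
  layer=$1                        # db, msg, inf, obs, app
  run_hook "scripts/$layer.pre.sh"                          # Pre
  echo "docker compose -f docker-compose.$layer.yml up -d"  # Start
  run_hook "scripts/$layer.init.sh"                         # Init (idempotent)
  run_hook "scripts/$layer.verify.sh"                       # Verify (fail fast)
  run_hook "scripts/$layer.post.sh"                         # Post
}

for layer in db msg inf obs app; do
  deploy_layer "$layer"
done
```

Because `set -eu` is active, any failing hook aborts the whole pipeline, which is what makes the lifecycle fail fast rather than drifting into a half-deployed state.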

1. Pre Scripts (*.pre.sh)

Used before containers start. Responsibilities include:

Examples:

2. Compose Up

Services are started using Docker Compose. This includes:

A container is not trusted simply because it is running. Health checks must confirm operational readiness.

3. Init Scripts (*.init.sh)

Idempotent infrastructure initialization. These scripts safely support both first-time deployment and repeated execution. Examples include:

This guarantees deterministic state across deployments.

4. Verification Phase

After initialization, the system performs real access verification. This includes:

Example: A MongoDB user is not considered valid until authenticated read and write operations succeed.

This is significantly stronger than simple resource creation.
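The MongoDB example translates directly into a probe script: authenticate as the app user, then require both a write and a read to succeed before declaring the user valid. An illustrative sketch with placeholder credentials and a hypothetical probe collection name:

```shell
# Authenticated read/write verification for an app user.
# Exits non-zero (fails fast) unless both operations succeed.
mongosh "mongodb://bilgi_interaction:changeme@localhost:27017/bilgi_interaction" \
  --quiet --eval '
    const probe = db.getCollection("_init_probe");
    probe.insertOne({ check: "rw", at: new Date() });  // write must succeed
    const doc = probe.findOne({ check: "rw" });        // read must succeed
    if (!doc) { quit(1); }
    probe.drop();                                      // leave no residue
  '
```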

5. Post Scripts (*.post.sh)

Optional finalization steps. Examples include reporting, cleanup, final synchronization, deployment summaries, and operational alerts.

This orchestration model transforms Docker Compose from a container launcher into a true production infrastructure control system. That is one of the defining strengths of TuranDocker.

🧠 Why This Architecture Matters

Most Docker projects are containers first and architecture later. TuranDocker is the opposite: architecture first, containers second. It is not simply “Docker Compose”; it is an infrastructure operating model. It creates a clean separation between the Infrastructure Platform and the Business Applications, which makes the system:

This is a major reason why TuranDocker behaves like a platform rather than merely a Docker project. It is not just a deployment setup; it is an infrastructure blueprint. It demonstrates:

TuranDocker provides:

In One Sentence:
TuranDocker is a secure, isolated, production-grade infrastructure operating system for multi-domain microservice platforms built entirely on Docker Compose.

Use Cases:

Key Takeaways: