TuranBilgi Business Platform:
API-First Distributed Microservices Architecture

(July 2025 - Present)

TuranBilgi is a modern, API-first, multi-client business platform designed to support complex domain workflows alongside high-performance real-time communication. The platform follows a distributed microservices architecture with event-driven coordination, enabling independent scaling, fault isolation, and long-term extensibility across backend services and client applications.

Platform Overview & Architectural Focus

TuranBilgi is designed as a unified backend system serving multiple client applications through a shared API layer. All business logic, authentication, authorization, and real-time capabilities are centralized within the platform, ensuring consistent behavior across all consuming clients while allowing internal services to evolve independently. The architecture emphasizes long-term platform sustainability rather than short-term feature delivery.

Platform Microservices

The TuranBilgi platform consists of five domain microservices, each focused on a specific area of platform functionality and coordinating with the others through events or synchronous API calls.

  1. Core Business Microservice
  2. Interaction & Discussion Microservice
  3. Payment, Billing & Financial Processing Microservice
  4. Real-Time Communication, Notification & Audit Processing Microservice
  5. AI Microservice

1. Core Business Microservice

The Core Business microservice is implemented using Node.js (Express.js with TypeScript) and exposes both REST and GraphQL APIs. It serves as the primary orchestration and transactional domain service of the platform, coordinating long-running business workflows, enforcing business rules, and managing the lifecycle of core business entities.

This microservice does not replace domain ownership of other services, but instead acts as the central workflow coordinator and external-integration boundary, ensuring that domain services remain decoupled from infrastructure and third-party concerns.

Core Responsibilities:

Business Workflow Orchestration

The Core Business microservice supports a collaborative and moderated workflow where users and administrators interact to create, refine, and finalize ideas, projects, contracts, invoices, and support tickets. It manages multi-step, stateful workflows that span multiple domains and user roles:

  1. Idea Management & Discussion:
    • Users or prospective clients submit ideas into the system.
    • Moderator administrators review submissions and may propose refinements or initiate new idea threads.
    • All discussions and refinements are handled via the Interaction microservice, supporting commenting, refinement, and collaborative iteration until an idea reaches maturity.
  2. Project Creation & Collaboration:
    • Once an idea and its discussion thread are finalized, administrators can create a project and associated applications based on the root idea and the agreed-upon discussion thread.
    • Project details and applications are further discussed and refined through the Interaction microservice until consensus is reached.
  3. Contract Management:
    • After project agreement, contracts are created and managed within the Core Business microservice.
    • Contract terms are discussed between administrators and the associated users via the Interaction microservice.
    • Once agreed, contracts transition to an immutable state, locking associated project definitions and obligations.
  4. Invoice Generation & Payment Coordination:
    • Administrators generate invoices over time based on agreed contracts.
    • Users pay invoices, while invoice details and payment-related discussions continue through the Interaction microservice.
    • Payment execution and provider-specific integrations are owned exclusively by the Payment microservice, which operates independently and is intentionally isolated for security and compliance reasons.
  5. Support Ticket Management:
    • Users create support tickets within the Core Business microservice.
    • Support administrators review, respond, and resolve tickets.
    • Ticket discussions occur through the Interaction microservice until resolution, after which tickets are closed, the resolutions are recorded, and the tickets are archived.
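The multi-step, stateful workflows above can be sketched as small state machines. The following is an illustrative sketch of the contract lifecycle only; the state names, `contractTransitions` table, and `transition` helper are assumptions for illustration, not the platform's actual model.

```typescript
// Hypothetical contract lifecycle: once a contract reaches "immutable",
// the associated project definitions and obligations are locked.
type ContractState = "draft" | "under_discussion" | "agreed" | "immutable";

// Allowed transitions per state; an empty list means the state is terminal.
const contractTransitions: Record<ContractState, ContractState[]> = {
  draft: ["under_discussion"],
  under_discussion: ["agreed", "draft"],
  agreed: ["immutable"],
  immutable: [], // locked: no further changes permitted
};

function transition(current: ContractState, next: ContractState): ContractState {
  if (!contractTransitions[current].includes(next)) {
    throw new Error(`illegal transition ${current} -> ${next}`);
  }
  return next;
}
```

Encoding the legal transitions in a table keeps business-rule enforcement in one place, so an attempt to modify an immutable contract fails fast instead of silently corrupting workflow state.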

Authentication & Account Management

In addition to business workflows, the Core Business microservice is responsible for:

Event-Driven Integration & External Side Effects

The Core Business microservice acts as the central gateway for external asynchronous integrations, while preserving a fully event-driven internal architecture.

External integrations are intentionally not handled directly by domain services. Instead:

This design prevents infrastructure concerns from leaking into domain services and ensures consistent handling of external systems.

Infrastructure Components:

Architectural Rationale:

This approach ensures:

2. Interaction & Discussion Microservice (Shared Collaboration Layer)

To support structured communication, decision-making, and accountability across all business domains, TuranBilgi includes a dedicated Interaction & Discussion microservice. This service implements a unified, reusable interaction model that enables threaded discussions, contextual replies, moderation workflows, and historical traceability for all major business entities.

Rather than embedding isolated comment systems within individual services, TuranBilgi centralizes all collaborative interactions into a shared domain layer, ensuring consistency, scalability, and governance.

The microservice is implemented using NestJS with TypeScript and exposes both REST and GraphQL interfaces. It operates in close coordination with the Core Business microservice through an event-driven Saga-based architecture.

Shared Interaction Model:

All major platform entities require structured communication and collaborative workflows:

Entity          Purpose                 Discussion Support
Idea            Brainstorming           Yes
Support Ticket  Problem Solving         Yes
Project         Coordination            Yes
Project App     Requirements Analysis   Yes
Contract        Negotiation             Yes
Invoice         Billing Questions       Yes
Payment         Dispute Resolution      Yes

These entities share common interaction requirements:

This unified design is referred to internally as the Shared Interaction Model.

Core Responsibilities:

The Interaction & Discussion microservice operates independently from the Core Business API and focuses exclusively on collaborative workflows. Key responsibilities include:

  1. Thread & Comment Management
    • Creation and management of discussion threads
    • Hierarchical reply structures
    • Entity-agnostic attachment of discussions
    • Rich metadata support (status, visibility, ownership)
    • Soft deletion and version history
  2. Moderation & Governance
    • User and administrator moderation tools
    • Reporting and review workflows
    • Content visibility control
    • Policy enforcement
    • Abuse detection and escalation pipelines
  3. Accountability & Auditability
    • Immutable activity logs
    • Per-action attribution
    • Historical change tracking
    • Legal and compliance support
  4. API & Developer Interfaces

    The service exposes:

    • REST APIs with OpenAPI (Swagger) documentation
    • GraphQL APIs with Apollo Sandbox support
    • JWT-secured endpoints for client and service access

    This dual-interface strategy enables frontend teams to select the most appropriate interaction model for each client application.

Data Management:

The interaction subsystem is optimized for high-volume, write-heavy collaboration workloads.

All discussions are linked to business entities through a generic entity reference model, enabling flexible cross-domain associations without schema coupling.
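The generic entity reference model can be sketched as follows. The names (`EntityRef`, `DiscussionThread`, `attachThread`) are hypothetical; the point is that threads are keyed by an opaque `(entityType, entityId)` pair rather than by foreign keys into another service's schema.

```typescript
// Illustrative generic entity reference: any domain entity can carry a
// discussion thread without schema coupling to the Interaction service.
type EntityType = "idea" | "project" | "contract" | "invoice" | "payment" | "support_ticket";

interface EntityRef {
  entityType: EntityType; // which domain owns the entity
  entityId: string;       // opaque identifier, never dereferenced here
}

interface DiscussionThread {
  threadId: string;
  subject: EntityRef;     // the business entity this thread is attached to
  createdBy: string;
}

const threads = new Map<string, DiscussionThread>();
const refKey = (r: EntityRef) => `${r.entityType}:${r.entityId}`;

function attachThread(subject: EntityRef, createdBy: string): DiscussionThread {
  const thread = { threadId: `thr_${threads.size + 1}`, subject, createdBy };
  threads.set(refKey(subject), thread);
  return thread;
}

function threadFor(subject: EntityRef): DiscussionThread | undefined {
  return threads.get(refKey(subject));
}
```

Because the reference is opaque, adding a new discussable entity type is a one-line change to `EntityType` rather than a schema migration in the Interaction service.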

Event Integration:

The service integrates with the platform’s event-driven infrastructure.

This ensures synchronization between business workflows and interaction state.

Architectural Benefits:

The Interaction & Discussion microservice provides:

It functions as the platform’s unified collaboration and discourse layer.

3. Payment, Billing & Financial Processing Microservice

To support secure, scalable, and regulation-compliant financial operations, TuranBilgi includes a dedicated Payment, Billing & Financial Processing microservice. This microservice handles payment intents, billing workflows, PSP integrations, settlement reconciliation, and fraud prevention, isolating financial execution from core business logic and maintaining strong transactional boundaries.

Rather than embedding payment logic directly within the Core Business API, TuranBilgi isolates all monetary processing into a specialized domain service. This separation ensures strong security boundaries, regulatory compliance, independent scalability, and fault isolation for all financial operations.

The microservice is implemented using NestJS with TypeScript and exposes REST APIs for transactional workflows and webhook integrations. It operates in close coordination with the Core Business microservice through an event-driven Saga-based architecture.

Architectural Role:

The Payment microservice functions as the platform’s financial execution and settlement layer. It is responsible for:

All financial state transitions are managed locally and synchronized with other services via Kafka events, avoiding cross-service database coupling.

Core Responsibilities:

1. Payment Orchestration & Processing

The microservice manages the full lifecycle of invoice payments, including:

All payment operations are executed within database transactions and coordinated through the outbox pattern to guarantee consistency.
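The transactional outbox pattern mentioned above can be sketched in miniature. This in-memory version is an assumption-laden illustration: in the real service, the domain row and the outbox row share one database transaction, and a relay process publishes to Kafka.

```typescript
// Minimal in-memory sketch of the transactional outbox pattern.
interface OutboxRecord {
  id: number;
  topic: string;
  payload: unknown;
  published: boolean;
}

const payments: { id: string; status: string }[] = [];
const outbox: OutboxRecord[] = [];

// Domain write and outbox insert happen in one atomic step (here, one call);
// in production both writes share a single database transaction.
function recordPaymentSucceeded(paymentId: string): void {
  payments.push({ id: paymentId, status: "succeeded" });
  outbox.push({
    id: outbox.length + 1,
    topic: "payments.payment-succeeded",
    payload: { paymentId },
    published: false,
  });
}

// A separate relay drains unpublished records and marks them published only
// after the broker acknowledges, giving at-least-once delivery guarantees.
function drainOutbox(publish: (topic: string, payload: unknown) => void): number {
  let sent = 0;
  for (const rec of outbox) {
    if (!rec.published) {
      publish(rec.topic, rec.payload);
      rec.published = true;
      sent++;
    }
  }
  return sent;
}
```

The key property is that a payment record can never exist without its corresponding event intent, and vice versa, even if the broker is temporarily unavailable.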

2. External Provider Integration Layer

A dedicated adapter layer abstracts third-party payment providers and normalizes their APIs into a unified internal interface. This layer supports:

This design enables seamless migration or multi-provider support without impacting upstream business logic.
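The adapter layer can be illustrated with a shared interface and two stand-in providers. The provider names and the `PaymentProvider` shape are invented for this sketch; real adapters wrap each PSP's SDK and normalize its responses.

```typescript
// Upstream business logic depends only on PaymentProvider, so switching
// providers (or running several side by side) never touches workflow code.
interface ChargeResult {
  providerRef: string;
  status: "authorized" | "declined";
}

interface PaymentProvider {
  charge(amountMinor: number, currency: string): ChargeResult;
}

// Each concrete adapter normalizes one PSP's API to the shared interface.
class FakeProviderA implements PaymentProvider {
  charge(amountMinor: number, currency: string): ChargeResult {
    return { providerRef: `a_${currency}_${amountMinor}`, status: "authorized" };
  }
}

class FakeProviderB implements PaymentProvider {
  charge(amountMinor: number, currency: string): ChargeResult {
    return { providerRef: `b-${currency}-${amountMinor}`, status: "authorized" };
  }
}

const providers: Record<string, PaymentProvider> = {
  providerA: new FakeProviderA(),
  providerB: new FakeProviderB(),
};

function payInvoice(provider: string, amountMinor: number): ChargeResult {
  return providers[provider].charge(amountMinor, "TRY");
}
```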

3. Webhook & Settlement Processing

The microservice exposes secure webhook endpoints to receive asynchronous confirmations and settlement notifications from PSPs. Key responsibilities include:

Webhook processing pipelines are isolated from public APIs to prevent external interference with transactional state.
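A webhook intake pipeline of this kind typically combines signature verification with idempotent processing, since PSPs retry deliveries. The following sketch assumes an HMAC-SHA256 signing scheme and an `eventId` field; each real provider defines its own header names and signature format.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const WEBHOOK_SECRET = "test-secret"; // injected from configuration in production

function sign(body: string): string {
  return createHmac("sha256", WEBHOOK_SECRET).update(body).digest("hex");
}

// Constant-time comparison avoids leaking signature bytes via timing.
function verifySignature(body: string, signature: string): boolean {
  const expected = Buffer.from(sign(body));
  const received = Buffer.from(signature);
  return expected.length === received.length && timingSafeEqual(expected, received);
}

// Providers retry deliveries, so each event ID is applied at most once.
const processedEvents = new Set<string>();

function handleWebhook(body: string, signature: string): "ok" | "duplicate" | "rejected" {
  if (!verifySignature(body, signature)) return "rejected";
  const event = JSON.parse(body) as { eventId: string };
  if (processedEvents.has(event.eventId)) return "duplicate";
  processedEvents.add(event.eventId);
  // ...apply the settlement/confirmation to local payment state here...
  return "ok";
}
```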

4. Financial Event Production & Saga Coordination

The Payment microservice participates in distributed business workflows using event-driven Saga choreography. It emits domain events such as:

These events are published via the MongoDB Outbox and Kafka pipeline and consumed by the Core Business and Real-Time services to update invoice states, trigger notifications, and generate audit records. No direct database access between services is permitted, preserving domain isolation.

5. Reconciliation & Reporting Subsystem

To ensure long-term financial accuracy and regulatory compliance, the microservice includes automated reconciliation pipelines. These processes support:

This subsystem provides resilience against message loss, provider outages, and partial failures.

6. Fraud Prevention & Compliance Support

The payment service incorporates security and compliance controls, including:

Sensitive card data is never stored within platform databases and is handled exclusively by certified PSPs.

API & Integration Interfaces:

The Payment microservice exposes specialized APIs optimized for transactional reliability.

Supported Interfaces

GraphQL is intentionally excluded from payment execution paths to reduce complexity and ensure deterministic transactional behavior.

Data Management:

The microservice maintains its own isolated relational database optimized for financial workloads.

All monetary records are immutable and append-only, ensuring auditability and traceability.

Event Integration:

The microservice is tightly integrated with TuranBilgi’s event-driven infrastructure.

This bidirectional integration enables consistent financial synchronization across distributed domains.

Distributed Reliability & Consistency:

To guarantee financial correctness in a distributed environment, the Payment microservice employs:

These mechanisms ensure consistency without requiring distributed transactions.
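On the consuming side, at-least-once delivery implies that handlers must tolerate redelivery. A minimal sketch of an idempotent consumer, with an invented event shape and handler, looks like this:

```typescript
// Kafka may redeliver events after rebalances or crashes, so handlers
// deduplicate on eventId, making reprocessing a harmless no-op.
interface DomainEvent {
  eventId: string;
  type: string;
  payload: unknown;
}

const invoiceStatus = new Map<string, string>();
const seen = new Set<string>();

// Returns true if the event was applied, false if it was a duplicate.
function consume(event: DomainEvent): boolean {
  if (seen.has(event.eventId)) return false;
  if (event.type === "payment.succeeded") {
    const { invoiceId } = event.payload as { invoiceId: string };
    invoiceStatus.set(invoiceId, "paid");
  }
  seen.add(event.eventId);
  return true;
}
```

In production the `seen` set would live in the consumer's own database, committed in the same transaction as the state change, so dedup survives restarts.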

Architectural Benefits:

The Payment, Billing & Financial Processing microservice provides:

It functions as the platform’s dedicated financial execution and settlement authority while remaining loosely coupled to core business workflows.

4. Real-Time Communication, Notification & Audit Processing Microservice

A dedicated NestJS and Socket.IO–based microservice is responsible for managing all real-time communication, notification delivery, and centralized audit projections across the distributed platform, ensuring responsive client experiences and operational traceability. The service maintains persistent WebSocket connections and exposes REST and GraphQL APIs to support scalable, low-latency client interactions.

Rather than acting as a centralized event authority, this microservice operates as a specialized consumer and processor within a federated, domain-driven event architecture. Each domain service owns its own event production and outbox mechanisms, while this service focuses on real-time delivery, notification orchestration, and audit aggregation.

Core Responsibilities:

1. Real-Time Chat & Collaboration System

The microservice provides a full-featured, managed chat and collaboration subsystem supporting both private and room-based communication.

Administrative & Management APIs

Through REST and GraphQL interfaces, administrators can:

These management capabilities enable controlled, enterprise-grade collaboration environments.

Messaging Features

The chat subsystem supports:

All messaging operations are synchronized across distributed gateways using Kafka and Redis to ensure consistency and fault tolerance.

2. Notification Management System

The platform implements a flexible, event-driven notification system supporting both manual and automated workflows. This microservice acts as the primary notification orchestration and delivery engine, consuming domain events emitted by distributed services and transforming them into user-facing notifications.

Notification Types
Notification Creation Modes

A. Manual (Admin-Driven)
Administrators can:

B. System-Generated (Event-Driven)
Distributed domain services emit notification-related events via their own MongoDB Outbox and Kafka publishing mechanisms for business events such as:

These events are published to Kafka by the originating services and consumed by the notification processors for delivery.

Notification Interaction Tracking

The notification subsystem supports:

This enables accurate notification lifecycle management and user engagement tracking.
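Interaction tracking of this kind can be sketched as per-notification state with read/unread queries. The state names and helpers below are illustrative, not the service's actual API.

```typescript
// Hypothetical notification lifecycle: created -> delivered -> read.
type NotificationState = "created" | "delivered" | "read";

interface Notification {
  id: string;
  userId: string;
  message: string;
  state: NotificationState;
  readAt?: Date;
}

const inbox = new Map<string, Notification>();

function deliver(id: string, userId: string, message: string): Notification {
  const n: Notification = { id, userId, message, state: "delivered" };
  inbox.set(id, n);
  return n;
}

// Read receipts drive both UI badges and engagement metrics.
function markRead(id: string): void {
  const n = inbox.get(id);
  if (n && n.state === "delivered") {
    n.state = "read";
    n.readAt = new Date();
  }
}

function unreadCount(userId: string): number {
  let count = 0;
  for (const n of inbox.values()) {
    if (n.userId === userId && n.state !== "read") count++;
  }
  return count;
}
```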

3. Centralized Audit Log Aggregation via Event Streaming

This microservice serves as the centralized audit projection and compliance authority for the TuranBilgi platform.

All domain microservices emit standardized audit events as part of their own transactional workflows using the MongoDB Outbox pattern. These events are reliably published to Kafka through service-local outbox processors, ensuring atomicity, durability, and failure isolation without cross-service coupling.

The Realtime microservice acts as the sole consumer and persistence owner of audit events. It consumes audit-related Kafka topics, validates and enriches incoming events, and persists them into an immutable, append-only audit log store.

Core responsibilities include:

This design enforces clear ownership boundaries: domain services are responsible only for emitting audit intent, while audit persistence, retention, and compliance guarantees are centralized. The approach preserves domain autonomy, avoids distributed transactions, and enables scalable, fault-tolerant audit processing through event-driven architecture.
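The append-only projection can be sketched as follows. Field names are assumptions; the essential property is that the store only ever appends frozen records and answers compliance queries over them.

```typescript
// Audit events arrive from service-local outboxes via Kafka; this projection
// appends them immutably and never updates or deletes existing records.
interface AuditEvent {
  eventId: string;
  service: string;    // originating domain service
  actorId: string;
  action: string;
  occurredAt: string; // ISO timestamp set by the producer
}

const auditLog: AuditEvent[] = [];

function project(event: AuditEvent): void {
  // Freeze a copy so downstream readers cannot mutate persisted history.
  auditLog.push(Object.freeze({ ...event }));
}

function historyFor(actorId: string): AuditEvent[] {
  return auditLog.filter((e) => e.actorId === actorId);
}
```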

Distributed Scalability & Coordination:

To support multi-instance deployments and horizontal scaling:

This infrastructure enables independent scaling of producers and consumers while maintaining consistent behavior across clusters.

Architectural Benefits:

This microservice is designed to scale independently from core business and interaction microservices and provides:

It functions as the platform’s real-time delivery and compliance projection layer, rather than as a centralized event authority.

5. AI Microservice (Intelligent Extension Layer)

To support advanced intelligence, automation, and knowledge-driven features, TuranBilgi includes an optional AI Microservice that extends the platform’s capabilities into natural language understanding, semantic search, content summarization, and assisted workflows.

The AI microservice is developed using Python and built on the FastAPI framework, enabling high-performance asynchronous processing and scalable API-driven intelligence services. This service operates as an independent extension layer within the microservice architecture, allowing AI capabilities to evolve separately from the core business services.

Purpose

The AI microservice is designed as a loosely coupled, language-agnostic intelligence layer that consumes business events (such as idea creation, discussion updates, and support ticket changes) and produces knowledge artifacts that enhance user experience and platform insight. It does not replace existing services but augments them with machine-assisted intelligence.

By isolating AI workloads into a dedicated Python-based service, TuranBilgi ensures:

This architecture enables TuranBilgi to progressively introduce intelligent features without compromising the stability or modularity of the core platform.

Key Responsibilities

The AI microservice provides:

  1. Semantic Embedding Generation — Creating vector representations of text content from discussions, tickets, and ideas for similarity search and retrieval.
  2. Content Summarization — Producing concise summaries of long discussion threads or support histories to improve readability and decision-making.
  3. Semantic Search & Matching — Enabling users and services to find related content using meaning-based search queries (not just keyword matching).
  4. RAG-style Assistant Responses — Combining retrieved content with language models to answer questions in context (e.g., “What are the main issues in idea X?”).
  5. Optional Moderation Support — Assisting with detection of content quality issues or policy violations. (Future expansion)
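Semantic matching over embeddings reduces to similarity ranking in vector space. The sketch below uses tiny hand-made vectors as stand-ins for real model embeddings, which in TuranBilgi would be produced by the Python/FastAPI service.

```typescript
// Cosine similarity: 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface IndexedDoc {
  id: string;
  embedding: number[];
}

// Rank stored documents by similarity to the query embedding, so results
// reflect meaning rather than keyword overlap.
function semanticSearch(query: number[], docs: IndexedDoc[]): IndexedDoc[] {
  return [...docs].sort(
    (x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding),
  );
}
```

A production deployment would replace the linear scan with an approximate nearest-neighbor index once the corpus grows, but the ranking principle is the same.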

Architecture Integration

The AI microservice operates independently from the other microservices but integrates through the platform’s event infrastructure:

This separation maintains microservice boundaries, avoids direct coupling to the database store of other microservices, and enables independent scaling of AI workloads.

Benefits

Adding the AI microservice to the platform enables:

Future-Ready Approach

While the core platform handles transactional and collaboration workloads, the AI microservice is designed for extensibility into:

This aligns with TuranBilgi’s long-term extensibility goals while preserving clear service boundaries.

Security & Authentication

TuranBilgi adopts a shared JWT-based authentication model across all microservices to ensure consistent and secure identity propagation throughout the system.

The Core Business service acts as the primary identity authority, issuing signed JWT access tokens after successful authentication. Client applications include these tokens in subsequent requests to other microservices, which independently validate the token signature and claims without requiring centralized session storage.

This approach provides:

JWT expiration policies and claim validation mechanisms enforce time-bound access while preserving scalability and performance.

This model balances strong security guarantees with operational simplicity, avoiding the need for centralized session management or complex gateway-level token orchestration.
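The stateless validation flow can be illustrated with a hand-rolled HS256 token. This is an educational sketch only; the secret, claim names, and helpers are placeholders, and a vetted library (e.g. `jose` or `jsonwebtoken`) should be used in production.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (data: string): string => Buffer.from(data).toString("base64url");

// Issue a signed token carrying claims plus a time-bound exp claim.
function signToken(claims: Record<string, unknown>, secret: string, ttlSeconds: number): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(
    JSON.stringify({ ...claims, exp: Math.floor(Date.now() / 1000) + ttlSeconds }),
  );
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}

// Each microservice validates independently: recompute the signature, then
// check expiry. No shared session store is consulted.
function verifyToken(token: string, secret: string): Record<string, unknown> | null {
  const [header, payload, signature] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
  if (typeof claims.exp !== "number" || claims.exp < Date.now() / 1000) return null;
  return claims;
}
```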

Testing Strategy & Quality Assurance

TuranBilgi follows a structured, multi-layered testing strategy designed to ensure reliability, security, and long-term scalability across its distributed microservice architecture.

Testing is implemented using modern, ecosystem-appropriate frameworks (such as Jest, Vitest, SuperTest, MSW, and language-native testing tools) depending on service requirements. This layered testing strategy supports safe refactoring, reduces regression risk, and ensures consistent behavior in a distributed environment.

Testing Layers

All services follow clearly defined validation layers to ensure correctness, resilience, and predictable system behavior. The testing strategy includes:

Backend Microservices (Node.js)

AI Microservice (FastAPI – Python)

The AI microservice is tested using:

Test Environment Architecture

Testing environments are fully isolated from production and are provisioned using containerized infrastructure. They include:

Integration tests run against real, isolated service dependencies to accurately simulate production interactions without impacting live systems. Unit tests mock messaging clients, while integration tests validate real message flows against isolated broker instances, providing realistic validation of event-driven communication. This approach yields deterministic, reproducible results across local development and continuous integration (CI) environments.

CI Pipeline Strategy

Each microservice runs automated tests in the CI pipeline on:

Pipelines enforce:

No service can be deployed unless it passes all automated quality gates.

Security Validation

Security is integrated into the testing lifecycle through:

This ensures resilience against common attack vectors and misconfigurations.

Why This Testing Strategy

This testing strategy was chosen to:

By aligning testing tools with each technology stack (NestJS, Express/Fastify, FastAPI), TuranBilgi achieves both architectural clarity and operational stability. This approach balances developer velocity with strict quality controls, enabling rapid innovation without compromising system stability.

Containerization, Deployment, Backup & Observability

The entire ecosystem is containerized under the TuranDocker project. This includes the Core Business, Interaction, Payment, and Real-Time microservices, PostgreSQL, MongoDB, Redis, Apache Kafka, RabbitMQ, reverse proxies (Nginx), and all observability and monitoring tools. This setup enables fast, repeatable deployments on any Linux VPS, remains fully operational with Docker Compose or Docker Swarm, and provides a strong foundation for future orchestration with Kubernetes or other container platforms.

Backup Strategy

To ensure data durability and disaster recovery, TuranBilgi implements automated backup pipelines for critical data stores:

These mechanisms guarantee that production data remains protected, restorable, and resilient against operational faults or disasters.

Observability & Monitoring

TuranBilgi adopts a comprehensive observability and monitoring strategy to provide deep visibility into system behavior, performance, and reliability across its distributed landscape. Observability goes beyond basic monitoring by correlating metrics, logs, and error events to enable rapid diagnosis and root cause analysis in complex deployments.

The observability platform includes:

Together, these components provide a unified observability surface that enhances operational confidence, reduces time to resolution, and supports continuous performance optimization as the platform evolves.

Orchestration Strategy & Migration Path

TuranBilgi is intentionally designed with orchestration portability in mind. While the platform currently operates efficiently using Docker Compose or Docker Swarm, its container boundaries, stateless service design, externalized configuration, and volume management patterns ensure a smooth migration path to Kubernetes when scaling requirements justify it.

Phase 1 — Docker Compose (Current / Early Stage)

Ideal for:

Benefits:

At this stage, Kubernetes would add unnecessary operational weight.

Phase 2 — Docker Swarm (Optional Intermediate Stage)

Suitable when:

Swarm provides:

However, Swarm remains simpler than Kubernetes and is often sufficient for medium-scale production workloads.

Phase 3 — Kubernetes (When It Becomes Necessary)

Migration to Kubernetes becomes justified when TuranBilgi experiences one or more of the following:

Kubernetes introduces production-grade orchestration capabilities such as:

The migration can be executed incrementally:
  1. Ensure all services:

    • Are fully stateless (except databases/brokers)
    • Use environment-based configuration
    • Store state only in volumes or external systems
  2. Convert Docker Compose services into:

    • Kubernetes Deployments
    • Services (ClusterIP)
    • Ingress resources
    • ConfigMaps & Secrets
  3. Migrate infrastructure components:

    • PostgreSQL → StatefulSet
    • MongoDB → StatefulSet
    • Kafka/RabbitMQ → Operator-based deployment (recommended)
  4. Introduce:

    • Horizontal Pod Autoscalers
    • Liveness and readiness probes
    • Resource limits and requests
  5. Gradually transition traffic from VPS deployment to Kubernetes cluster.

Because TuranBilgi already enforces clean container boundaries, this migration would be structural rather than architectural. Infrastructure evolution does not require service redesign, preserving long-term architectural stability.

API Gateway Considerations

API Gateways (e.g., Kong, KrakenD, Nginx Plus, Ambassador) provide centralized request routing, authentication, rate limiting, caching, and observability at the edge.

For TuranBilgi, an API Gateway is not necessary in early stages because:

When an API Gateway Becomes Justified:

Migration Path:

  1. Deploy the gateway in front of existing microservices without changing service internals.
  2. Move TLS termination, authentication enforcement, and request routing to the gateway incrementally.
  3. Integrate gateway-specific features (rate limiting, caching, analytics) as traffic grows.
  4. Continue leveraging observability stack for detailed service-level metrics while the gateway provides edge-level insights.

Adding a gateway is incremental and should occur only when operational complexity or external exposure grows beyond what direct service calls can safely and efficiently manage.

Service Mesh Considerations (Consul Connect / Istio)

Service meshes provide:

However, they introduce:

When a Service Mesh Is NOT Necessary

For TuranBilgi, a service mesh is not required when:

In early and mid-stage deployments, adding Istio or Consul Connect would be architectural overengineering.

When a Service Mesh Becomes Justified

For TuranBilgi, a service mesh may become beneficial if:

At that stage:

Strategic Deployment Path for TuranBilgi

Current Stage:

Scaling Stage (Future):

TuranBilgi’s architecture is intentionally modular and container-native, ensuring that orchestration upgrades can occur without redesigning core services.

Architectural Philosophy

TuranBilgi prioritizes:

This approach ensures that infrastructure evolves in response to real operational needs rather than theoretical scalability assumptions.

Future Extensions: Serverless & Micro-Frontends

While TuranBilgi currently uses a microservice architecture, future extensions may incorporate Serverless and Micro-Frontend patterns:

These patterns are not required at MVP, but become meaningful and beneficial as the platform grows, teams scale, and feature sets become more independent.

Client Applications

TuranBilgi is a multi-client platform designed to serve different user experiences through a shared backend infrastructure. Client applications include:

All client applications consume the same TuranBilgi APIs (REST, GraphQL, and WebSockets), ensuring: