Getting Started

Product presentation

What is the BullSequana AI software product?

Platform Architecture Overview

The platform is designed as a layered architecture that separates infrastructure, AI capabilities, enterprise data services, and business applications. This modular design allows organizations to adopt the platform progressively while maintaining strong operational foundations and scalability.

Each layer builds on the capabilities of the layer beneath it:

  • Runtime - the operational foundation that provides infrastructure, security, and platform services
  • CoreAI - foundational AI capabilities and developer tools for building intelligent systems
  • ProAI - enterprise data, analytics, and large-scale AI operations
  • Use Cases - production AI applications that deliver business value

Runtime

The Runtime layer provides the foundational infrastructure and operational services that power the entire platform. It ensures that AI workloads run securely, reliably, and at scale across environments.

This layer includes the core platform capabilities required to manage networking, compute resources, security, and operational workflows. It abstracts the complexity of distributed infrastructure so that higher layers can focus on building intelligent systems.

Key capabilities include:

  • Secure networking and API access
  • Identity, authentication, and access control
  • Secrets and configuration management
  • GPU orchestration and model inference runtime
  • Event-driven workflows and orchestration engines
  • Data persistence and artifact storage
  • Observability, logging, and telemetry
  • CI/CD and platform automation
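
As a purely illustrative sketch of the first three capabilities working together, the snippet below builds an authenticated inference request against a hypothetical Runtime endpoint. The URL, header names, and payload shape are assumptions for the example, not the product's documented API.

```python
import json
import urllib.request

# Placeholder endpoint -- not an actual BullSequana AI URL.
RUNTIME_API = "https://runtime.example.internal/v1/inference"

def build_inference_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build an authenticated inference request against a hypothetical Runtime API.

    The path, headers, and payload fields are illustrative assumptions.
    """
    payload = json.dumps({"model": model, "input": prompt}).encode("utf-8")
    return urllib.request.Request(
        RUNTIME_API,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # identity and access control
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_inference_request("llama-3-8b", "Hello", token="demo-token")
```

In a real deployment the token would come from the platform's secrets management rather than being passed around in code.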

The Runtime layer also provides the core inference infrastructure used by higher layers. AI services and models deployed through CoreAI ultimately execute on the Runtime layer's compute and orchestration capabilities.

Because of its foundational role, Runtime can be adopted independently by organizations that want to operate their own AI infrastructure and runtime environment.

CoreAI

The CoreAI layer provides the central AI platform capabilities used by developers, data scientists, and AI engineers to build, deploy, and operate AI-powered systems.

CoreAI expands on the infrastructure provided by the Runtime layer by introducing developer-facing AI services, model management tools, and unified access to AI providers.

This layer enables teams to:

  • Access AI services through unified APIs and development interfaces
  • Manage models and AI assets through centralized registries
  • Deploy and operate large language models and AI agents
  • Route requests through LLM proxy services
  • Build retrieval augmented generation (RAG) pipelines
  • Integrate vector search and semantic retrieval
  • Monitor model usage, performance, and costs
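
The LLM-proxy routing idea in the list above can be sketched as a prefix-based dispatcher: one entry point picks a backend by model-name prefix. The provider prefixes and URLs below are placeholders, not real BullSequana AI endpoints.

```python
# Hypothetical backend registry -- names and URLs are invented for the sketch.
BACKENDS = {
    "openai/": "https://api.openai.example/v1",
    "local/": "https://runtime.internal.example/v1",
}

def route(model: str) -> str:
    """Return the backend base URL for a model, preferring the longest matching prefix."""
    for prefix in sorted(BACKENDS, key=len, reverse=True):
        if model.startswith(prefix):
            return BACKENDS[prefix]
    raise ValueError(f"no backend registered for model {model!r}")

print(route("local/llama-3-8b"))  # https://runtime.internal.example/v1
```

A production proxy would also handle authentication, retries, and usage metering, which is where the monitoring capability above comes in.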

CoreAI also acts as an integration bridge between AI systems and enterprise data platforms. Through mechanisms such as MCP servers and tool integrations, CoreAI services can securely connect to data platforms provided in the ProAI layer.

For example:

  • CoreAI agents can query enterprise datasets managed by ProAI
  • RAG pipelines can retrieve embeddings and documents from ProAI data stores
  • AI assistants can “speak to your data” by securely accessing analytics platforms
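
A minimal, self-contained sketch of the retrieval step in such a RAG pipeline, using toy three-dimensional embeddings in place of a ProAI-managed vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, store, k=2):
    """Return the texts of the k documents most similar to the query vector."""
    ranked = sorted(store, key=lambda doc: cosine(query_vec, doc["embedding"]), reverse=True)
    return [doc["text"] for doc in ranked[:k]]

# Stand-in for documents and embeddings held in an enterprise data store.
store = [
    {"text": "GPU scheduling guide", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Quarterly sales report", "embedding": [0.0, 0.2, 0.9]},
    {"text": "Model deployment howto", "embedding": [0.8, 0.3, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], store))
```

In the real pipeline the embeddings would come from a model served on the Runtime layer and the store would be a dedicated vector database, but the ranking logic is the same idea.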

All AI workloads deployed through CoreAI ultimately rely on the Runtime layer for inference execution and operational orchestration.

ProAI

The ProAI layer introduces enterprise-grade data engineering, streaming, and analytics capabilities that support large-scale AI systems.

While CoreAI focuses on building and deploying AI capabilities, ProAI provides the data platforms and processing pipelines that power those capabilities.

This layer enables organizations to build data-driven AI systems by providing:

  • Data ingestion pipelines and ETL workflows
  • Real-time event streaming and message processing
  • Analytical data platforms for large-scale querying
  • Data warehousing and lakehouse architectures
  • Business intelligence and visualization tools
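
The ingest-transform-load pattern behind the first item can be illustrated with an in-memory SQLite table standing in for an analytical platform. The record shapes are invented for the example.

```python
import sqlite3

# Toy ingest -> transform -> load pipeline.
raw_events = [
    {"user": "a", "ms": 1200},
    {"user": "b", "ms": None},  # invalid record, dropped during transform
    {"user": "a", "ms": 800},
]

def transform(events):
    """Drop invalid rows and convert milliseconds to seconds."""
    return [(e["user"], e["ms"] / 1000) for e in events if e["ms"] is not None]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (user TEXT, seconds REAL)")
db.executemany("INSERT INTO sessions VALUES (?, ?)", transform(raw_events))

# Analytical query over the loaded data.
total = db.execute("SELECT SUM(seconds) FROM sessions WHERE user = 'a'").fetchone()[0]
print(total)
```

A real pipeline would replace the list with a streaming source and SQLite with a warehouse or lakehouse engine, but the stages map one-to-one.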

ProAI serves as the data foundation for AI applications. AI services developed in CoreAI can leverage this layer to access structured datasets, historical records, real-time streams, and analytical insights.

Typical integrations between layers include:

  • CoreAI agents retrieving contextual information from ProAI data systems
  • RAG pipelines using enterprise datasets stored in ProAI
  • AI assistants querying analytics platforms to generate insights
  • Use case applications built on top of enterprise data pipelines

By combining CoreAI and ProAI, organizations can build AI systems that are deeply integrated with their enterprise data ecosystems.

Use Cases

The Use Cases layer represents the business-facing AI applications built on top of the platform.

These solutions combine the infrastructure provided by Runtime, the AI capabilities provided by CoreAI, and the enterprise data systems provided by ProAI to deliver production-ready AI applications.

Typical examples include:

  • Conversational AI assistants
  • Voice and speech processing applications
  • Intelligent document processing
  • AI-powered workflow automation
  • Domain-specific copilots and knowledge assistants
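
As a toy illustration of the intelligent document processing use case, the sketch below extracts structured fields from free-form invoice text. The field names and formats are assumptions made up for the example.

```python
import re

INVOICE = """
Invoice No: INV-2041
Date: 2024-03-18
Total: EUR 1,250.00
"""

def extract_fields(text: str) -> dict:
    """Pull a few structured fields out of free-form invoice text."""
    patterns = {
        "number": r"Invoice No:\s*(\S+)",
        "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
        "total": r"Total:\s*([A-Z]{3} [\d,\.]+)",
    }
    return {k: m.group(1) for k, p in patterns.items() if (m := re.search(p, text))}

print(extract_fields(INVOICE))
```

A production system would use layout-aware models rather than regular expressions, but the output shape (normalized fields ready for a data platform) is the same.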

Use case applications typically rely on ProAI data platforms for context and knowledge, while leveraging CoreAI services for reasoning, generation, and orchestration.

Because these applications run on the underlying platform layers, they benefit from the operational capabilities provided by Runtime, such as scalability, security, and observability.

Organizations adopting the full stack - Runtime + CoreAI + ProAI + Use Cases - gain a complete AI platform capable of supporting the entire lifecycle of enterprise AI solutions, from infrastructure to production applications.

Layer Interaction

While the platform is organized into distinct layers, the components are designed to work together as an integrated system. Each layer builds on the capabilities of the layer beneath it and exposes services that can be consumed by the layers above it. This structure allows the platform to remain modular while still enabling powerful cross-layer integrations.

At a high level, the interaction flow follows a clear progression:

Runtime -> CoreAI -> ProAI -> Use Cases

Each step expands the capabilities available to the platform and ultimately enables the delivery of production-ready AI solutions.

Runtime -> CoreAI

The Runtime layer provides the operational foundation that CoreAI relies on to execute AI workloads.

CoreAI services do not run independently; they use the Runtime infrastructure for:

  • model inference execution on CPU and GPU resources
  • workflow orchestration and event-driven processing
  • API exposure and networking
  • identity, authentication, and security enforcement
  • logging, monitoring, and operational telemetry
  • artifact storage and model persistence

In practice, this means that when a developer deploys an AI model, agent, or RAG pipeline through CoreAI, the actual compute, scheduling, and runtime execution are handled by the Runtime layer.
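
That division of labor can be mimicked with a tiny event dispatcher, where handlers standing in for CoreAI services subscribe to events the runtime emits. The event names are made up for the sketch.

```python
from collections import defaultdict

# Registry of event -> subscribed handlers.
handlers = defaultdict(list)

def on(event):
    """Decorator that subscribes a handler to an event."""
    def decorator(fn):
        handlers[event].append(fn)
        return fn
    return decorator

def emit(event, payload):
    """Dispatch an event to every subscribed handler, collecting results."""
    return [fn(payload) for fn in handlers[event]]

@on("model.deployed")
def warm_cache(payload):
    return f"warmed cache for {payload['model']}"

@on("model.deployed")
def notify(payload):
    return f"notified team about {payload['model']}"

print(emit("model.deployed", {"model": "summarizer-v2"}))
```

An actual orchestration engine adds durability, retries, and scheduling on real compute, but the subscribe-and-dispatch shape is the core of event-driven processing.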

CoreAI -> ProAI

The CoreAI layer integrates with the ProAI layer to enable AI systems to interact with enterprise data.

CoreAI provides the intelligence layer, including LLMs, agents, and retrieval pipelines. ProAI provides the structured data systems, streaming platforms, and analytical engines that supply the data used by those AI systems.

Integration between these layers typically happens through:

  • MCP servers and tool interfaces that expose ProAI data systems to AI agents
  • retrieval pipelines where CoreAI accesses vector stores and document datasets managed in ProAI
  • AI assistants that query enterprise data platforms to generate insights or reports
  • event-driven data flows where ProAI pipelines trigger AI workflows
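
The tool-interface pattern in the first item can be sketched as a small registry through which an agent invokes a named query over a stand-in dataset. None of these names come from the product; they are illustrative only.

```python
# Registry of tools an agent is allowed to call by name.
TOOLS = {}

def tool(name):
    """Register a callable as an agent-invocable tool."""
    def decorator(fn):
        TOOLS[name] = fn
        return fn
    return decorator

# Stand-in for a dataset managed by an enterprise data platform.
ORDERS = [
    {"region": "EMEA", "amount": 1200},
    {"region": "APAC", "amount": 800},
    {"region": "EMEA", "amount": 300},
]

@tool("total_sales")
def total_sales(region: str) -> int:
    """Sum order amounts for one region -- the 'query enterprise data' step."""
    return sum(o["amount"] for o in ORDERS if o["region"] == region)

def agent_call(name, **kwargs):
    """How an agent would dispatch a tool call it decided to make."""
    return TOOLS[name](**kwargs)

print(agent_call("total_sales", region="EMEA"))  # 1500
```

A real MCP server adds schemas, transport, and access control around the same idea: the agent sees named tools, not raw database connections.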

This connection enables a powerful pattern often described as "AI speaking to your data", where AI services can securely access enterprise datasets to provide contextualized responses and automated insights.

ProAI -> Use Cases

The Use Cases layer consumes capabilities from both CoreAI and ProAI to deliver production-ready AI applications.

While CoreAI provides reasoning, generation, and orchestration capabilities, ProAI provides the contextual data that makes these applications useful in real-world scenarios.

Typical patterns include:

  • AI assistants answering questions using enterprise knowledge bases
  • analytics copilots generating insights from data warehouses
  • document processing systems extracting and enriching data stored in analytics platforms
  • intelligent automation pipelines triggered by real-time event streams

By combining the intelligence of CoreAI with the enterprise data capabilities of ProAI, the platform enables organizations to build applications that are both AI-driven and data-aware.

End-to-End Flow

When all layers are combined, the platform enables a full lifecycle for enterprise AI solutions:

  • Runtime provides the secure infrastructure and operational environment.
  • CoreAI enables developers to build and deploy AI models, agents, and retrieval systems.
  • ProAI provides enterprise data pipelines, analytics platforms, and event streams.
  • Use Cases deliver real business applications powered by AI and enterprise data.

This layered approach allows organizations to adopt the platform incrementally while maintaining a consistent architecture as their AI capabilities grow.
