# Open WebUI Component

A user-friendly web interface for interacting with LLMs, with RAG support, multiple LLM backends (vLLM, Ollama), and optional S3 storage.

## Architecture

- **Open WebUI**: chat interface and RAG
- **Pipelines**: custom processing (port 9099)
- **LLM Backend**: vLLM or Ollama
- **Vector DB**: Qdrant for RAG

## Quick Reference

**REQUIRED** = must be defined by the user.

| Attribute | Example | Default | Effect |
|---|---|---|---|
| `namespace` (REQUIRED) | `open-webui` | - | K8s namespace |
| `global_version` (REQUIRED) | `3.1.8` | - | Helm chart version |
| `service_port` (REQUIRED) | `8080` | `8080` | Container port |
| `image_version` | `latest` | `latest` | Image tag |
| `replicas` | `1` | `1` | Pod replica count |
| `pvc` | `true` | `false` | Enable PVC |
| `pvc_size` | `2Gi` | `2Gi` | Storage size |
| `pipelines_enabled` | `true` | `true` | Enable Pipelines |
| `websocket_enabled` | `false` | `false` | WebSocket support (needs Redis) |
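A minimal component definition using these attributes might look like the following. The surrounding structure (top-level key, file layout) is an assumption for illustration; only the attribute names and values come from the table above.

```yaml
# Hypothetical component definition -- attribute names from the Quick
# Reference table; the enclosing structure is assumed, not the exact schema.
open-webui:
  namespace: open-webui        # REQUIRED: K8s namespace
  global_version: "3.1.8"      # REQUIRED: Helm chart version
  service_port: 8080           # REQUIRED: container port
  image_version: latest
  replicas: 1
  pvc: true                    # enable persistent storage (default: false)
  pvc_size: 2Gi
  pipelines_enabled: true
  websocket_enabled: false     # requires Redis/Valkey when true
```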

## Link Variables (16 Total)

| Variable | Link Type | Generated ENV |
|---|---|---|
| `__vllm` | `open-webui-vllm` | `OPENAI_API_BASE_URLS` |
| `__vllm_model` | `open-webui-model` | Per-model endpoints |
| `__vllm_chat_model` | (filtered) | Chat models only |
| `__vllm_embedding_model` | (filtered) | `RAG_EMBEDDING_MODEL`, `RAG_OPENAI_API_BASE_URL` |
| `__ollama` | `open-webui-ollama` | `OLLAMA_BASE_URLS` |
| `__qdrant` | `open-webui-qdrant` | `VECTOR_DB=qdrant`, `QDRANT_URI` |
| `__reldb` | `open-webui-postgresql` | `DATABASE_URL` |
| `__swbucket` | `open-webui-swbucket` | `S3_ENDPOINT_URL`, `S3_BUCKET_NAME`, `S3_ACCESS_KEY_ID` |
| `__valkey` | `open-webui-valkey` | `WEBSOCKET_MANAGER=redis`, `REDIS_URL` |
| `__langflow` | `open-webui-langflow` | Pipelines integration |
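As an example, linking a Qdrant component might be declared as follows. The declaration syntax and the target component name are assumptions; the generated environment variables come from the table above.

```yaml
# Hypothetical link declaration (syntax and target name assumed):
open-webui:
  __qdrant: my-qdrant          # link type: open-webui-qdrant
# Auto-configures in secret/cloud.env:
#   VECTOR_DB=qdrant
#   QDRANT_URI=<linked Qdrant service URL>
```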

## Model Type Filtering

vLLM model sub-components carry a `model_type` attribute:

- `model_type: chat` models appear in the chat dropdown
- `model_type: embedding` models are used for RAG embeddings

`RAG_EMBEDDING_ENGINE` is set to `openai` when an embedding model is linked.
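A sketch of how the two model types might be declared, assuming the same attribute-based structure as the main component (sub-component names and layout are hypothetical):

```yaml
# Hypothetical vLLM model sub-components (names and structure assumed):
chat-model:
  model_type: chat             # shown in the chat model dropdown
embedding-model:
  model_type: embedding        # used for RAG embeddings
# Linking the embedding model also sets (per the tables above):
#   RAG_EMBEDDING_ENGINE=openai
#   RAG_EMBEDDING_MODEL=<model name>
#   RAG_OPENAI_API_BASE_URL=<model endpoint>
```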

## Generated Files

| File | Condition | Contains |
|---|---|---|
| `helm/helm-values.yaml` | Always | Helm chart values |
| `secret/cloud.env` | Always | All auto-configured ENVs |
| `configmap/langflow-pipeline.yaml` | `__langflow` linked | Python pipeline code |
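To illustrate how the Quick Reference attributes could map into the generated `helm/helm-values.yaml`, a sketch along these lines is plausible. The key names here are assumptions based on common Helm chart conventions, not the actual Open WebUI chart schema.

```yaml
# Illustrative sketch of helm/helm-values.yaml (key names assumed;
# the actual chart's values schema governs the real file):
replicaCount: 1          # from: replicas
image:
  tag: latest            # from: image_version
persistence:
  enabled: true          # from: pvc
  size: 2Gi              # from: pvc_size
pipelines:
  enabled: true          # from: pipelines_enabled
```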

## Ports

| Port | Purpose | Condition |
|---|---|---|
| `8080` | Web UI | Always |
| `9099` | Pipelines API | `pipelines_enabled=true` |