Node List
This section provides a detailed reference for every node type available in KAI-Flow.
Default Nodes

Start Node

- Category: Default
- Purpose: Entry point for workflow execution
- Inputs: None
- Outputs:
  - `trigger` - Initiates workflow
- Configuration: None required
- Usage: Every workflow should begin with a Start node

End Node

- Category: Default
- Purpose: Marks workflow completion
- Inputs:
  - `input` - Receives final data
- Outputs: None
- Configuration: None required
- Usage: Optional but recommended for workflow clarity
Agent Nodes

React Agent

- Category: Agents
- Purpose: The reasoning engine that orchestrates tool usage with ReAct (Reasoning + Acting) logic
- Inputs:
  - `llm` - Language model connection
  - `tools` - Array of tool connections
  - `memory` (optional) - Memory connection
  - `query` - User query input
- Outputs:
  - `response` - Agent's final response
  - `intermediate_steps` - Reasoning trace
- Configuration:
  - System Prompt: Define agent personality and instructions
  - Max Iterations: Limit reasoning loops
  - Return Intermediate Steps: Include reasoning in output
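The ReAct pattern behind this node can be sketched in a few lines: the agent alternates between asking the LLM for a next action (reasoning) and executing the chosen tool (acting) until the LLM answers directly or Max Iterations is reached. This is a minimal illustration, not KAI-Flow's actual implementation; `fake_llm` stands in for a real language model connection.

```python
def react_agent(query, llm, tools, max_iterations=5, return_intermediate_steps=True):
    intermediate_steps = []
    observation = None
    for _ in range(max_iterations):
        decision = llm(query, observation)   # "Reasoning": ask the LLM what to do next
        if decision["action"] == "final_answer":
            result = {"response": decision["input"]}
            if return_intermediate_steps:
                result["intermediate_steps"] = intermediate_steps
            return result
        tool = tools[decision["action"]]     # "Acting": run the selected tool
        observation = tool(decision["input"])
        intermediate_steps.append((decision, observation))
    return {"response": "Stopped: max iterations reached",
            "intermediate_steps": intermediate_steps}

# A scripted stand-in LLM: first it calls the search tool, then it answers.
def fake_llm(query, observation):
    if observation is None:
        return {"action": "search", "input": query}
    return {"action": "final_answer", "input": f"Based on: {observation}"}

tools = {"search": lambda q: f"results for '{q}'"}
out = react_agent("capital of France", fake_llm, tools)
```

Note how the `intermediate_steps` output is just the accumulated (decision, observation) trace, which is what "Return Intermediate Steps" exposes.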
LLM Nodes

OpenAI Chat

- Category: LLMs
- Purpose: Connect to OpenAI GPT models
- Inputs:
  - `messages` - Conversation messages
- Outputs:
  - `llm` - LLM instance for agents
- Configuration:
  - Credential: Select saved OpenAI API key
  - Model: Choose GPT version (gpt-4o, gpt-4o-mini, gpt-3.5-turbo, etc.)
  - Temperature: Creativity level (0-2)
  - Max Tokens: Response length limit
  - Top P: Nucleus sampling parameter
  - Streaming: Enable/disable streaming responses
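These settings map onto the standard OpenAI chat-completions request body. The sketch below shows that mapping with OpenAI's documented field names; how KAI-Flow assembles the request internally is an assumption.

```python
def build_chat_request(messages, model="gpt-4o-mini", temperature=0.7,
                       max_tokens=None, top_p=1.0, stream=False):
    payload = {
        "model": model,
        "messages": messages,        # e.g. [{"role": "user", "content": "..."}]
        "temperature": temperature,  # 0 = near-deterministic, up to 2 = very random
        "top_p": top_p,              # nucleus-sampling probability cutoff
        "stream": stream,            # token-by-token responses when True
    }
    if max_tokens is not None:       # omit to let the API use its default limit
        payload["max_tokens"] = max_tokens
    return payload

req = build_chat_request([{"role": "user", "content": "Hi"}], temperature=0)
```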
OpenAI Compatible

- Category: LLMs
- Purpose: Connect to OpenAI-compatible APIs (Ollama, local models, etc.)
- Inputs: Same as OpenAI Chat
- Outputs: Same as OpenAI Chat
- Configuration:
  - Base URL: Custom API endpoint
  - API Key: Authentication
  - Model Name: Model identifier
  - Same parameters as OpenAI Chat
Tool Nodes

Tavily Search

- Category: Tools
- Purpose: AI-optimized web search
- Inputs:
  - `query` - Search query
- Outputs:
  - `tool` - Tool instance for agents
- Configuration:
  - Credential: Tavily API key
  - Max Results: Number of results (1-10)
  - Include Answer: Return summarized answer
  - Include Raw Content: Include full page content
  - Include Images: Include image URLs
  - Search Depth: Basic or Advanced
HTTP Client

- Category: Tools
- Purpose: Make HTTP API requests
- Inputs:
  - `url` - Request URL
  - `body` - Request body (for POST/PUT)
- Outputs:
  - `tool` - Tool instance
  - `response` - API response
- Configuration:
  - Method: GET, POST, PUT, DELETE, PATCH
  - URL: Endpoint URL
  - Headers: Custom headers (JSON)
  - Body: Request body (JSON)
  - Timeout: Request timeout (seconds)
  - Authentication: None, Basic, Bearer Token
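The authentication options translate into standard HTTP request headers. The sketch below shows that translation (the header semantics are standard HTTP; how KAI-Flow assembles them is an assumption):

```python
import base64

def build_headers(auth="none", username=None, password=None, token=None,
                  custom=None):
    headers = dict(custom or {})  # start from the node's custom headers (JSON)
    if auth == "basic":
        # Basic auth: base64-encode "user:password" per RFC 7617
        creds = f"{username}:{password}".encode()
        headers["Authorization"] = "Basic " + base64.b64encode(creds).decode()
    elif auth == "bearer":
        headers["Authorization"] = f"Bearer {token}"
    return headers

h = build_headers(auth="basic", username="alice", password="secret",
                  custom={"Content-Type": "application/json"})
```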
Retriever

- Category: Tools
- Purpose: Search and retrieve documents from vector store
- Inputs:
  - `vector_store` - Vector store connection
  - `query` - Search query
- Outputs:
  - `tool` - Tool instance
  - `documents` - Retrieved documents
- Configuration:
  - Top K: Number of results to retrieve
  - Score Threshold: Minimum similarity score
  - Search Type: Similarity or MMR
Cohere Reranker

- Category: Tools
- Purpose: Re-rank search results for better relevance
- Inputs:
  - `documents` - Documents to rerank
  - `query` - Original query
- Outputs:
  - `documents` - Reranked documents
- Configuration:
  - Credential: Cohere API key
  - Model: Reranking model
  - Top N: Number of results to return
Memory Nodes

Buffer Memory

- Category: Memory
- Purpose: Store short-term conversation context
- Inputs: None (automatic)
- Outputs:
  - `memory` - Memory instance
- Configuration:
  - Memory Key: Storage identifier
  - Return Messages: Format output
  - Max Token Limit: Memory size limit
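The Max Token Limit behavior can be sketched as a buffer that evicts its oldest messages once the token budget is exceeded. Token counting here is a rough whitespace split; a real memory node would use a model tokenizer.

```python
class BufferMemory:
    def __init__(self, max_token_limit=20):
        self.max_token_limit = max_token_limit
        self.messages = []

    def add(self, role, content):
        self.messages.append((role, content))
        # Drop the oldest messages until the buffer fits the token budget again.
        while self._tokens() > self.max_token_limit and len(self.messages) > 1:
            self.messages.pop(0)

    def _tokens(self):
        # Crude stand-in for a tokenizer: count whitespace-separated words.
        return sum(len(content.split()) for _, content in self.messages)

mem = BufferMemory(max_token_limit=6)
mem.add("user", "one two three four")
mem.add("ai", "five six seven")  # pushes total to 7 tokens, evicting the first message
```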
Conversation Memory

- Category: Memory
- Purpose: Persistent chat history across sessions
- Inputs:
  - `session_id` - Unique session identifier
- Outputs:
  - `memory` - Memory instance
- Configuration:
  - Memory Key: Storage identifier
  - Session ID Field: Custom session field name
  - Window Size: Number of messages to keep
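Session-scoped history with a Window Size amounts to keeping the last N messages per `session_id`. A minimal sketch (actual persistence, e.g. a database, is elided here):

```python
from collections import defaultdict, deque

class ConversationMemory:
    def __init__(self, window_size=3):
        # Each session gets its own fixed-size window of recent messages.
        self.sessions = defaultdict(lambda: deque(maxlen=window_size))

    def add(self, session_id, message):
        self.sessions[session_id].append(message)  # oldest entry falls off when full

    def history(self, session_id):
        return list(self.sessions[session_id])

mem = ConversationMemory(window_size=2)
for msg in ["hi", "how are you?", "bye"]:
    mem.add("user-42", msg)
```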
Trigger Nodes

Timer Start

- Category: Triggers
- Purpose: Schedule workflow execution (cron jobs)
- Inputs: None
- Outputs:
  - `trigger` - Trigger event
- Configuration:
  - Cron Expression: Schedule pattern (e.g., "0 9 * * *" for 9 AM daily)
  - Timezone: Execution timezone
  - Enabled: Toggle timer on/off
- Node Controls:
  - Start button: Activate timer
  - Stop button: Deactivate timer
  - Trigger Now button: Execute immediately
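A five-field cron expression reads "minute hour day month weekday", so "0 9 * * *" matches 09:00 every day. A minimal matcher for illustration (only "*" and plain numbers are handled; real cron also supports ranges, lists, and steps, and counts Sunday as weekday 0 where Python's `weekday()` counts Monday as 0 — with "*" in that field the difference does not matter here):

```python
from datetime import datetime

def cron_matches(expr, when):
    fields = expr.split()  # minute, hour, day-of-month, month, day-of-week
    values = [when.minute, when.hour, when.day, when.month, when.weekday()]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

match = cron_matches("0 9 * * *", datetime(2024, 5, 1, 9, 0))   # 9 AM: fires
miss = cron_matches("0 9 * * *", datetime(2024, 5, 1, 10, 0))   # 10 AM: does not
```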
Webhook Trigger

- Category: Triggers
- Purpose: Start workflow from external HTTP request
- Inputs: None
- Outputs:
  - `trigger` - Trigger event
  - `body` - Webhook payload
  - `headers` - Request headers
- Configuration:
  - Webhook Path: Custom URL path
  - Method: POST, GET, PUT
  - Response Mode: Sync or Async
  - Authentication: None, API Key, Basic Auth
- Node Controls:
  - Start Listening button: Activate webhook
  - Stop button: Deactivate webhook
- Environments: Provides separate endpoints for testing (`/api/v1/webhook-test/...`) and production (`/api/v1/webhook/...`)
Respond To Webhook

- Category: Triggers
- Purpose: Send custom HTTP responses for webhook requests
- Inputs:
  - `input`, `status_code`, `response_body`, `response_headers`, `content_type`
- Outputs: None
- Configuration:
  - HTTP Status Code: e.g., 200, 201, 400, 500
  - Response Config: All Incoming Items, No Data, JSON
  - Response Body: Custom JSON or string content
  - Content-Type: application/json, text/plain, text/html
  - Custom Headers: JSON object for additional HTTP headers
- Usage: Place this terminator node at the end of a webhook-triggered workflow for full control over the HTTP response
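Putting the node's inputs together, the response it controls looks roughly like this (parameter names mirror the node's inputs; the actual response object KAI-Flow builds is an assumption):

```python
import json

def build_response(status_code=200, response_body=None,
                   content_type="application/json", response_headers=None):
    headers = dict(response_headers or {})   # start from Custom Headers
    headers["Content-Type"] = content_type
    body = response_body
    # For JSON responses, serialize non-string bodies automatically.
    if content_type == "application/json" and not isinstance(body, str):
        body = json.dumps(body)
    return {"status": status_code, "headers": headers, "body": body}

resp = build_response(201, {"ok": True}, response_headers={"X-Request-Id": "abc"})
```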
Kafka Trigger

- Category: Triggers
- Purpose: Start workflow from Kafka message events
- Inputs: None
- Outputs:
  - `trigger` - Trigger event
- Configuration:
  - Credential: Kafka connection details (SASL/Plain support)
  - Topic: Kafka topic to subscribe to
  - Group ID: Consumer group identifier
  - Advanced Options: Auto-commit intervals, fetch bytes, JSON parsing
- Node Controls:
  - Start Listening button: Activate Kafka consumer
  - Stop button: Deactivate Kafka consumer
Document Loader Nodes

Document Loader

- Category: Document Loaders
- Purpose: Load documents from various file types
- Inputs:
  - `file_path` - Path or uploaded file
- Outputs:
  - `documents` - Loaded documents
- Configuration:
  - Source Type: File Upload, URL, S3
  - File Type: PDF, TXT, CSV, DOCX, JSON
  - Chunk Documents: Split into smaller pieces
  - Metadata: Additional document info
Web Scraper

- Category: Document Loaders
- Purpose: Extract content from websites
- Inputs:
  - `urls` - Target URLs
- Outputs:
  - `documents` - Scraped content
- Configuration:
  - URLs: List of URLs to scrape
  - Include Links: Extract hyperlinks
  - Wait Time: Page load delay
  - CSS Selector: Target specific elements
  - Max Depth: Follow links depth
- Node Controls:
  - Scrape button: Start scraping
  - Stop button: Cancel operation
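The "Include Links" option boils down to pulling hyperlinks out of fetched HTML. A standard-library sketch (a real scraper would also resolve relative URLs and honor Max Depth when following them):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag encountered in the page.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<p><a href="/docs">Docs</a> and <a href="https://example.com">home</a></p>')
```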
Vector Store Nodes

Vector Store Orchestrator

- Category: Vector Stores
- Purpose: Manage PostgreSQL/pgvector operations
- Inputs:
  - `embeddings` - Embedding model connection
  - `documents` - Documents to store
- Outputs:
  - `vector_store` - Vector store instance
- Configuration:
  - Collection Name: Vector collection identifier
  - Distance Strategy: Cosine, Euclidean, or Dot Product
  - Pre-Delete Collection: Clear before insert
  - Index Configuration: HNSW parameters
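The three Distance Strategy options correspond to standard vector formulas; pgvector computes them in the database, but the math is the same:

```python
import math

def dot(a, b):
    # Dot Product: larger means more aligned (unnormalized).
    return sum(x * y for x, y in zip(a, b))

def euclidean(a, b):
    # Euclidean: straight-line distance; smaller means closer.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # Cosine distance: 1 - cosine similarity; 0 = same direction, 2 = opposite.
    norm = math.sqrt(dot(a, a)) * math.sqrt(dot(b, b))
    return 1.0 - dot(a, b) / norm

a, b = [1.0, 0.0], [0.0, 1.0]  # orthogonal vectors
```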
Processing Nodes

Kafka Producer

- Category: Processing
- Purpose: Send messages to a Kafka topic
- Inputs:
  - `input` - Message content (takes priority over the Message property)
- Outputs:
  - `output` - Message delivery metadata (topic, partition, offset)
- Configuration:
  - Credential: Kafka connection details (SASL/Plain support)
  - Topic: Target Kafka topic
  - Message: Default message content
  - Key: Message key
  - Headers: Message header key/value pairs
  - Additional Options: Acks (delivery guarantees), Compression (gzip), Timeout
  - JSON Encode: Automatically wrap non-dictionary/list values in JSON
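The JSON Encode option can be sketched as follows: dictionaries and lists serialize to JSON directly, and with the option enabled, bare values get wrapped so the topic always receives valid JSON. (The wrapping key `"value"` is an illustrative assumption, not necessarily KAI-Flow's actual key.)

```python
import json

def encode_message(value, json_encode=True):
    if isinstance(value, (dict, list)):
        return json.dumps(value)            # already a JSON-shaped structure
    if json_encode:
        return json.dumps({"value": value})  # wrap bare values (hypothetical key)
    return str(value)                        # send as plain text otherwise

wrapped = encode_message("hello")
plain = encode_message("hello", json_encode=False)
```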
Code Node

- Category: Processing
- Purpose: Execute custom Python or JavaScript code
- Inputs:
  - `input` - Data from the connected node (accessible as `node_data`)
- Outputs:
  - `output` - Execution result
- Configuration:
  - Language: Select Python or JavaScript
  - Code: Code editor for the selected language
  - Timeout: Execution time limit
  - Continue on Error: Continue workflow if code fails
  - Enable Validation: Validate code for dangerous operations
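Conceptually, a code node runs the user's snippet with `node_data` bound in its namespace and takes whatever the snippet assigns to `output` as the node's result. A bare-bones sketch of that contract (sandboxing, validation, and the timeout are elided; this is not KAI-Flow's actual executor):

```python
def run_user_code(code, node_data, continue_on_error=False):
    namespace = {"node_data": node_data}  # upstream data visible to the snippet
    try:
        exec(code, namespace)
    except Exception as exc:
        if continue_on_error:
            return {"error": str(exc)}    # swallow the failure, keep the workflow going
        raise
    return {"output": namespace.get("output")}  # whatever the snippet assigned

result = run_user_code("output = [x * 2 for x in node_data]", [1, 2, 3])
```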
Condition Node

- Category: Processing
- Purpose: Conditional routing based on evaluations
- Inputs:
  - `input` - Value to evaluate
- Outputs:
  - `true` - Route when condition matches
  - `false` - Route when condition fails
- Configuration:
  - Input Value: Value to check
  - Operation: Contains, NotContains, EndsWith, Equal, NotEqual, StartsWith, Regex, IsEmpty, NotEmpty
  - Compare Value: Value to compare against
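Each listed operation maps onto a simple predicate over the input value and the compare value; a sketch of how such routing might work (the actual evaluation logic in KAI-Flow is an assumption):

```python
import re

OPERATIONS = {
    "Contains":    lambda v, c: c in v,
    "NotContains": lambda v, c: c not in v,
    "Equal":       lambda v, c: v == c,
    "NotEqual":    lambda v, c: v != c,
    "StartsWith":  lambda v, c: v.startswith(c),
    "EndsWith":    lambda v, c: v.endswith(c),
    "Regex":       lambda v, c: re.search(c, v) is not None,
    "IsEmpty":     lambda v, c: not v,       # compare value unused
    "NotEmpty":    lambda v, c: bool(v),     # compare value unused
}

def route(value, operation, compare=None):
    # Returns the name of the output to follow: "true" or "false".
    return "true" if OPERATIONS[operation](value, compare) else "false"

branch = route("hello world", "StartsWith", "hello")
```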
Embedding Nodes

OpenAI Embeddings

- Category: Embeddings
- Purpose: Generate vector embeddings for text
- Inputs:
  - `text` - Text to embed
- Outputs:
  - `embeddings` - Embedding model instance
- Configuration:
  - Credential: OpenAI API key
  - Model: text-embedding-ada-002, text-embedding-3-small, etc.
  - Dimensions: Embedding vector size
Splitter Nodes

Chunk Splitter

- Category: Splitters
- Purpose: Split documents into smaller chunks for embedding
- Inputs:
  - `documents` - Documents to split
- Outputs:
  - `chunks` - Split document chunks
- Configuration:
  - Chunk Size: Maximum characters per chunk
  - Chunk Overlap: Overlap between chunks
  - Separator: Split character/pattern
  - Split Type: Character, Token, Sentence
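Character-based splitting, the simplest of the split types, works like this: each chunk is at most Chunk Size characters and repeats the last Chunk Overlap characters of its predecessor so context survives the cut. A minimal sketch:

```python
def split_text(text, chunk_size=10, chunk_overlap=3):
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    # Each new chunk starts chunk_size - chunk_overlap characters after the last.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)
            if text[i:i + chunk_size]]

chunks = split_text("abcdefghijklmno", chunk_size=6, chunk_overlap=2)
```

Note how each chunk's first two characters repeat the previous chunk's tail, which is the overlap doing its job.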
Text Processing Nodes

String Input

- Category: Text Processing
- Purpose: Provide static or templated text input
- Inputs: None
- Outputs:
  - `text` - Output text
- Configuration:
  - Text Content: The text to output
  - Template Variables: Use `${{var}}` syntax for dynamic content
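The `${{var}}` template syntax amounts to replacing each placeholder with a value from a variables mapping. A regex-based sketch (how KAI-Flow resolves variables internally is an assumption; unknown placeholders are left untouched here):

```python
import re

def render_template(text, variables):
    def replace(match):
        name = match.group(1)
        # Fall back to the original placeholder if the variable is unknown.
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\$\{\{(\w+)\}\}", replace, text)

rendered = render_template("Hello ${{name}}, it is ${{day}}.",
                           {"name": "Ada", "day": "Tuesday"})
```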