# Getting Started

Welcome to Caskada! This framework helps you build powerful, modular AI applications using a simple yet expressive abstraction based on nested directed graphs.
## 1. Installation

First, ensure you have Caskada installed:

```bash
pip install caskada
```

```bash
npm install caskada # or pnpm/yarn
```

For more installation options, see the Installation Guide.
## 2. Core Concepts

Caskada is built around a minimalist yet powerful abstraction that separates data flow from computation:

- **Node**: The fundamental building block that performs a single task with a clear lifecycle (`prep` → `exec` → `post`).
- **Flow**: Orchestrates nodes in a directed graph, supporting branching, looping, and nesting.
- **Memory**: Manages state, separating it into a shared `global` store and a forkable `local` store for isolated data flow between nodes.
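To build intuition for how these three pieces fit together, here is a tiny, dependency-free Python sketch of the `prep` → `exec` → `post` contract and action-based transitions. This is only an illustration of the idea, not Caskada's implementation: real Caskada nodes are async, and `Memory` is a richer object than the plain dict used here. The names `SketchNode`, `run_flow`, `Shout`, and `Reverse` are invented for this example.

```python
class SketchNode:
    """Minimal stand-in for a node: prep reads, exec computes, post writes."""

    def __init__(self):
        self.successors = {}  # action string -> next node

    def next(self, node, action="default"):
        self.successors[action] = node
        return node

    def prep(self, memory):
        """Read whatever this node needs from shared memory."""
        return None

    def exec(self, prep_res):
        """Pure computation; no access to shared memory."""
        return None

    def post(self, memory, prep_res, exec_res):
        """Write results back and return an action to pick the next node."""
        return "default"


def run_flow(start, memory):
    """Walk the graph: the action returned by post selects the successor."""
    node = start
    while node is not None:
        prep_res = node.prep(memory)
        exec_res = node.exec(prep_res)
        action = node.post(memory, prep_res, exec_res) or "default"
        node = node.successors.get(action)


class Shout(SketchNode):
    def prep(self, memory):
        return memory["text"]

    def exec(self, text):
        return text.upper() + "!"

    def post(self, memory, prep_res, exec_res):
        memory["text"] = exec_res


class Reverse(SketchNode):
    def prep(self, memory):
        return memory["text"]

    def exec(self, text):
        return text[::-1]

    def post(self, memory, prep_res, exec_res):
        memory["text"] = exec_res


shout, reverse = Shout(), Reverse()
shout.next(reverse)  # default-action transition, like >> in Caskada

memory = {"text": "hello"}
run_flow(shout, memory)
print(memory["text"])  # !OLLEH
```

Because `exec` never touches shared memory, each node's computation stays trivially testable in isolation; the same separation is what the real framework enforces.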
## 3. Your First Flow

Let's build a simple question-answering flow to demonstrate Caskada's core concepts:

### Step 1: Design Your Flow

Our flow will have two nodes:

- **GetQuestionNode**: Captures the user's question
- **AnswerNode**: Generates an answer using an LLM

```mermaid
graph LR
    A[GetQuestionNode] --> B[AnswerNode]
```

### Step 2: Implement the Nodes
```python
import asyncio

from caskada import Node, Flow, Memory
from utils import call_llm  # Your LLM implementation


class GetQuestionNode(Node):
    async def prep(self, memory):
        """Get text input from the user."""
        memory.question = input("Enter your question: ")


class AnswerNode(Node):
    async def prep(self, memory):
        """Extract the question from memory."""
        return memory.question

    async def exec(self, question: str | None):
        """Call the LLM to generate an answer."""
        prompt = f"Answer the following question: {question}"
        return await call_llm(prompt)

    async def post(self, memory, prep_res: str | None, exec_res: str):
        """Store the answer in memory."""
        memory.answer = exec_res
        print(f"AnswerNode: Stored answer '{exec_res}'")
```

```typescript
import { Flow, Memory, Node } from 'caskada'
import { input } from '@inquirer/prompts'
import { callLLM } from './utils/callLLM'

// Define interfaces for Memory stores (optional but good practice)
interface QAGlobalStore {
  question?: string
  answer?: string
}

class GetQuestionNode extends Node<QAGlobalStore> {
  async prep(memory: Memory<QAGlobalStore>): Promise<void> {
    memory.question = await input({ message: 'Enter your question: ' })
  }
}

class AnswerNode extends Node<QAGlobalStore> {
  async prep(memory: Memory<QAGlobalStore>): Promise<string | undefined> {
    return memory.question
  }

  async exec(question: string | undefined): Promise<string> {
    const prompt = `Answer the following question: ${question}`
    return await callLLM(prompt)
  }

  async post(memory: Memory<QAGlobalStore>, prepRes: string | undefined, execRes: string): Promise<void> {
    memory.answer = execRes
    console.log(`AnswerNode: Stored answer '${execRes}'`)
  }
}
```

### Step 3: Connect the Nodes into a Flow
```python
from .nodes import GetQuestionNode, AnswerNode  # defined in the previous step
from caskada import Flow


def create_qa_flow():
    get_question_node = GetQuestionNode()
    answer_node = AnswerNode()

    # Connect get_question_node → answer_node using the default action
    get_question_node >> answer_node  # >> is Pythonic syntax sugar for .next(node)

    # Create the Flow, specifying the starting node
    return Flow(start=get_question_node)
```

```typescript
// import { GetQuestionNode, AnswerNode } from './nodes'; // defined in the previous step
import { Flow } from 'caskada'

function createQaFlow(): Flow {
  const getQuestionNode = new GetQuestionNode()
  const answerNode = new AnswerNode()

  // Connect getQuestionNode → answerNode using the default action
  getQuestionNode.next(answerNode)

  // Create the Flow, specifying the starting node
  return new Flow(getQuestionNode)
}
```

### Step 4: Run the Flow
```python
import asyncio

from .flow import create_qa_flow  # defined in the previous step


async def main():
    memory = {}  # Initialize empty memory (which acts as the global store)
    qa_flow = create_qa_flow()

    print("Running QA Flow...")

    # Run the flow, passing the initial global store.
    # The flow modifies the memory object in place.
    # The run method returns the final execution tree (we ignore it here).
    await qa_flow.run(memory)

    # Access the results stored in the global store
    print("\n--- Flow Complete ---")
    print(f"Question: {memory['question']}")
    print(f"Answer: {memory['answer']}")


if __name__ == '__main__':
    asyncio.run(main())
```

```typescript
import { createQaFlow, QAGlobalStore } from './flow' // defined in the previous steps

async function main() {
  // Initialize the global store (can be an empty object)
  const globalStore: QAGlobalStore = {}
  const qaFlow = createQaFlow()

  console.log('Running QA Flow...')

  // Run the flow, passing the initial global store.
  // The flow modifies the globalStore object in place.
  // The run method returns the final execution tree (we ignore it here).
  await qaFlow.run(globalStore)

  // Access the results stored in the global store
  console.log('\n--- Flow Complete ---')
  console.log(`Question: ${globalStore.question ?? 'N/A'}`)
  console.log(`Answer: ${globalStore.answer ?? 'N/A'}`)
}

main().catch(console.error)
```

## 4. Key Design Principles
Caskada follows these core design principles:

- **Separation of Concerns**: Data storage (the `memory` object managing global/local stores) is separate from computation logic (`Node` classes).
- **Explicit Data Flow**: Data dependencies between steps are clear and traceable through `memory` access in `prep`/`post` and the results passed between `prep` → `exec` → `post`.
- **Composability**: Complex systems (`Flow`s) are built from simple, reusable components (`Node`s), and Flows themselves can be nested within other Flows.
- **Minimalism**: The framework provides only essential abstractions (`Node`, `Flow`, `Memory`), avoiding vendor-specific implementations or excessive boilerplate.
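Composability is easy to see in miniature. The following dependency-free Python sketch (not Caskada code; `Step`, `Pipeline`, `Add`, and `Double` are invented names) shows why giving a flow the same interface as a node lets graphs nest arbitrarily:

```python
class Step:
    """A single unit of work over a shared memory dict."""

    def run(self, memory):
        raise NotImplementedError


class Pipeline(Step):
    """A sequence of Steps that is itself a Step, so pipelines nest."""

    def __init__(self, *steps):
        self.steps = steps

    def run(self, memory):
        for step in self.steps:
            step.run(memory)


class Add(Step):
    def __init__(self, n):
        self.n = n

    def run(self, memory):
        memory["x"] += self.n


class Double(Step):
    def run(self, memory):
        memory["x"] *= 2


inner = Pipeline(Add(1), Double())  # a small flow...
outer = Pipeline(inner, Add(10))    # ...nested inside a bigger one

memory = {"x": 0}
outer.run(memory)
print(memory["x"])  # (0 + 1) * 2 + 10 = 12
```

Because the outer pipeline never needs to know whether a step is a single node or a whole sub-flow, complex graphs stay assembled from small, independently testable pieces.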
## 5. Next Steps

Now that you understand the basics, explore these resources to build sophisticated applications:

- **Core Abstractions**: Dive deeper into nodes, flows, and communication
- **Design Patterns**: Learn more complex patterns like Agents, RAG, and MapReduce
- **Agentic Coding Guide**: Best practices for human-AI collaborative development

If you prefer, jump straight into our example projects: