Building AI workflows in n8n: Integration patterns and best practices
Combining n8n's automation capabilities with AI services creates powerful opportunities for intelligent automation. From AI-powered customer support to automated content generation, these integrations are transforming how companies operate.
This guide covers practical patterns for building robust, AI-powered n8n workflows.
AI integration architecture in n8n
┌─────────────────────────────────────────────────────────────┐
│ n8n Workflow │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Trigger │ → │ Prepare │ → │ AI Node │ → │ Process │ │
│ │ │ │ Context │ │ │ │ Response │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ ↑ │ │ │ │
│ │ ↓ ↓ ↓ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Memory / Context Store │ │
│ └────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
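Read left to right, the diagram is a four-stage pipeline: a trigger arrives, context is prepared, the model is called, and the response is post-processed, with a memory store feeding and receiving state at each step. A schematic sketch of the same flow as plain async functions (all names here are illustrative, none are n8n APIs):

```javascript
// Illustrative only: the four workflow stages from the diagram,
// composed as async functions. None of these names are n8n APIs.
async function handleRequest(trigger, contextStore) {
  const context = await prepareContext(trigger, contextStore); // Prepare Context
  const aiOutput = await callModel(context);                   // AI Node
  const response = processResponse(aiOutput);                  // Process Response
  await contextStore.save(trigger.sessionId, response);        // Memory / Context Store
  return response;
}

// Minimal stand-ins so the sketch runs end to end.
async function prepareContext(trigger, store) {
  const history = (await store.load(trigger.sessionId)) || [];
  return { history, message: trigger.message };
}

async function callModel(context) {
  // Placeholder for the real AI call.
  return { text: `echo: ${context.message}` };
}

function processResponse(aiOutput) {
  return { role: 'assistant', content: aiOutput.text };
}
```

In a real workflow each stage is its own node; the point of the sketch is that the AI call is just one stage, with context preparation and response processing doing most of the reliability work.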
Pattern 1: AI customer support bot
Workflow overview
{
"name": "AI Customer Support Bot",
"nodes": [
{
"name": "Webhook Trigger",
"type": "n8n-nodes-base.webhook",
"parameters": {
"path": "support-bot",
"httpMethod": "POST"
}
},
{
"name": "Load Conversation History",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "executeQuery",
"query": "SELECT * FROM conversations WHERE session_id = '{{ $json.session_id }}' ORDER BY created_at DESC LIMIT 10"
}
},
{
"name": "Build Context",
"type": "n8n-nodes-base.code",
"parameters": {
"jsCode": "// Build conversation context\nconst history = $input.all();\nconst userMessage = $('Webhook Trigger').first().json.message;\n\nconst messages = history.map(h => ({\n role: h.json.role,\n content: h.json.content\n}));\n\nmessages.push({ role: 'user', content: userMessage });\n\nreturn [{ json: {\n messages,\n session_id: $('Webhook Trigger').first().json.session_id,\n user_message: userMessage\n}}];"
}
},
{
"name": "OpenAI Chat",
"type": "@n8n/n8n-nodes-langchain.openAi",
"parameters": {
"model": "gpt-4",
"messages": {
"values": [
{
"role": "system",
"content": "You are a helpful customer support agent for TechCorp. Be concise, friendly, and helpful. If you can't help with something, offer to connect the user with a human agent."
}
]
},
"options": {
"temperature": 0.7,
"maxTokens": 500
}
}
},
{
"name": "Save Conversation",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "insert",
"table": "conversations",
"columns": "session_id, role, content, created_at"
}
},
{
"name": "Detect Escalation Need",
"type": "n8n-nodes-base.code",
"parameters": {
"jsCode": "const response = $input.first().json.text;\nconst escalationTriggers = [\n 'speak to human',\n 'talk to agent',\n 'not helpful',\n 'frustrated',\n 'complaint'\n];\n\nconst userMessage = $('Build Context').first().json.user_message.toLowerCase();\nconst needsEscalation = escalationTriggers.some(t => userMessage.includes(t));\n\nreturn [{ json: {\n response,\n needs_escalation: needsEscalation,\n session_id: $('Build Context').first().json.session_id\n}}];"
}
},
{
"name": "Route Response",
"type": "n8n-nodes-base.if",
"parameters": {
"conditions": {
"boolean": [
{
"value1": "={{ $json.needs_escalation }}",
"value2": true
}
]
}
}
},
{
"name": "Send to Human Queue",
"type": "n8n-nodes-base.slack",
"parameters": {
"channel": "#support-escalations",
"text": "Customer needs human assistance\nSession: {{ $json.session_id }}"
}
},
{
"name": "Return Response",
"type": "n8n-nodes-base.respondToWebhook",
"parameters": {
"respondWith": "json",
"responseBody": "={{ { response: $json.response, escalated: $json.needs_escalation } }}"
}
}
]
}
Advanced context management
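The RAG-style context builder below calls a `searchKnowledgeBase` helper that n8n does not provide out of the box. A minimal sketch, assuming documents have been pre-embedded elsewhere — the embeddings model, the `loadKnowledgeBaseDocs` helper, and the overall shape are assumptions, not part of the workflow above:

```javascript
// Hypothetical helper: embed the query, then rank stored documents
// by cosine similarity. loadKnowledgeBaseDocs is assumed to return
// [{ id, content, embedding }] from wherever the docs live.
async function searchKnowledgeBase(query, topK = 3) {
  // 1. Embed the user message via the OpenAI embeddings endpoint.
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: 'text-embedding-3-small', input: query })
  });
  const queryVector = (await res.json()).data[0].embedding;

  // 2. Rank candidate docs by cosine similarity and keep the top K.
  const docs = await loadKnowledgeBaseDocs();
  return docs
    .map(d => ({ ...d, score: cosineSimilarity(queryVector, d.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

At scale you would push the similarity search into the database (e.g. pgvector) instead of ranking in the Code node.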
// Code node: Advanced context building with RAG
const buildEnhancedContext = async () => {
const userMessage = $('Webhook Trigger').first().json.message;
const sessionId = $('Webhook Trigger').first().json.session_id;
// Load conversation history
const history = $('Load Conversation History').all();
// Search knowledge base for relevant context
const relevantDocs = await searchKnowledgeBase(userMessage);
// Build system prompt with context
const systemPrompt = `You are a helpful customer support agent for TechCorp.
RELEVANT KNOWLEDGE BASE CONTENT:
${relevantDocs.map(d => d.content).join('\n\n')}
GUIDELINES:
- Be concise and helpful
- Reference specific documentation when available
- Offer to escalate to human support if needed
- Never make up information not in the knowledge base`;
// Build messages array
const messages = [
{ role: 'system', content: systemPrompt },
...history.map(h => ({ role: h.json.role, content: h.json.content })),
{ role: 'user', content: userMessage }
];
return [{
json: {
messages,
session_id: sessionId,
relevant_docs: relevantDocs.map(d => d.id)
}
}];
};
return await buildEnhancedContext();
Pattern 2: Automated content generation pipeline
Content generation workflow
{
"name": "AI Content Generation Pipeline",
"nodes": [
{
"name": "Schedule Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"parameters": {
"rule": {
"interval": [{ "field": "hours", "hoursInterval": 6 }]
}
}
},
{
"name": "Fetch Content Ideas",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "executeQuery",
"query": "SELECT * FROM content_ideas WHERE status = 'pending' ORDER BY priority DESC LIMIT 1"
}
},
{
"name": "Research Topic",
"type": "n8n-nodes-base.httpRequest",
"parameters": {
"url": "https://api.serper.dev/search",
"method": "POST",
"body": {
"q": "={{ $json.topic }} latest news trends"
}
}
},
{
"name": "Generate Outline",
"type": "@n8n/n8n-nodes-langchain.openAi",
"parameters": {
"model": "gpt-4",
"messages": {
"values": [
{
"role": "system",
"content": "You are an expert content strategist. Create detailed blog post outlines that are SEO-optimized and engaging."
},
{
"role": "user",
"content": "Create an outline for a blog post about: {{ $('Fetch Content Ideas').first().json.topic }}\n\nResearch context:\n{{ $json.organic }}"
}
]
}
}
},
{
"name": "Generate Draft",
"type": "@n8n/n8n-nodes-langchain.openAi",
"parameters": {
"model": "gpt-4",
"messages": {
"values": [
{
"role": "system",
"content": "You are an expert content writer. Write engaging, well-researched blog posts following the provided outline. Include practical examples and actionable insights."
},
{
"role": "user",
"content": "Write a complete blog post following this outline:\n\n{{ $json.text }}\n\nTarget length: 1500-2000 words\nTone: Professional but accessible\nInclude: Introduction, main sections, conclusion, and key takeaways"
}
]
},
"options": {
"maxTokens": 4000
}
}
},
{
"name": "Quality Check",
"type": "n8n-nodes-base.code",
"parameters": {
"jsCode": "const content = $input.first().json.text;\n\n// Word count check\nconst wordCount = content.split(/\\s+/).length;\n\n// Readability check (simplified Flesch-Kincaid)\nconst sentences = content.split(/[.!?]+/).length;\nconst words = wordCount;\nconst syllables = content.match(/[aeiouy]+/gi)?.length || 0;\nconst readabilityScore = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words);\n\n// Heading structure check\nconst hasH1 = content.includes('# ');\nconst hasH2 = content.includes('## ');\n\nconst qualityScore = {\n wordCount,\n readabilityScore: Math.round(readabilityScore),\n hasProperStructure: hasH1 && hasH2,\n passesQuality: wordCount >= 1000 && readabilityScore > 50\n};\n\nreturn [{ json: { content, ...qualityScore } }];"
}
},
{
"name": "Quality Gate",
"type": "n8n-nodes-base.if",
"parameters": {
"conditions": {
"boolean": [
{
"value1": "={{ $json.passesQuality }}",
"value2": true
}
]
}
}
},
{
"name": "Save Draft",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "update",
"table": "content_ideas",
"updateKey": "id",
"columns": "status, draft_content, quality_score"
}
},
{
"name": "Request Human Review",
"type": "n8n-nodes-base.slack",
"parameters": {
"channel": "#content-review",
"text": "New draft ready for review!\nTopic: {{ $('Fetch Content Ideas').first().json.topic }}\nWord count: {{ $json.wordCount }}\nQuality score: {{ $json.readabilityScore }}"
}
}
]
}
Pattern 3: AI-powered data processing
Document classification and extraction
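The processing pipeline below leans on two helpers that are left undefined: `extractText` and `callOpenAI`. A hedged sketch of `callOpenAI` against OpenAI's chat completions REST endpoint — the helper name and `options` shape come from this article, not from an official SDK:

```javascript
// Build the request body separately so it can be inspected and tested.
function buildChatRequest(prompt, options = {}) {
  const body = {
    model: options.model || 'gpt-4',
    messages: [{ role: 'user', content: prompt }],
    temperature: options.temperature ?? 0
  };
  // Opt into strict JSON output when the caller asks for it.
  if (options.responseFormat === 'json') {
    body.response_format = { type: 'json_object' };
  }
  return body;
}

// Hypothetical helper wrapping the chat completions endpoint.
async function callOpenAI(prompt, options = {}) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify(buildChatRequest(prompt, options))
  });
  if (!res.ok) throw new Error(`OpenAI API error: ${res.status}`);
  return (await res.json()).choices[0].message.content;
}
```

`extractText` depends entirely on the document format (PDF, scan, DOCX) and is best delegated to a dedicated extraction node or service.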
// Code node: Document processing pipeline
const processDocument = async () => {
const document = $input.first().json;
// Step 1: Extract text from document
const extractedText = await extractText(document.file_url);
// Step 2: Classify document type
const classificationPrompt = `Classify this document into one of these categories:
- invoice
- contract
- report
- correspondence
- other
Document text (first 1000 chars):
${extractedText.substring(0, 1000)}
Respond with just the category name.`;
const classification = await callOpenAI(classificationPrompt);
// Step 3: Extract relevant fields based on classification
const extractionPrompts = {
invoice: `Extract the following from this invoice:
- Invoice number
- Date
- Vendor name
- Total amount
- Line items (as JSON array)
Document:
${extractedText}
Respond in JSON format.`,
contract: `Extract the following from this contract:
- Parties involved
- Contract date
- Key terms
- Important dates
- Obligations
Document:
${extractedText}
Respond in JSON format.`,
report: `Summarize this report:
- Main topic
- Key findings
- Recommendations
- Data points mentioned
Document:
${extractedText}
Respond in JSON format.`
};
const extractionPrompt = extractionPrompts[classification] || extractionPrompts.report;
const extractedData = await callOpenAI(extractionPrompt, { responseFormat: 'json' });
return [{
json: {
document_id: document.id,
classification,
extracted_data: JSON.parse(extractedData),
processed_at: new Date().toISOString()
}
}];
};
return await processDocument();
Pattern 4: Intelligent alert processing
Intelligent alert triage
{
"name": "Intelligent Alert Triage",
"nodes": [
{
"name": "Alert Webhook",
"type": "n8n-nodes-base.webhook",
"parameters": {
"path": "alerts",
"httpMethod": "POST"
}
},
{
"name": "Load Alert Context",
"type": "n8n-nodes-base.code",
"parameters": {
        "jsCode": "// Gather context for the alert\nconst alert = $input.first().json;\n\n// Load recent similar alerts from workflow static data\nconst staticData = $getWorkflowStaticData('global');\nconst recentAlerts = staticData.recent_alerts || [];\n\n// Check if this is a repeat/related alert\nconst relatedAlerts = recentAlerts.filter(a =>\n  a.source === alert.source &&\n  Date.now() - new Date(a.timestamp).getTime() < 3600000 // Last hour\n);\n\nreturn [{ json: {\n  alert,\n  related_alerts: relatedAlerts,\n  is_repeated: relatedAlerts.length > 0\n}}];"
}
},
{
"name": "AI Triage",
"type": "@n8n/n8n-nodes-langchain.openAi",
"parameters": {
"model": "gpt-4",
"messages": {
"values": [
{
"role": "system",
"content": "You are an expert IT operations analyst. Analyze alerts and determine:\n1. Severity (critical, high, medium, low)\n2. Likely root cause\n3. Recommended immediate actions\n4. Whether this should wake someone up\n\nRespond in JSON format with keys: severity, root_cause, actions, requires_immediate_attention"
},
{
"role": "user",
"content": "Analyze this alert:\n\n{{ JSON.stringify($json.alert) }}\n\nRelated recent alerts: {{ JSON.stringify($json.related_alerts) }}"
}
]
},
"options": {
"responseFormat": "json_object"
}
}
},
{
"name": "Parse Triage Result",
"type": "n8n-nodes-base.code",
"parameters": {
"jsCode": "const triageResult = JSON.parse($input.first().json.text);\nconst alert = $('Load Alert Context').first().json.alert;\n\nreturn [{ json: {\n ...alert,\n triage: triageResult\n}}];"
}
},
{
"name": "Route by Severity",
"type": "n8n-nodes-base.switch",
"parameters": {
"dataType": "string",
"value1": "={{ $json.triage.severity }}",
"rules": {
"rules": [
{ "value2": "critical" },
{ "value2": "high" },
{ "value2": "medium" }
]
}
}
},
{
"name": "Page On-Call (Critical)",
"type": "n8n-nodes-base.pagerDuty",
"parameters": {
"operation": "createIncident",
"title": "CRITICAL: {{ $json.alert.title }}",
"details": "Root cause: {{ $json.triage.root_cause }}\n\nRecommended actions:\n{{ $json.triage.actions.join('\\n') }}"
}
},
{
"name": "Slack Alert (High)",
"type": "n8n-nodes-base.slack",
"parameters": {
"channel": "#alerts-high",
"text": "🔴 High Priority Alert: {{ $json.alert.title }}\n\nRoot cause: {{ $json.triage.root_cause }}\nActions: {{ $json.triage.actions.join(', ') }}"
}
},
{
"name": "Log for Review (Medium)",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "insert",
"table": "alert_queue"
}
}
]
}
Pattern 5: Multi-model orchestration
Selecting the best-fit model
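The routing code below uses two helpers the article does not define, `callModel` and `calculateCost`. The latter is easy to sketch — note that a single flat per-1k rate is an assumption here; real providers price prompt and completion tokens differently, so treat this as an estimate:

```javascript
// Hypothetical helper: estimate spend from a usage object with the
// prompt_tokens / completion_tokens fields OpenAI-style APIs return.
// Assumes one flat rate per 1k tokens (a simplification).
function calculateCost(usage, costPer1kTokens) {
  const totalTokens = (usage.prompt_tokens || 0) + (usage.completion_tokens || 0);
  return (totalTokens / 1000) * costPer1kTokens;
}
```

`callModel` would dispatch to the provider-specific API (OpenAI vs. Anthropic) based on the selected model name.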
// Code node: Intelligent model routing
const selectAndCallModel = async () => {
const task = $input.first().json;
// Define model capabilities and costs
const models = {
'gpt-4': {
capabilities: ['complex_reasoning', 'coding', 'analysis', 'creative'],
costPer1kTokens: 0.03,
maxTokens: 8192,
latency: 'high'
},
'gpt-3.5-turbo': {
capabilities: ['general', 'simple_qa', 'summarization'],
costPer1kTokens: 0.002,
maxTokens: 4096,
latency: 'low'
},
'claude-3-opus': {
capabilities: ['complex_reasoning', 'long_context', 'analysis'],
costPer1kTokens: 0.015,
maxTokens: 200000,
latency: 'medium'
},
'claude-3-haiku': {
capabilities: ['general', 'simple_qa', 'fast_response'],
costPer1kTokens: 0.00025,
maxTokens: 200000,
latency: 'very_low'
}
};
// Analyze task requirements
const taskAnalysis = {
requiresComplexReasoning: task.complexity === 'high',
inputLength: task.input.length,
needsSpeed: task.priority === 'realtime',
costSensitive: task.budget === 'low'
};
// Select best model
let selectedModel = 'gpt-3.5-turbo'; // Default
if (taskAnalysis.requiresComplexReasoning && !taskAnalysis.costSensitive) {
selectedModel = 'gpt-4';
} else if (taskAnalysis.inputLength > 100000) {
selectedModel = 'claude-3-opus';
} else if (taskAnalysis.needsSpeed) {
selectedModel = 'claude-3-haiku';
}
// Call selected model
const result = await callModel(selectedModel, task.input, task.systemPrompt);
return [{
json: {
model_used: selectedModel,
result: result,
tokens_used: result.usage,
estimated_cost: calculateCost(result.usage, models[selectedModel].costPer1kTokens)
}
}];
};
return await selectAndCallModel();
Error handling and resilience
A resilient AI-call pattern
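The retry code below uses plain exponential backoff. A common refinement — not part of the workflow code that follows — is adding jitter, so that many clients hitting the same rate limit don't all retry in lockstep:

```javascript
// "Full jitter" backoff: the delay is drawn uniformly from
// [0, baseDelay * 2^(attempt-1)], capped at maxDelay. Randomizing the
// delay spreads retries out and avoids synchronized retry storms.
function backoffDelay(attempt, baseDelay = 1000, maxDelay = 30000) {
  const exponentialCap = Math.min(baseDelay * Math.pow(2, attempt - 1), maxDelay);
  return Math.random() * exponentialCap;
}
```

To use it, replace the fixed `baseDelay * Math.pow(2, attempt - 1)` computation with `backoffDelay(attempt)`.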
// Code node: Resilient AI API calls
const callAIWithRetry = async () => {
const input = $input.first().json;
const maxRetries = 3;
const baseDelay = 1000;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
// Make the AI call
const response = await callOpenAI(input.prompt, input.options);
// Validate response
if (!response || !response.text) {
throw new Error('Empty response from AI');
}
// Return successful response
return [{
json: {
success: true,
response: response.text,
attempt,
model: input.options.model
}
}];
} catch (error) {
console.log(`Attempt ${attempt} failed: ${error.message}`);
// Check if error is retryable
const retryableErrors = [
'rate_limit_exceeded',
'timeout',
'service_unavailable',
'503',
'429'
];
const isRetryable = retryableErrors.some(e =>
error.message.toLowerCase().includes(e.toLowerCase())
);
if (!isRetryable || attempt === maxRetries) {
// Log error and use fallback
await logError(error, input);
// Try fallback model
if (input.fallbackModel) {
try {
const fallbackResponse = await callOpenAI(input.prompt, {
...input.options,
model: input.fallbackModel
});
return [{
json: {
success: true,
response: fallbackResponse.text,
usedFallback: true,
model: input.fallbackModel
}
}];
} catch (fallbackError) {
// Even fallback failed
}
}
return [{
json: {
success: false,
error: error.message,
attempts: attempt
}
}];
}
// Exponential backoff
const delay = baseDelay * Math.pow(2, attempt - 1);
await new Promise(resolve => setTimeout(resolve, delay));
}
}
};
return await callAIWithRetry();
Best practices summary
## AI integration best practices
### Design
- [ ] Define clear input/output schemas
- [ ] Implement proper error handling
- [ ] Use the right model for each task
- [ ] Design for graceful degradation
### Security
- [ ] Never log sensitive prompts/responses
- [ ] Validate and sanitize all inputs
- [ ] Use credential encryption
- [ ] Implement rate limiting
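Of these, rate limiting can live directly in a Code node. A minimal fixed-window limiter sketch — in-memory, so it only limits a single n8n process; a shared store such as Redis would be needed across multiple workers (all names here are illustrative):

```javascript
// Minimal fixed-window rate limiter. State is held in-memory, so this
// limits per process only; use a shared store for multi-instance n8n.
function createRateLimiter(maxCalls, windowMs) {
  const windows = new Map(); // key -> { start, count }
  return function allow(key, now = Date.now()) {
    const w = windows.get(key);
    // Start a fresh window if none exists or the current one expired.
    if (!w || now - w.start >= windowMs) {
      windows.set(key, { start: now, count: 1 });
      return true;
    }
    if (w.count < maxCalls) {
      w.count++;
      return true;
    }
    return false; // Over the limit for this window.
  };
}
```

Keyed by session or API-key, this can reject a request before it ever reaches the paid AI call.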
### Performance
- [ ] Cache responses where possible
- [ ] Use streaming for long responses
- [ ] Implement timeouts
- [ ] Monitor API costs
### Reliability
- [ ] Implement retry logic
- [ ] Use fallback models
- [ ] Handle API outages gracefully
- [ ] Monitor success rates
### Cost management
- [ ] Track token consumption
- [ ] Set budget alerts
- [ ] Use appropriate model tiers
- [ ] Optimize prompt length
Conclusion
AI-powered n8n workflows open up powerful automation possibilities, but they require careful design for security, reliability, and cost management. The patterns in this guide provide a foundation for building robust AI integrations.
At DeviDevs, we help organizations build sophisticated AI-powered automation workflows. Contact us to discuss your automation needs.