Generic chatbots give generic answers. A chatbot grounded in your own data and connected to your systems is actually useful.
This tutorial shows you how to build one with n8n and Claude. No ML expertise required.
What We’re Building
[User Message] → [n8n Webhook] → [Context Retrieval] → [Claude API]
↓
[Chat Response] ← [Format Response] ← [Claude Response]
Features:
- Answers questions about your business/product
- Pulls context from your knowledge base
- Remembers conversation history
- Escalates to humans when needed
- Works on any website
Prerequisites
- n8n (self-hosted or cloud)
- Anthropic API key
- Basic understanding of webhooks
- A knowledge base (Notion, Google Docs, or even a text file)
Step 1: Create the Webhook Entry Point
Every chat message needs somewhere to go.
n8n Webhook Configuration
- Add a Webhook node
- Set HTTP Method: POST
- Set Path: /chat
- Set Response Mode: Last Node
Your webhook URL will look like:
https://your-n8n-instance.com/webhook/chat
Expected Input Format
{
"message": "What are your pricing plans?",
"session_id": "user_abc123",
"metadata": {
"page_url": "https://yoursite.com/pricing",
"user_agent": "Mozilla/5.0..."
}
}
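Once the workflow is responding, you can sanity-check the endpoint from a browser console (or any environment with fetch and top-level await); a quick sketch using the payload above:
// Quick test: POST a sample message to the webhook and log the reply
const res = await fetch('https://your-n8n-instance.com/webhook/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'What are your pricing plans?',
    session_id: 'user_abc123',
    metadata: { page_url: 'https://yoursite.com/pricing' }
  })
});
console.log(await res.json());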
Step 2: Conversation Memory
Chatbots without memory are frustrating. Let’s fix that.
Simple Memory with Redis
Add a Redis node to store/retrieve conversation history.
Get History:
Operation: Get
Key: chat:{{$json.session_id}}
Store History (after response):
Operation: Set
Key: chat:{{$json.session_id}}
Value: {{$json.updated_history}}
TTL: 3600 (1 hour)
Memory Structure
{
"messages": [
{"role": "user", "content": "Hi, what do you do?"},
{"role": "assistant", "content": "We help businesses automate..."},
{"role": "user", "content": "What are your pricing plans?"}
],
"context": {
"identified_topic": "pricing",
"user_intent": "purchase_consideration"
}
}
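The Redis Set step above stores {{$json.updated_history}}, which you need to assemble in a Code node before writing back. A minimal sketch, assuming node names like 'Redis Get' and 'Webhook' and that Claude's reply is reachable from the incoming item:
// Code node: append the latest exchange to the stored history
const stored = $('Redis Get').item.json.value;                 // stored JSON string, or empty on first message (property name depends on your Redis node settings)
const history = stored ? JSON.parse(stored) : { messages: [], context: {} };
history.messages.push(
  { role: 'user', content: $('Webhook').item.json.message },
  { role: 'assistant', content: $json.message }                // Claude's reply (adjust to where it lives in your workflow, e.g. $json.response.message)
);
// Keep the history short so the prompt stays small
history.messages = history.messages.slice(-20);
return { updated_history: JSON.stringify(history) };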
Alternative: Database Memory
If you prefer PostgreSQL or MongoDB:
CREATE TABLE chat_sessions (
session_id VARCHAR(255) PRIMARY KEY,
messages JSONB,
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
);
Step 3: Knowledge Base Retrieval
This is what makes your chatbot actually useful—connecting it to your data.
Option A: Simple Keyword Matching
For smaller knowledge bases (< 100 documents):
const knowledgeBase = [
{
keywords: ['pricing', 'cost', 'price', 'plans'],
content: `Our pricing plans:
- Starter: $29/month - Up to 1,000 automations
- Pro: $99/month - Unlimited automations
- Enterprise: Custom pricing - Contact us`
},
{
keywords: ['refund', 'cancel', 'money back'],
content: `We offer a 30-day money-back guarantee.
To cancel, email support@example.com.`
}
];
const userMessage = $json.message.toLowerCase();
const relevantDocs = knowledgeBase.filter(doc =>
doc.keywords.some(keyword => userMessage.includes(keyword))
);
return { relevantContext: relevantDocs.map(d => d.content).join('\n\n') };
Option B: Notion Integration
Pull live data from Notion:
- Add Notion node (Database Query)
- Filter by relevant tags or search
- Extract page content
Database ID: your_knowledge_base_id
Filter:
Property "Tags" contains {{$json.detected_topic}}
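The filter above relies on {{$json.detected_topic}}, which you can produce earlier in the workflow (step 3 in the overview later on). A minimal keyword-based sketch; replace the lists with topics from your own knowledge base:
// Code node: naive topic detection for the Notion filter
const topics = {
  pricing: ['pricing', 'cost', 'price', 'plan'],
  refunds: ['refund', 'cancel', 'money back'],
  integrations: ['integrate', 'api', 'connect']
};
const text = $json.message.toLowerCase();
let detectedTopic = 'general';
for (const [topic, words] of Object.entries(topics)) {
  if (words.some(w => text.includes(w))) {
    detectedTopic = topic;
    break;
  }
}
return { detected_topic: detectedTopic };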
Option C: Vector Search (Advanced)
For large knowledge bases, use embeddings:
- Pre-index your documents with embeddings
- Embed the user query
- Find similar documents
// Using Pinecone or similar; getEmbedding is a placeholder for your embedding provider
const queryEmbedding = await getEmbedding($json.message);
const results = await pinecone.query({
vector: queryEmbedding,
topK: 3,
includeMetadata: true
});
return {
relevantContext: results.matches.map(m => m.metadata.content).join('\n\n')
};
Step 4: The Claude Integration
Now we send everything to Claude.
System Prompt
You are a helpful customer support assistant for [Your Company].
Your role:
- Answer questions about our products and services
- Help users solve problems
- Guide users to the right resources
- Escalate to humans when you can't help
Your knowledge:
{{relevant_context}}
Guidelines:
1. Be concise but complete
2. If you don't know something, say so
3. For complex issues, offer to connect them with a human
4. Never make up information not in your knowledge base
5. Be friendly but professional
Current conversation context:
- User is on page: {{page_url}}
- Conversation topic: {{identified_topic}}
Anthropic Node Configuration
Model: claude-3-5-sonnet-20241022
System: {{system_prompt}}
Messages Array:
[
...{{conversation_history}},
{"role": "user", "content": "{{$json.message}}"}
]
(The Messages API takes the system prompt as a separate parameter; only user and assistant turns belong in the messages array.)
Parameters:
Max Tokens: 500
Temperature: 0.7
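If you'd rather call the API directly instead of using the Anthropic node, the request body looks like this. A sketch shown with fetch; in n8n you'd typically use an HTTP Request node with the same body, and systemPrompt / conversationHistory are assumed to come from the previous Code node:
// Equivalent raw call to the Anthropic Messages API
const res = await fetch('https://api.anthropic.com/v1/messages', {
  method: 'POST',
  headers: {
    'x-api-key': process.env.ANTHROPIC_API_KEY,   // from your credentials, never hard-coded
    'anthropic-version': '2023-06-01',
    'content-type': 'application/json'
  },
  body: JSON.stringify({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 500,
    temperature: 0.7,
    system: systemPrompt,                          // built in the previous Code node
    messages: [...conversationHistory, { role: 'user', content: $json.message }]
  })
});
const data = await res.json();
return { content: data.content[0].text };          // plain text reply for the next node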
Handling Different Response Types
Add a Code node to parse Claude’s response and determine actions:
// Claude's reply text (use $json.content[0].text if your node returns the raw API shape)
const response = $json.content;
// Check for escalation signals
const escalationPhrases = [
"speak to a human",
"contact support",
"I'm not able to help with that",
"complex situation"
];
const needsEscalation = escalationPhrases.some(phrase =>
response.toLowerCase().includes(phrase)
);
// Check for action triggers
const actionPatterns = {
schedule_demo: /schedule.*demo|book.*call/i,
view_pricing: /pricing|plans|cost/i,
contact_sales: /sales.*team|enterprise/i
};
let suggestedAction = null;
for (const [action, pattern] of Object.entries(actionPatterns)) {
if (pattern.test(response)) {
suggestedAction = action;
break;
}
}
return {
message: response,
needs_escalation: needsEscalation,
suggested_action: suggestedAction,
session_id: $('Webhook').item.json.session_id
};
Step 5: Response Formatting
Structure the response for your frontend.
Response Schema
{
"success": true,
"response": {
"message": "Our Starter plan is $29/month and includes...",
"type": "text",
"suggested_actions": [
{
"label": "View Pricing",
"action": "navigate",
"url": "/pricing"
},
{
"label": "Talk to Sales",
"action": "escalate",
"type": "sales"
}
],
"quick_replies": [
"What's included in Pro?",
"Do you offer discounts?",
"Can I try it free?"
]
},
"session_id": "user_abc123"
}
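A Code node (step 10 in the workflow overview) can assemble this payload from the earlier nodes. A sketch, assuming node names 'Parse Response' and 'Quick Replies' for the parsing step above and the quick-reply generator described next:
// Code node: build the final payload returned to the webhook caller
const parsed = $('Parse Response').item.json;            // message, needs_escalation, suggested_action, session_id
const quickReplies = $('Quick Replies').item.json.replies || [];
const suggestedActions = [];
if (parsed.suggested_action === 'view_pricing') {
  suggestedActions.push({ label: 'View Pricing', action: 'navigate', url: '/pricing' });
}
if (parsed.needs_escalation) {
  suggestedActions.push({ label: 'Talk to Sales', action: 'escalate', type: 'sales' });
}
return {
  success: true,
  response: {
    message: parsed.message,
    type: 'text',
    suggested_actions: suggestedActions,
    quick_replies: quickReplies
  },
  session_id: parsed.session_id
};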
Generating Quick Replies with AI
Add another Claude call to suggest follow-up questions:
Based on this conversation, suggest 3 short follow-up questions
the user might ask next. Return as JSON array.
Last assistant message: {{response}}
User's original question: {{user_message}}
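Claude won't always return a clean JSON array, so parse defensively. A small sketch for the Code node that follows this call:
// Code node: extract the suggested follow-up questions, with a fallback
const raw = $json.content;                         // Claude's reply text
let replies = [];
try {
  const match = raw.match(/\[[\s\S]*\]/);          // first JSON array in the text
  replies = match ? JSON.parse(match[0]) : [];
} catch (err) {
  replies = [];
}
if (replies.length === 0) {
  replies = ["What's included in Pro?", 'Can I try it free?'];
}
return { replies: replies.slice(0, 3) };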
Step 6: Human Escalation
Sometimes AI can’t help. Make the handoff smooth.
Escalation Triggers
const shouldEscalate =
// AI explicitly says it can't help
$json.needs_escalation ||
// User explicitly asks
/speak.*human|real person|agent/i.test($json.user_message) ||
// Sentiment is frustrated
$json.sentiment_score < -0.5 ||
// Multiple failed attempts
$json.failed_attempts > 2;
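The sentiment_score and failed_attempts fields above aren't produced by any earlier step. A rough placeholder you could drop into an earlier Code node; for better accuracy, ask Claude to rate sentiment as part of the main call:
// Code node: crude stand-ins for sentiment_score and failed_attempts
const text = $json.message.toLowerCase();
const negativeWords = ['useless', 'terrible', 'frustrated', 'not working', 'waste of'];
const hits = negativeWords.filter(w => text.includes(w)).length;
return {
  sentiment_score: hits === 0 ? 0 : -Math.min(1, hits * 0.4),
  failed_attempts: $json.failed_attempts || 0      // carried in the stored session context, if you track it there
};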
Escalation Workflow
[Escalation Triggered] → [Create Support Ticket]
↓
[Notify Support Team (Slack)]
↓
[Send User Confirmation]
Slack Alert Format
🔔 *Chat Escalation*
*Session:* {{session_id}}
*Page:* {{page_url}}
*Conversation:*
{{conversation_summary}}
*Reason:* {{escalation_reason}}
[View Full Conversation] [Assign to Me]
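The summary and reason fields can be built in a Code node right before the Slack node. A sketch, assuming the same 'Redis Get' and 'Webhook' node names as earlier:
// Code node: assemble the fields for the Slack alert
const stored = $('Redis Get').item.json.value;             // stored JSON string (property name depends on your Redis node settings)
const history = stored ? JSON.parse(stored) : { messages: [] };
const summary = history.messages
  .slice(-6)
  .map(m => `${m.role === 'user' ? 'User' : 'Bot'}: ${m.content}`)
  .join('\n');
return {
  session_id: $json.session_id,
  page_url: $('Webhook').item.json.metadata.page_url,
  conversation_summary: summary,
  escalation_reason: $json.needs_escalation ? 'AI could not resolve' : 'User asked for a human'
};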
Step 7: Frontend Integration
Now let’s add this to a website.
Simple Chat Widget
<!-- Chat Widget -->
<div id="chat-widget" class="chat-widget">
<div id="chat-messages" class="chat-messages"></div>
<div class="chat-input">
<input type="text" id="chat-input" placeholder="Ask a question...">
<button onclick="sendMessage()">Send</button>
</div>
</div>
<script>
const CHAT_API = 'https://your-n8n-instance.com/webhook/chat';
const SESSION_ID = 'user_' + Math.random().toString(36).slice(2, 11);
async function sendMessage() {
const input = document.getElementById('chat-input');
const message = input.value.trim();
if (!message) return;
// Show user message
addMessage(message, 'user');
input.value = '';
// Send to n8n
try {
const response = await fetch(CHAT_API, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
message,
session_id: SESSION_ID,
metadata: {
page_url: window.location.href
}
})
});
const data = await response.json();
addMessage(data.response.message, 'assistant');
// Show quick replies if available
if (data.response.quick_replies) {
showQuickReplies(data.response.quick_replies);
}
} catch (error) {
addMessage('Sorry, something went wrong. Please try again.', 'error');
}
}
function addMessage(text, type) {
const messages = document.getElementById('chat-messages');
const div = document.createElement('div');
div.className = `message ${type}`;
div.textContent = text;
messages.appendChild(div);
messages.scrollTop = messages.scrollHeight;
}
function showQuickReplies(replies) {
const messages = document.getElementById('chat-messages');
const container = document.createElement('div');
container.className = 'quick-replies';
replies.forEach(reply => {
const button = document.createElement('button');
button.textContent = reply;
button.onclick = () => {
document.getElementById('chat-input').value = reply;
sendMessage();
};
container.appendChild(button);
});
messages.appendChild(container);
}
</script>
<style>
.chat-widget {
position: fixed;
bottom: 20px;
right: 20px;
width: 350px;
height: 500px;
border: 1px solid #e0e0e0;
border-radius: 12px;
display: flex;
flex-direction: column;
background: white;
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
}
.chat-messages {
flex: 1;
overflow-y: auto;
padding: 16px;
}
.message {
margin-bottom: 12px;
padding: 10px 14px;
border-radius: 18px;
max-width: 80%;
}
.message.user {
background: #007bff;
color: white;
margin-left: auto;
}
.message.assistant {
background: #f0f0f0;
color: #333;
}
.chat-input {
display: flex;
padding: 12px;
border-top: 1px solid #e0e0e0;
}
.chat-input input {
flex: 1;
padding: 10px;
border: 1px solid #ddd;
border-radius: 20px;
margin-right: 8px;
}
.chat-input button {
padding: 10px 20px;
background: #007bff;
color: white;
border: none;
border-radius: 20px;
cursor: pointer;
}
.quick-replies {
display: flex;
flex-wrap: wrap;
gap: 8px;
margin-top: 8px;
}
.quick-replies button {
padding: 6px 12px;
border: 1px solid #007bff;
background: white;
color: #007bff;
border-radius: 16px;
cursor: pointer;
font-size: 13px;
}
</style>
Step 8: Analytics & Improvement
Track conversations to improve over time.
Metrics to Track
const analyticsEvent = {
event: 'chat_message',
session_id: $json.session_id,
timestamp: new Date().toISOString(),
metrics: {
message_length: $json.message.length,
response_time_ms: Date.now() - $json.start_time,
was_escalated: $json.needs_escalation,
topic: $json.detected_topic,
sentiment: $json.sentiment_score
}
};
// Send to your analytics (Mixpanel, Amplitude, etc.)
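Sending the event can be a single HTTP Request node, or a fetch call if you keep it in the same Code node; the endpoint below is a placeholder for your collector:
// Post the event to your analytics collector (placeholder endpoint)
await fetch('https://analytics.example.com/events', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(analyticsEvent)
});
return analyticsEvent;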
Weekly Report
📊 *Chatbot Weekly Report*
*Volume:*
- Total conversations: 342
- Total messages: 1,247
- Avg messages/conversation: 3.6
*Performance:*
- Resolved without escalation: 89%
- Avg response time: 1.2s
- User satisfaction: 4.2/5
*Top Topics:*
1. Pricing (28%)
2. Features (22%)
3. Integration (18%)
4. Billing (12%)
*Improvement Opportunities:*
- 15 questions about "API limits" had low confidence
- "Refund policy" needs clearer documentation
Complete n8n Workflow Overview
1. Webhook (POST /chat)
↓
2. Redis: Get Session History
↓
3. Code: Detect Topic/Intent
↓
4. Notion: Fetch Relevant Knowledge
↓
5. Code: Build System Prompt
↓
6. Anthropic: Generate Response
↓
7. Code: Parse Response + Actions
↓
8. Branch: Needs Escalation?
├── Yes → Create Ticket + Slack Alert
└── No → Continue
↓
9. Anthropic: Generate Quick Replies
↓
10. Code: Format Final Response
↓
11. Redis: Update Session History
↓
12. Respond to Webhook
Advanced Features
Multi-language Support
Add language detection and response translation:
// Detect language (detectLanguage is a placeholder: use a library such as franc,
// or ask Claude to identify the language in a separate call)
const detectedLanguage = await detectLanguage($json.message);
// Add to system prompt
if (detectedLanguage !== 'en') {
systemPrompt += `\n\nRespond in ${detectedLanguage}.`;
}
Product Recommendations
If the chatbot detects purchase intent:
Based on the user's questions about {{topic}}, recommend
relevant products from this catalog:
{{product_catalog}}
Return as JSON with product_id, name, and reason.
Proactive Engagement
Trigger chatbot based on user behavior:
// Frontend: open the chat with a proactive greeting after 30 seconds on the pricing page
let chatOpened = false;
if (window.location.pathname === '/pricing') {
  setTimeout(() => {
    if (chatOpened) return;
    chatOpened = true;
    openChat(); // your function that reveals the widget
    addMessage("Hi! I see you're checking out our pricing. Any questions I can help with?", 'assistant');
  }, 30000);
}
Troubleshooting
“Responses are too slow”
- Reduce max_tokens
- Use claude-3-haiku for simple queries
- Cache common responses
- Implement streaming responses
“Chatbot makes things up”
- Be more explicit in the system prompt: “Only use information from the provided context”
- Lower temperature to 0.3
- Add validation step before responding
“Conversations feel robotic”
- Add personality to the system prompt
- Use temperature 0.7-0.8
- Include example conversations in the prompt
“Users keep asking the same thing”
- Improve your knowledge base for that topic
- Add it as a quick reply
- Consider a dedicated FAQ section
What’s Next
Once your basic chatbot works:
- Add more data sources: Connect CRM, order history, support tickets
- Implement feedback: Let users rate responses
- A/B test prompts: Find what works best
- Add voice: Integrate with Twilio for phone support
Need help building a custom chatbot for your business? Book a free consultation and we’ll design a solution for your specific needs.