Using WhatsApp splitting and routing

Describe the problem/error/question

I created a workflow where I want to split and route WhatsApp messages to different paths.

The problem is that it only goes to the text path, even if I send an audio or video message through WhatsApp. What I want to achieve: when I send a message via WhatsApp and it is an audio or video message, the splitter should send it through the correct path.

What is the error message (if any)?

I don't get any error code.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.63.4
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via: n8n cloud
  • Operating system: Windows

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:
  • n8n version: 1.63.4
  • Database (default: SQLite): I am using the default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): I don't understand this question, but everything is default.
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: my OS is Windows 11

Hi @Derrel,

Welcome to the community :tada:

Tip for sharing your workflow in the forum

Pasting your n8n workflow


Make sure to copy your n8n workflow and paste it into the code block, i.e. between the pair of triple backticks. You can also click </> (preformatted text) in the editor and paste your workflow there.

```
<your workflow>
```

Make sure that you’ve removed any sensitive information from your workflow and include dummy data or pinned data as much as you can!


Could you maybe share parts of your data by pinning it in the workflow as well? Or at least share the schema of the JSON payload?

OK, hope this helps!

{
“meta”: {
“templateId”: “2466”,
“instanceId”: “66f412173b19ed7b2ff7424f60f5efb953e6dc2d8db864775210ddbf57bfa355”
},
“nodes”: [
{
“parameters”: {
“updates”: [
“messages”
]
},
“id”: “38ffe41a-ecdf-4bb4-bd55-51998abab0f5”,
“name”: “WhatsApp Trigger”,
“type”: “n8n-nodes-base.whatsAppTrigger”,
“position”: [
980,
580
],
“webhookId”: “0b1b3a9b-2f6a-4f5a-8385-6365d96f4802”,
“typeVersion”: 1,
“credentials”: {
“whatsAppTriggerApi”: {
“id”: “SrHOvJCgXnCpSOlR”,
“name”: “WhatsApp OAuth account”
}
}
},
{
“parameters”: {
“resource”: “media”,
“operation”: “mediaUrlGet”,
“mediaGetId”: “={{ $json[‘messages[0].audio’].id }}”,
“requestOptions”: {}
},
“id”: “a35ac268-eff0-46cd-ac4e-c9b047a3f893”,
“name”: “Get Audio URL”,
“type”: “n8n-nodes-base.whatsApp”,
“position”: [
1980,
40
],
“typeVersion”: 1,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“resource”: “media”,
“operation”: “mediaUrlGet”,
“mediaGetId”: “={{ $json.video.id }}”,
“requestOptions”: {}
},
“id”: “a3be543c-949c-4443-bf82-e0d00419ae23”,
“name”: “Get Video URL”,
“type”: “n8n-nodes-base.whatsApp”,
“position”: [
1980,
400
],
“typeVersion”: 1,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“resource”: “media”,
“operation”: “mediaUrlGet”,
“mediaGetId”: “={{ $json.image.id }}”,
“requestOptions”: {}
},
“id”: “dd3cd0e7-0d1e-40cf-8120-aba0d1646d6d”,
“name”: “Get Image URL”,
“type”: “n8n-nodes-base.whatsApp”,
“position”: [
1980,
740
],
“typeVersion”: 1,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“model”: “gpt-4o-2024-08-06”,
“options”: {}
},
“id”: “b4ea61ba-4a58-461e-9191-04f5721e4e07”,
“name”: “OpenAI Chat Model”,
“type”: “@n8n/n8n-nodes-langchain.lmChatOpenAi”,
“position”: [
3000,
540
],
“typeVersion”: 1,
“credentials”: {
“openAiApi”: {
“id”: “4k7PsmAWAIqJ03Vt”,
“name”: “OpenAi account”
}
}
},
{
“parameters”: {
“mode”: “runOnceForEachItem”,
“language”: “python”,
“pythonCode”: “import cv2\nimport numpy as np\nimport base64\n\ndef extract_evenly_distributed_frames_from_base64(base64_string, max_frames=10):\n # Decode the Base64 string into bytes\n video_bytes = base64.b64decode(base64_string)\n \n # Write the bytes to a temporary file\n video_path = '/tmp/temp_video.mp4'\n with open(video_path, 'wb') as video_file:\n video_file.write(video_bytes)\n \n # Open the video file using OpenCV\n video_capture = cv2.VideoCapture(video_path)\n \n # Get the total number of frames in the video\n total_frames = int(video_capture.get(cv2.CAP_PROP_FRAME_COUNT))\n \n # Calculate the step size to take 'max_frames' evenly distributed frames\n step_size = max(1, total_frames // (max_frames - 1))\n \n # List to store selected frames as base64\n selected_frames_base64 = []\n \n for i in range(0, total_frames, step_size):\n # Set the current frame position\n video_capture.set(cv2.CAP_PROP_POS_FRAMES, i)\n \n # Read the frame\n ret, frame = video_capture.read()\n if ret:\n # Convert frame (NumPy array) to a Base64 string\n frame_base64 = convert_frame_to_base64(frame)\n selected_frames_base64.append(frame_base64)\n if len(selected_frames_base64) >= max_frames:\n break\n \n # Release the video capture object\n video_capture.release()\n\n return selected_frames_base64\n\ndef convert_frame_to_base64(frame):\n # Convert the frame (NumPy array) to JPEG format\n ret, buffer = cv2.imencode('.jpg', frame)\n if not ret:\n return None\n\n # Encode JPEG image to Base64\n frame_base64 = base64.b64encode(buffer).decode('utf-8')\n return frame_base64\n\nbase64_video = _input.item.binary.data.data\nframes_base64 = extract_evenly_distributed_frames_from_base64(base64_video, max_frames=10)\n\nreturn { "output": frames_base64 }”
},
“id”: “2d3d177a-12ab-4c98-9867-12fca212eaf0”,
“name”: “Capture Frames”,
“type”: “n8n-nodes-base.code”,
“position”: [
2300,
400
],
“typeVersion”: 2
},
{
“parameters”: {
“fieldToSplitOut”: “output”,
“options”: {}
},
“id”: “401c5be9-55c1-4be0-92f9-8745fe7cafcc”,
“name”: “Split Out Frames”,
“type”: “n8n-nodes-base.splitOut”,
“position”: [
2460,
400
],
“typeVersion”: 1
},
{
“parameters”: {
“url”: “={{ $json.url }}”,
“authentication”: “predefinedCredentialType”,
“nodeCredentialType”: “whatsAppApi”,
“options”: {}
},
“id”: “a3505c93-2719-4a11-8813-39844fe0dd1a”,
“name”: “Download Video”,
“type”: “n8n-nodes-base.httpRequest”,
“position”: [
2140,
400
],
“typeVersion”: 4.2,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“operation”: “toBinary”,
“sourceProperty”: “output”,
“options”: {}
},
“id”: “ba151eed-6658-4adf-b8fa-a1290a41125e”,
“name”: “Convert to Binary”,
“type”: “n8n-nodes-base.convertToFile”,
“position”: [
2620,
400
],
“typeVersion”: 1.1
},
{
“parameters”: {
“aggregate”: “aggregateAllItemData”,
“options”: {
“includeBinaries”: true
}
},
“id”: “40905337-8f62-4ed2-acab-345096a75295”,
“name”: “Aggregate Binary Frames”,
“type”: “n8n-nodes-base.aggregate”,
“position”: [
2780,
400
],
“typeVersion”: 1
},
{
“parameters”: {
“url”: “={{ $json.url }}”,
“authentication”: “predefinedCredentialType”,
“nodeCredentialType”: “whatsAppApi”,
“options”: {}
},
“id”: “b22e3a7d-5fa1-4b8d-be08-b59f5bb5c417”,
“name”: “Download Audio”,
“type”: “n8n-nodes-base.httpRequest”,
“position”: [
2140,
40
],
“typeVersion”: 4.2,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“url”: “={{ $json.url }}”,
“authentication”: “predefinedCredentialType”,
“nodeCredentialType”: “whatsAppApi”,
“options”: {}
},
“id”: “dcadbd30-598e-443b-a3a7-10d7f9210f49”,
“name”: “Download Image”,
“type”: “n8n-nodes-base.httpRequest”,
“position”: [
2140,
740
],
“typeVersion”: 4.2,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“model”: “gpt-4o-2024-08-06”,
“options”: {}
},
“id”: “c14d1e32-2bdf-4aae-a658-beefad9cf016”,
“name”: “OpenAI Chat Model1”,
“type”: “@n8n/n8n-nodes-langchain.lmChatOpenAi”,
“position”: [
3000,
880
],
“typeVersion”: 1,
“credentials”: {
“openAiApi”: {
“id”: “4k7PsmAWAIqJ03Vt”,
“name”: “OpenAi account”
}
}
},
{
“parameters”: {
“model”: “gpt-4o-mini”,
“options”: {}
},
“id”: “c94b021a-72fb-483d-9190-42032bcfe583”,
“name”: “OpenAI Chat Model2”,
“type”: “@n8n/n8n-nodes-langchain.lmChatOpenAi”,
“position”: [
3000,
1180
],
“typeVersion”: 1,
“credentials”: {
“openAiApi”: {
“id”: “4k7PsmAWAIqJ03Vt”,
“name”: “OpenAi account”
}
}
},
{
“parameters”: {
“model”: “gpt-4o-2024-08-06”,
“options”: {}
},
“id”: “f6a8a432-edc3-447d-88c8-405eff62e1c8”,
“name”: “OpenAI Chat Model3”,
“type”: “@n8n/n8n-nodes-langchain.lmChatOpenAi”,
“position”: [
3700,
760
],
“typeVersion”: 1,
“credentials”: {
“openAiApi”: {
“id”: “4k7PsmAWAIqJ03Vt”,
“name”: “OpenAi account”
}
}
},
{
“parameters”: {
“sessionIdType”: “customKey”,
“sessionKey”: “=whatsapp-tutorial-{{ $json.from }}”
},
“id”: “d38b6f73-272e-4833-85fc-46ce0db91f6a”,
“name”: “Window Buffer Memory”,
“type”: “@n8n/n8n-nodes-langchain.memoryBufferWindow”,
“position”: [
3820,
760
],
“typeVersion”: 1.2
},
{
“parameters”: {
“assignments”: {
“assignments”: [
{
“id”: “d990cbd6-a408-4ec4-a889-41be698918d9”,
“name”: “message_type”,
“type”: “string”,
“value”: “={{ $(‘Split Out Message Parts’).item.json.type }}”
},
{
“id”: “23b785c3-f38e-4706-80b7-51f333bba3bd”,
“name”: “message_text”,
“type”: “string”,
“value”: “={{ $json.text }}”
},
{
“id”: “6e83f9a7-cf75-4182-b2d2-3151e8af76b9”,
“name”: “from”,
“type”: “string”,
“value”: “={{ $(‘WhatsApp Trigger’).item.json.messages[0].from }}”
},
{
“id”: “da4b602a-28ca-4b0d-a747-c3d3698c3731”,
“name”: “message_caption”,
“type”: “string”,
“value”: “={{ $(‘Redirect Message Types’).item.json.video && $(‘Redirect Message Types’).item.json.video.caption || ‘’ }}\n{{ $(‘Redirect Message Types’).item.json.image && $(‘Redirect Message Types’).item.json.image.caption || ‘’}}\n{{ $(‘Redirect Message Types’).item.json.audio && $(‘Redirect Message Types’).item.json.audio.caption || ‘’}}”
}
]
},
“options”: {}
},
“id”: “3459f96b-c0de-4514-9d53-53a9b40d534e”,
“name”: “Get User’s Message”,
“type”: “n8n-nodes-base.set”,
“position”: [
3400,
580
],
“typeVersion”: 3.4
},
{
“parameters”: {
“fieldToSplitOut”: “messages”,
“options”: {
“includeBinary”: true
}
},
“id”: “7a4c9905-37f0-4cfe-a928-91c7e38914b9”,
“name”: “Split Out Message Parts”,
“type”: “n8n-nodes-base.splitOut”,
“position”: [
1200,
580
],
“typeVersion”: 1
},
{
“parameters”: {},
“id”: “f2ecc9a9-bdd9-475d-be0c-43594d0cb613”,
“name”: “Wikipedia”,
“type”: “@n8n/n8n-nodes-langchain.toolWikipedia”,
“position”: [
3940,
760
],
“typeVersion”: 1
},
{
“parameters”: {
“content”: “### 2. Transcribe Audio Messages :speech_balloon:\nFor audio messages or voice notes, we can use GPT4o to transcribe the message for our AI Agent.”,
“height”: 97.23360184119679,
“width”: 356.65822784810103,
“color”: 7
},
“id”: “0d3d721e-fefc-4b50-abe1-0dd504c962ff”,
“name”: “Sticky Note1”,
“type”: “n8n-nodes-base.stickyNote”,
“position”: [
1760,
-80
],
“typeVersion”: 1
},
{
“parameters”: {
“content”: “### 3. Describe Video Messages (no sound) :clapper:\nFor video messages, one approach to understand the contents is to split the file into frames and have our AI’s vision mode describe what is happening across the frames. We won’t be able to capture the audio using this method, however. To capture audio, we’d first need to extract the audio track, which is outside the scope of this demonstration.”,
“height”: 155.56271576524733,
“width”: 560.8101265822784,
“color”: 7
},
“id”: “59de051e-f0d4-4c07-9680-03923ab81f57”,
“name”: “Sticky Note2”,
“type”: “n8n-nodes-base.stickyNote”,
“position”: [
2360,
200
],
“typeVersion”: 1
},
{
“parameters”: {
“content”: “### 4. Analyse Image Messages :national_park:\nFor image messages, we can use GPT4o to explain what is going on in the message for our AI Agent.”,
“height”: 97.23360184119679,
“width”: 356.65822784810103,
“color”: 7
},
“id”: “e2ca780f-01c0-4a5f-9f0a-e15575d0b803”,
“name”: “Sticky Note3”,
“type”: “n8n-nodes-base.stickyNote”,
“position”: [
2480,
660
],
“typeVersion”: 1
},
{
“parameters”: {
“content”: “### 5. Text summarizer :blue_book:\nFor text messages, we don’t need to do much transformation but it’s nice to summarize for easier understanding.”,
“height”: 97.23360184119679,
“width”: 428.24395857307246,
“color”: 7
},
“id”: “6eea3c0f-4501-4355-b3b7-b752c93d5c48”,
“name”: “Sticky Note4”,
“type”: “n8n-nodes-base.stickyNote”,
“position”: [
2200,
1120
],
“typeVersion”: 1
},
{
“parameters”: {
“amount”: 0
},
“id”: “925a3871-9cdb-49f9-a2b9-890617d09965”,
“name”: “Get Text”,
“type”: “n8n-nodes-base.wait”,
“position”: [
1980,
1040
],
“webhookId”: “99b49c83-d956-46d2-b8d3-d65622121ad9”,
“typeVersion”: 1.1
},
{
“parameters”: {
“content”: “## 6. Generate Response with AI Agent\nRead more about the AI Agent node\n\nNow that we’re able to handle all message types from WhatsApp, we could do pretty much anything we want by passing them to our AI agent. Examples could include handling customer support, helping to book appointments or verifying documents.\n\nIn this demonstration, we’ll just create a simple AI Agent which responds to our WhatsApp user’s message and returns a simple response.”,
“height”: 655.4660529344073,
“width”: 639.3555811277332,
“color”: 7
},
“id”: “9225a6b9-322a-4a33-86af-6586fcf246b9”,
“name”: “Sticky Note5”,
“type”: “n8n-nodes-base.stickyNote”,
“position”: [
3320,
340
],
“typeVersion”: 1
},
{
“parameters”: {
“content”: “## 7. Respond to WhatsApp User\nRead more about the WhatsApp node\n\nTo close out this demonstration, we’ll simply send a text message back to the user. Note that this WhatsApp node also allows you to send images, audio, videos, documents as well as locations!”,
“height”: 520.778826237054,
“width”: 478.15512082853854,
“color”: 7
},
“id”: “5a863e5d-e7fb-4e89-851b-e0936f5937e7”,
“name”: “Sticky Note6”,
“type”: “n8n-nodes-base.stickyNote”,
“position”: [
3980,
340
],
“typeVersion”: 1
},
{
“parameters”: {
“operation”: “send”,
“phoneNumberId”: “353975924475037”,
“recipientPhoneNumber”: “={{ $(‘WhatsApp Trigger’).item.json.messages[0].from }}”,
“textBody”: “={{ $json.output }}”,
“additionalFields”: {},
“requestOptions”: {}
},
“id”: “89df6f6c-2d91-4c14-a51a-4be29b1018ec”,
“name”: “Respond to User”,
“type”: “n8n-nodes-base.whatsApp”,
“position”: [
4380,
580
],
“typeVersion”: 1,
“credentials”: {
“whatsAppApi”: {
“id”: “DGZc7otlGOCO5I6B”,
“name”: “WhatsApp account”
}
}
},
{
“parameters”: {
“promptType”: “define”,
“text”: “Here is an image sent by the user. Describe the image and transcribe any text visible in the image.”,
“messages”: {
“messageValues”: [
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”
}
]
}
},
“id”: “2f0fd658-a138-4f50-95a7-7ddc4eb90fab”,
“name”: “Image Explainer”,
“type”: “@n8n/n8n-nodes-langchain.chainLlm”,
“position”: [
3020,
740
],
“typeVersion”: 1.4
},
{
“parameters”: {
“promptType”: “define”,
“text”: “=The user sent the following message\nmessage type: {{ $json.message_type }}\nmessage text or description:\n{{ $json.message_text }}\n{{ $json.message_caption ? `message caption: ${$json.message_caption.trim()}` : ‘’ }}”,
“options”: {
“systemMessage”: “You are a general knowledge assistant made available to the public via WhatsApp. Help answer the user’s query succinctly and factually.”
}
},
“id”: “85eaad3a-c4d1-4ae7-a37b-0b72be39409d”,
“name”: “AI Agent”,
“type”: “@n8n/n8n-nodes-langchain.agent”,
“position”: [
3720,
580
],
“typeVersion”: 1.6
},
{
“parameters”: {
“promptType”: “define”,
“text”: “These are frames from a video sent by the user. Explain what is going on from start to end.”,
“messages”: {
“messageValues”: [
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_1”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_2”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_3”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_4”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_5”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_6”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_7”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_8”
},
{
“type”: “HumanMessagePromptTemplate”,
“messageType”: “imageBinary”,
“binaryImageDataKey”: “data_9”
}
]
}
},
“id”: “96a24aa6-9ae5-4c37-9175-d4189987a568”,
“name”: “Video Explainer”,
“type”: “@n8n/n8n-nodes-langchain.chainLlm”,
“position”: [
3020,
400
],
“typeVersion”: 1.4
},
{
“parameters”: {
“promptType”: “define”,
“text”: “={{ $json.messages }}”,
“messages”: {
“messageValues”: [
{
“message”: “Summarize the user’s message succinctly.”
}
]
}
},
“id”: “2ad0e104-0924-47ef-ad11-d84351d72083”,
“name”: “Text Summarizer”,
“type”: “@n8n/n8n-nodes-langchain.chainLlm”,
“position”: [
3020,
1040
],
“typeVersion”: 1.4
},
{
“parameters”: {
“resource”: “audio”,
“operation”: “transcribe”,
“options”: {}
},
“id”: “9252b266-0452-4141-b8d5-b4f2b5fd76fb”,
“name”: “Audio Transcriber”,
“type”: “@n8n/n8n-nodes-langchain.openAi”,
“position”: [
3180,
40
],
“typeVersion”: 1.5,
“credentials”: {
“openAiApi”: {
“id”: “4k7PsmAWAIqJ03Vt”,
“name”: “OpenAi account”
}
}
},
{
“parameters”: {
“pageId”: {
“__rl”: true,
“mode”: “url”,
“value”: “”
},
“options”: {}
},
“id”: “2d6cbe9b-1b35-497c-bdab-987088237e41”,
“name”: “Notion”,
“type”: “n8n-nodes-base.notionTool”,
“typeVersion”: 2.2,
“position”: [
4120,
800
]
},
{
“parameters”: {
“calendar”: {
“__rl”: true,
“value”: “********@gmail.com”,
“mode”: “list”,
“cachedResultName”: “********@gmail.com”
},
“additionalFields”: {}
},
“id”: “00577d91-1b67-4df8-80a0-107c83ab5537”,
“name”: “Google Calendar”,
“type”: “n8n-nodes-base.googleCalendarTool”,
“typeVersion”: 1.1,
“position”: [
4320,
800
],
“credentials”: {
“googleCalendarOAuth2Api”: {
“id”: “wz5vQ47m1Gthfmvy”,
“name”: “Google Calendar account”
}
}
},
{
“parameters”: {
“rules”: {
“values”: [
{
“conditions”: {
“options”: {
“caseSensitive”: true,
“leftValue”: “”,
“typeValidation”: “strict”,
“version”: 2
},
“conditions”: [
{
“operator”: {
“type”: “boolean”,
“operation”: “true”,
“singleValue”: true
},
“leftValue”: “={{ $json.type == ‘audio’ && Boolean($json.audio) }}”,
“rightValue”: “audio”
}
],
“combinator”: “and”
},
“renameOutput”: true,
“outputKey”: “Audio Message”
},
{
“conditions”: {
“options”: {
“caseSensitive”: true,
“leftValue”: “”,
“typeValidation”: “strict”,
“version”: 2
},
“conditions”: [
{
“id”: “82aa5ff4-c9b6-4187-a27e-c7c5d9bfdda0”,
“operator”: {
“type”: “boolean”,
“operation”: “true”,
“singleValue”: true
},
“leftValue”: “={{ $json.type == ‘video’ && Boolean($json.video) }}”,
“rightValue”: “”
}
],
“combinator”: “and”
},
“renameOutput”: true,
“outputKey”: “Video Message”
},
{
“conditions”: {
“options”: {
“caseSensitive”: true,
“leftValue”: “”,
“typeValidation”: “strict”,
“version”: 2
},
“conditions”: [
{
“id”: “05b30af4-967b-4824-abdc-84a8292ac0e5”,
“operator”: {
“type”: “boolean”,
“operation”: “true”,
“singleValue”: true
},
“leftValue”: “={{ $json.type == ‘image’ && Boolean($json.image) }}”,
“rightValue”: “”
}
],
“combinator”: “and”
},
“renameOutput”: true,
“outputKey”: “Image Message”
}
]
},
“options”: {
“fallbackOutput”: “extra”,
“renameFallbackOutput”: “Text Message”
}
},
“id”: “325dac6d-6698-41e0-8d2f-9ac5d84c245e”,
“name”: “Redirect Message Types”,
“type”: “n8n-nodes-base.switch”,
“position”: [
1480,
580
],
“typeVersion”: 3.2,
“notesInFlow”: false,
“alwaysOutputData”: true
}
],
“connections”: {
“WhatsApp Trigger”: {
“main”: [
[
{
“node”: “Split Out Message Parts”,
“type”: “main”,
“index”: 0
}
]
]
},
“Get Audio URL”: {
“main”: [
[
{
“node”: “Download Audio”,
“type”: “main”,
“index”: 0
}
]
]
},
“Get Video URL”: {
“main”: [
[
{
“node”: “Download Video”,
“type”: “main”,
“index”: 0
}
]
]
},
“Get Image URL”: {
“main”: [
[
{
“node”: “Download Image”,
“type”: “main”,
“index”: 0
}
]
]
},
“OpenAI Chat Model”: {
“ai_languageModel”: [
[
{
“node”: “Video Explainer”,
“type”: “ai_languageModel”,
“index”: 0
}
]
]
},
“Capture Frames”: {
“main”: [
[
{
“node”: “Split Out Frames”,
“type”: “main”,
“index”: 0
}
]
]
},
“Split Out Frames”: {
“main”: [
[
{
“node”: “Convert to Binary”,
“type”: “main”,
“index”: 0
}
]
]
},
“Download Video”: {
“main”: [
[
{
“node”: “Capture Frames”,
“type”: “main”,
“index”: 0
}
]
]
},
“Convert to Binary”: {
“main”: [
[
{
“node”: “Aggregate Binary Frames”,
“type”: “main”,
“index”: 0
}
]
]
},
“Aggregate Binary Frames”: {
“main”: [
[
{
“node”: “Video Explainer”,
“type”: “main”,
“index”: 0
}
]
]
},
“Download Audio”: {
“main”: [
[
{
“node”: “Audio Transcriber”,
“type”: “main”,
“index”: 0
}
]
]
},
“Download Image”: {
“main”: [
[
{
“node”: “Image Explainer”,
“type”: “main”,
“index”: 0
}
]
]
},
“OpenAI Chat Model1”: {
“ai_languageModel”: [
[
{
“node”: “Image Explainer”,
“type”: “ai_languageModel”,
“index”: 0
}
]
]
},
“OpenAI Chat Model2”: {
“ai_languageModel”: [
[
{
“node”: “Text Summarizer”,
“type”: “ai_languageModel”,
“index”: 0
}
]
]
},
“OpenAI Chat Model3”: {
“ai_languageModel”: [
[
{
“node”: “AI Agent”,
“type”: “ai_languageModel”,
“index”: 0
}
]
]
},
“Window Buffer Memory”: {
“ai_memory”: [
[
{
“node”: “AI Agent”,
“type”: “ai_memory”,
“index”: 0
}
]
]
},
“Get User’s Message”: {
“main”: [
[
{
“node”: “AI Agent”,
“type”: “main”,
“index”: 0
}
]
]
},
“Split Out Message Parts”: {
“main”: [
[
{
“node”: “Redirect Message Types”,
“type”: “main”,
“index”: 0
}
]
]
},
“Wikipedia”: {
“ai_tool”: [
[
{
“node”: “AI Agent”,
“type”: “ai_tool”,
“index”: 0
}
]
]
},
“Get Text”: {
“main”: [
[
{
“node”: “Text Summarizer”,
“type”: “main”,
“index”: 0
}
]
]
},
“Image Explainer”: {
“main”: [
[
{
“node”: “Get User’s Message”,
“type”: “main”,
“index”: 0
}
]
]
},
“AI Agent”: {
“main”: [
[
{
“node”: “Respond to User”,
“type”: “main”,
“index”: 0
}
]
]
},
“Video Explainer”: {
“main”: [
[
{
“node”: “Get User’s Message”,
“type”: “main”,
“index”: 0
}
]
]
},
“Text Summarizer”: {
“main”: [
[
{
“node”: “Get User’s Message”,
“type”: “main”,
“index”: 0
}
]
]
},
“Audio Transcriber”: {
“main”: [
[
{
“node”: “Get User’s Message”,
“type”: “main”,
“index”: 0
}
]
]
},
“Notion”: {
“ai_tool”: [
[
{
“node”: “AI Agent”,
“type”: “ai_tool”,
“index”: 0
}
]
]
},
“Google Calendar”: {
“ai_tool”: [
[
{
“node”: “AI Agent”,
“type”: “ai_tool”,
“index”: 0
}
]
]
},
“Redirect Message Types”: {
“main”: [
[
{
“node”: “Get Audio URL”,
“type”: “main”,
“index”: 0
}
],
[
{
“node”: “Get Video URL”,
“type”: “main”,
“index”: 0
}
],
[
{
“node”: “Get Image URL”,
“type”: “main”,
“index”: 0
}
],
[
{
“node”: “Get Text”,
“type”: “main”,
“index”: 0
}
]
]
}
},
“pinData”: {
“Split Out Message Parts”: [
{
“messages”: {
“from”: "316
”,
“id”: “wamid.HBgLMzE2NDI4MDQxMTIVAgASGCBCN0NFNDNCRDhERTgzMUYwRDU1NUI5ODNCREY5QkVBRgA=”,
“timestamp”: “1730229756”,
“type”: “audio”,
“audio”: {
“mime_type”: “audio/ogg; codecs=opus”,
“sha256”: “88ULvrLq8xfxj1iWcAXU1AzepVievinGRZHsWJoVFwQ=”,
“id”: “868036682104112”,
“voice”: true
}
},
“messages[0].audio”: {
“mime_type”: “audio/ogg; codecs=opus”,
“sha256”: “88ULvrLq8xfxj1iWcAXU1AzepVievinGRZHsWJoVFwQ=”,
“id”: “868036682******”,
“voice”: true
}
}
]
}
}

Hey @Derrel

Just realised this template was one of mine.
It seems your incoming WhatsApp event was different to mine when I was building this workflow. To fix, you’ll need to add an extra node to remap the event object so the template can understand it.
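To illustrate the idea (not the exact fix, since the actual payload shape still needs confirming against your trigger output): a small helper like the one below could normalize the incoming event so downstream nodes always find a `messages` array. The `entry[0].changes[0].value.messages` path is an assumption based on the WhatsApp Cloud API webhook format; verify it against what your trigger actually returns before using it.

```python
# Hypothetical remap helper: normalize a WhatsApp trigger payload so the
# template can always read event["messages"], whether the messages array
# arrives at the top level or nested under entry/changes/value (the Cloud
# API webhook shape -- an assumption, check your own trigger output).
def normalize_whatsapp_event(event: dict) -> dict:
    messages = event.get("messages")
    if not messages:
        try:
            messages = event["entry"][0]["changes"][0]["value"]["messages"]
        except (KeyError, IndexError, TypeError):
            messages = []
    return {"messages": messages}
```

In an n8n Code node you'd apply this to each incoming item's JSON and return the result, placed right between the WhatsApp Trigger and the "Split Out Message Parts" node.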

Jim, thank you. Yes, indeed, this is your template. The issue I have now is that everything goes through the audio path, even if I send a text message through WhatsApp. Maybe the WhatsApp output has changed a bit?

@Derrel Thanks for the confirmation.

Can you post the output of the WhatsApp trigger so I can compare? If it’s a lot of work to scrub out the private parts, happy for you to DM me instead.
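While comparing, it may help to know what the "Redirect Message Types" Switch node is looking for: each message item needs a top-level `type` string plus a matching `audio`/`video`/`image` object, and anything else falls through to the "Text Message" fallback output. A rough Python emulation of those conditions, for debugging only:

```python
# Rough emulation of the "Redirect Message Types" Switch node conditions
# from the workflow above. A message routes to a media branch only if both
# its "type" string and the matching payload object are present; otherwise
# it hits the fallback output ("Text Message"), as configured in the node.
def route_message(msg: dict) -> str:
    for kind in ("audio", "video", "image"):
        if msg.get("type") == kind and bool(msg.get(kind)):
            return kind.capitalize() + " Message"
    return "Text Message"
```

If the trigger output lacks the top-level `type` field, or nests the message fields differently, every item ends up in a single branch, which would match the symptom in this thread.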

Thank you @Jim_Le, how do I DM you?

@Derrel Click on my profile picture and it’ll bring up my profile modal. There is a green message button to the right.