Cognis – Automated animated math lesson pipeline built with n8n

Hi n8n community! I wanted to share a workflow I built for Cognis, an AI-powered educational platform generating animated math lesson videos for Uzbekistan’s grades 1–4 curriculum.


What it does

The pipeline takes official Uzbek math textbooks and curriculum standards as input, and outputs fully animated educational videos — in Uzbek, Russian, and English — without any manual work between steps.

End-to-end flow:
Textbook PDF → GPT-4.1 topic parser → Scene planner → SVG animation director (27 primitives) → FLUX.2 image gen → Wan 2.2 I2V → Remotion scene assembler → ElevenLabs TTS (3 langs) → Adaptive audio sync → Final MP4


Key nodes in the workflow

  • HTTP Request → GPT-4.1 for topic extraction, scene planning, script writing, and SVG validation/auto-fix loop
  • HTTP Request → FLUX.2 [dev] for character image generation
  • HTTP Request → Wan 2.2 I2V for image-to-video animation
  • HTTP Request → ElevenLabs for TTS in 3 languages
  • Code node → Adaptive sync: calculates animation timing from audio duration
  • IF node → SVG validator retry loop (regenerates if validation fails)
  • HTTP Request → Remotion render API for final MP4 export
  • Split in batches → Parallel processing per scene
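The SVG validator in the IF-node retry loop could be as simple as a structural check against the allowed primitive set. A minimal sketch, assuming it runs in an n8n Code node; the element whitelist below is a placeholder subset of the 27 primitives, and `validateSvg` is an illustrative name, not the actual implementation:

```javascript
// Placeholder subset of the pipeline's 27 allowed SVG primitives (assumption).
const ALLOWED = new Set(["svg", "g", "rect", "circle", "ellipse", "line", "path", "text", "polygon"]);

function validateSvg(svg) {
  const errors = [];
  // Root element must be <svg>.
  if (!/<svg[\s>]/.test(svg)) errors.push("missing <svg> root");
  // Collect every opening/self-closing tag name and check it against the whitelist.
  const tags = [...svg.matchAll(/<\s*([a-zA-Z][\w-]*)/g)].map((m) => m[1]);
  for (const t of tags) {
    if (!ALLOWED.has(t)) errors.push(`disallowed element <${t}>`);
  }
  return { valid: errors.length === 0, errors };
}
```

If `valid` is false, the IF node would route the errors back to the GPT-4.1 node as feedback for a regeneration attempt.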

The interesting part — adaptive audio sync

Instead of hardcoding animation durations, each scene’s timing is driven by the actual TTS audio length. This means switching languages (Uzbek → Russian → English) requires zero manual adjustment — the animation adapts automatically.
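The core of that sync step is a small calculation: take the measured TTS clip length and derive the frame count Remotion needs. A hedged sketch of what the Code node might look like; the field names (`audioDurationSec`, `fps`, `padSec`) are assumptions, not the actual workflow's schema:

```javascript
// Derive a scene's animation timing from the measured TTS audio duration,
// instead of hardcoding it per language.
function sceneTiming(audioDurationSec, fps = 30, padSec = 0.5) {
  // Pad slightly so the animation never cuts off the narration tail.
  const totalSec = audioDurationSec + padSec;
  return {
    durationInFrames: Math.ceil(totalSec * fps), // what a Remotion composition consumes
    audioStartFrame: 0,
  };
}
```

With this, the same scene plan works whether the Uzbek clip runs 6 seconds or the Russian one runs 9; only `durationInFrames` changes downstream.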


Why n8n?

The pipeline has ~40 nodes across 4 parallel lanes. n8n made it easy to:

  • Handle retry logic for AI calls
  • Run scene generation in parallel batches
  • Connect 6 different AI APIs without custom glue code
  • Test individual nodes without re-running the full pipeline
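For context on the retry point above: n8n's nodes can retry on failure via node settings, but the equivalent logic, if you wanted it in a Code node, is a short backoff wrapper. An illustrative sketch only; `withRetry` and its options are hypothetical names:

```javascript
// Retry a flaky async call (e.g. an AI API request) with exponential backoff.
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Backoff: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```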

Happy to share the workflow JSON if anyone is building similar AI video generation pipelines. Would love feedback from the community!

Tags: ai, education, video-generation, gpt-4, elevenlabs, remotion

The adaptive audio sync piece is the interesting bit: most people hardcode animation durations and then scramble when switching languages, so letting the actual TTS output drive the timing is a much cleaner approach. Curious whether the SVG validation retry loop ever runs more than 2–3 iterations in practice, or does the LLM usually get it right on the first pass?