Whisper as a Service + LangChain Template


I’m playing around a bit with the LangChain node and it’s choking on me.

I chose the [AI/LangChain] Digest Podcast Episode template and, before the “Chunk the transcript into several parts, and refine-summarize it” step, I introduced the “Whisper-1 example” from the OpenAI examples, just to load an mp3 and transcribe it. Instead of using the “Whisper-transcribe” node, which calls the OpenAI API, I copied it and pointed the copy to my own Whisper-as-a-service instance to avoid incurring costs (https://github.com/ahmetoner/whisper-asr-webservice).
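For reference, a minimal sketch of what the copied node ends up calling: whisper-asr-webservice exposes an /asr endpoint that takes the task and desired output format as query parameters and the mp3 as a multipart form field. The host, port, and file name below are assumptions; adjust them to your deployment.

```python
# Sketch of the request the copied "Whisper-transcribe" node would make
# against a self-hosted whisper-asr-webservice (assumed default port 9000).
import urllib.parse


def build_asr_url(base: str, task: str = "transcribe", output: str = "json") -> str:
    """Build the /asr endpoint URL with task and output query parameters."""
    query = urllib.parse.urlencode({"task": task, "output": output})
    return f"{base}/asr?{query}"


url = build_asr_url("http://localhost:9000")
print(url)
# The mp3 itself goes in a multipart/form-data field named "audio_file", e.g.:
#   requests.post(url, files={"audio_file": open("episode.mp3", "rb")})
```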

Well… it loads the mp3 file fine and transcribes the text fine, but when entering the LangChain node I get the following error:

Problem in node ‘Summarize Transcript’

Cannot read properties of undefined (reading ‘pageContent’)

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.11.2
  • Database (default: SQLite): SQLite
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker

Hi @luisalrp, that’s a cool addition to that template! You were pretty close too, you just need to modify a few things in your workflow:

  1. It seems “Whisper as a Service” is sending data with content-type: text/plain even though you’ve set the output=json parameter. But you can force JSON parsing in n8n by changing Options -> Response -> Response Format -> JSON. Check the gif below:
    CleanShot 2023-10-24 at 16.18.22

  2. The error you are seeing is because the “Workflow Input to JSON” document loader cannot parse out the transcript: it looks for it in /transcript, but Whisper as a Service returns it under text. So the Pointers field should be /text. With these changes the workflow seems to pass :tada:. By the way, it’s good to use that pointer to pass only /text into your chain, because otherwise the “segments” array would also be sent to the LLM for summarization, which could result in much higher token costs.
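To make the fix concrete, here is a toy illustration (not n8n code) of what the JSON pointer in the Pointers field selects. `resolve_pointer` is a hypothetical helper standing in for the document loader’s pointer logic, and the `response` dict mimics the shape a Whisper-style JSON response has: a top-level "text" string plus a verbose "segments" array you usually don’t want to feed to the model.

```python
# Toy RFC 6901-style JSON pointer resolution, to show why "/text"
# works while "/transcript" comes back undefined.
def resolve_pointer(doc, pointer):
    """Resolve a simple JSON pointer like '/text' or '/segments/0/text'."""
    node = doc
    for token in pointer.lstrip("/").split("/"):
        node = node[int(token)] if isinstance(node, list) else node[token]
    return node


# Assumed shape of the Whisper-as-a-Service JSON output.
response = {
    "text": "Welcome to the show...",
    "segments": [{"id": 0, "start": 0.0, "end": 4.2, "text": "Welcome to the show..."}],
}

print(resolve_pointer(response, "/text"))          # just the transcript string
print(resolve_pointer(response, "/segments/0/end"))  # a field you'd rather not pass along
```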

Here’s the updated workflow:


Hi @oleg.

Thank you for your reply; it was thorough and instructive. Everything has worked well with the proposed changes :tada:

Now I am getting an error in the next group of nodes in the template. I didn’t expect an error here, since I haven’t touched anything in the template (apart from the node connections and the OpenAI API key selection).

Problem in node ‘Extract Topics & Questions’
Error on node “Structured Output Parser”, which is connected via input “ai_outputParser”

Another thing I’m wondering: when making the summary, if I wanted to write an article or a series of articles, the AI would have lost too much context, right? Especially if we are talking about very novel topics.

What strategy would you use for this other scenario? Ask it to pull out the key points instead of a summary and attach some context for writing an article? Which nodes would you use? I guess it’s a mix of the right LangChain functions and the right prompts, but I still need to learn more.

I’ll keep researching. I’m pretty sure that with the right YouTube videos, forum threads, and blog posts it’s doable :smiley:

Best, Luis.

@luisalrp Happy it worked! :raised_hands: I’m not sure what’s wrong with the output parsing in your case though; it runs as expected when I use the template. Could you provide more info about the input/output in your workflow?

As to your second question, I’m not sure I fully follow. You want to create an article based on some lengthy body of text, let’s say a book? There are several levels of summarisation you could apply here.

How I would start to approach this:

  1. Run summarization for every N pages, always providing it with some context about the book plus the Nth chunk of page content
  2. Create a chain that accepts the chunked (N-page) summarizations and returns the key ideas / most important facts for each
  3. Create a chain that accepts the summary of summaries and the key ideas and returns an article outline: a heading plus a very short description of each section
  4. Embed the book into chunks and store them in some vector store
  5. For each section, run a QA retrieval chain over the vector store and let the AI write the section content based on the retrieved context
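Steps 1-3 above can be sketched roughly in plain Python. `summarize` is a hypothetical stand-in for an LLM call (here just a toy function that keeps the first sentence); in n8n these would be chained LangChain summarization / LLM nodes.

```python
# Rough sketch of hierarchical summarisation: per-chunk summaries first,
# then a summary of summaries that a later chain could turn into an outline.
from typing import Callable, List


def chunk_pages(pages: List[str], n: int) -> List[List[str]]:
    """Split the book's pages into chunks of N pages each."""
    return [pages[i:i + n] for i in range(0, len(pages), n)]


def hierarchical_summary(
    pages: List[str],
    n: int,
    context: str,
    summarize: Callable[[str], str],
) -> str:
    # Step 1: summarise every N pages, always prepending the book context.
    chunk_summaries = [
        summarize(context + "\n" + "\n".join(chunk))
        for chunk in chunk_pages(pages, n)
    ]
    # Steps 2-3: feed the summary of summaries to a further chain.
    return summarize("\n".join(chunk_summaries))


# Toy stand-in for the LLM: keep only the first sentence of the input.
first_sentence = lambda text: text.split(".")[0] + "."
pages = ["Page one. More.", "Page two. More.", "Page three. More."]
print(hierarchical_summary(pages, n=2, context="Book: Example", summarize=first_sentence))
```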

Writing the final section content from the vector-store context should improve the chance of the LLM using the right information to write that section.
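Steps 4-5 boil down to similarity search over embedded chunks. A toy sketch, with a bag-of-words Counter standing in for a real embedding model and cosine similarity as the retrieval metric; in practice the vector store node and an embeddings model do this for you.

```python
# Minimal retrieval sketch: "embed" chunks, rank them against the query
# by cosine similarity, and return the top-k as context for the writer.
from collections import Counter
from math import sqrt
from typing import Dict, List


def embed(text: str) -> Dict[str, int]:
    """Toy bag-of-words 'embedding' (a real model returns dense vectors)."""
    return Counter(text.lower().split())


def cosine(a: Dict[str, int], b: Dict[str, int]) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: List[str], k: int = 2) -> List[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


chunks = [
    "Whisper transcribes audio.",
    "Vector stores hold embeddings.",
    "n8n runs workflows.",
]
print(retrieve("how do vector stores work?", chunks, k=1))
```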

Sounds like an intriguing project! Can’t wait to see the results if you decide to go ahead with it. Please share it with the community! :blush:


Haha, what I’m trying isn’t rocket science, but I’m a pseudo-programmer and there are a lot of new concepts that aren’t easy.

I developed a product that isn’t complete yet, and I have to start working on the marketing side, because if nobody knows about it, it’s as if it had never been built.

A few weeks ago I attended a conference, and one of the speakers talked about AI and how he uses it. He created a website that publishes articles on topics from experts, based on videos he likes. So his stack loads a video, transcribes it, and then writes an article using the content from the video, in the same style he himself writes in.

It blew my mind :exploding_head:. Here is the result (it is in Spanish).

Thanks for sharing your approach. I’m going to try it again!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.