Help Needed: Dynamically Finding Todoist Project & Section IDs with AI Agent Workflow

Hi everyone,

I’m currently working on building an AI-powered agent within n8n to manage Todoist tasks based on natural language input. The goal is to have a seamless integration where a user can make requests (e.g., “Create a task ‘Review report’ in project ‘Client Work’, section ‘Pending Review’ with the label ‘urgent’”), and the agent handles the task creation/update in Todoist.

My Workflow Setup:

  1. User input (e.g., chat message) triggers the workflow.
  2. An OpenAI Chat Model node (using function calling, within a Langchain agent structure) processes the request to extract key information like task content, target project name, target section name, and label names.
  3. Subsequent n8n nodes are supposed to process this information and interact with the Todoist API.

I’m facing difficulties specifically when trying to create or update tasks that require specifying a Project ID and a Section ID.

  • The AI model correctly identifies the names of the project and section based on the user’s request.
  • However, the Todoist nodes (like “Create Task”) require the numerical Project ID and Section ID for these operations.
  • My attempts to dynamically look up these IDs based on the names provided by the AI often fail, leading to errors like “400 Bad Request - Invalid argument value”.

Specific Problem Area:

I’m struggling to find a robust way to:

  1. Reliably fetch the correct Project ID using the project name provided by the AI (using “Todoist - Get Many Projects” with filters).
  2. Crucially, fetch the correct Section ID using the section name provided by the AI, ensuring it belongs to the specific Project ID found in the previous step (using “Todoist - Get Many Sections” with filters for both Project ID and Section Name).
  3. Handle potential errors gracefully (e.g., project/section name not found, or multiple matches).

I suspect my current method of filtering and mapping the results might be incorrect, especially in reliably passing the correct IDs to the final Todoist “Create Task” node.
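
For context, here is roughly the chain I am trying to reproduce with those nodes, written as raw calls against the Todoist REST API v2 (just a sketch, not my actual workflow; the token and the "Client Work" / "Pending Review" names are placeholders taken from my example request):

const TODOIST_TOKEN = "<your-api-token>";  // placeholder, not a real token
const headers = { Authorization: `Bearer ${TODOIST_TOKEN}` };

// Step 1: list all projects, then match the project name the AI extracted.
const projects: { id: string; name: string }[] = await (
  await fetch("https://api.todoist.com/rest/v2/projects", { headers })
).json();
const project = projects.find((p) => p.name === "Client Work");
if (!project) throw new Error("Project not found");

// Step 2: list the sections of that project only, then match the section name.
// This is the dependency: the sections call needs the project ID from step 1.
const sections: { id: string; name: string }[] = await (
  await fetch(`https://api.todoist.com/rest/v2/sections?project_id=${project.id}`, { headers })
).json();
const section = sections.find((s) => s.name === "Pending Review");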

My Question:

Could anyone share best practices, node configurations, or example workflow snippets for reliably mapping project/section names (identified by an AI) to their corresponding IDs in Todoist within an n8n workflow? How do you handle the dependency between finding the Project ID first and then finding the Section ID within that specific project?

I’m happy to provide a link to my current workflow template for context.

Thanks so much in advance for any guidance or suggestions!

Here’s the JSON code

Hi @Nukleoh,

Since you are starting the workflow with this query:

create a task X, in the Y project and Z section

What about adding an Information Extractor node before the AI agent, with the following attributes: task, project, and section?

This way, you will have more control over these attributes before passing them to the agent, reducing the AI’s need to guess.
I think you can even eliminate the use of $fromAI().
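
Just to illustrate the idea (the field names below are only an example of the structured data the extractor would hand to the agent, not the node's exact output schema):

interface ExtractedFields {
  task: string;     // e.g. "Review report"
  project: string;  // e.g. "Client Work"
  section: string;  // e.g. "Pending Review"
}

const example: ExtractedFields = {
  task: "Review report",
  project: "Client Work",
  section: "Pending Review",
};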

I hope the general idea is clear. Try implementing it and see the results.

Hello @mohamed3nan,

Thanks a lot for your suggestion regarding the “Information Extractor” node!

I’ve actually been working on this specific workflow integration all day today. Following a similar logic of needing structured data, I’ve already implemented a Set node after my main AI Agent (the OpenAI node processing the natural language request).

This Set node successfully isolates the variables identified by the AI – like project_name, section_name, task_content, label_names, etc. – based on the AI’s output (which seems to correctly extract the names from the user request).

However, even with these variables correctly populated with the names, I’m still encountering the exact same 400 Bad Request - Invalid argument value error when the workflow reaches the Todoist “Create Task” node.

My strong suspicion remains that the core issue lies here: the Todoist API requires the numerical Project ID and Section ID, not the names. While my AI and Set nodes now correctly provide the names, I haven’t found a reliable solution within n8n to consistently look up the corresponding IDs based on these names before calling the Create Task node. Getting the Section ID specific to the found Project ID seems particularly tricky.

So, while extracting information earlier might shift where the names are parsed, the fundamental challenge I’m stuck on is converting those identified names into the required IDs for the Todoist API call.

I still believe the ID lookup based on names is where I need the most help.

A link to my workflow is available if taking a look would help clarify the situation.

Thanks again for your input!

@Nukleoh
Thank you for explaining it so clearly; I believe I understand now.

Unfortunately, I do not see a built-in action to retrieve project information directly.
However, you can use the Todoist API to fetch a list of all your projects.
The response will be a JSON array containing details for each project, such as its id and name.

Once you have this list, you can search for the project by name to obtain its corresponding ID.

Consider implementing this as a sub-workflow that executes after the required inputs are received, and then continues on to the agent.
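
As a rough illustration of that sub-workflow's logic, written as plain TypeScript against the Todoist REST API v2 (in n8n this would typically be an HTTP Request node followed by a Code node; the token is a placeholder):

const headers = { Authorization: "Bearer <your-api-token>" };  // placeholder token

// Fetch every project, then search the list for the project name the AI extracted.
async function findProjectId(projectName: string): Promise<string> {
  const res = await fetch("https://api.todoist.com/rest/v2/projects", { headers });
  if (!res.ok) throw new Error(`Todoist returned ${res.status} while listing projects`);
  const projects: { id: string; name: string }[] = await res.json();

  // Case-insensitive comparison so "client work" still resolves to "Client Work".
  const matches = projects.filter(
    (p) => p.name.trim().toLowerCase() === projectName.trim().toLowerCase(),
  );

  if (matches.length === 0) throw new Error(`No project named "${projectName}"`);
  if (matches.length > 1) throw new Error(`Multiple projects named "${projectName}"`);
  return matches[0].id;
}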

Thank you so much for that detailed explanation and the very helpful suggestion!

Your approach makes perfect sense. I believe I understand the concept now.

I’ll start working on implementing this solution right away. I’ll definitely report back here and share how it goes (and hopefully the working solution!) once I’ve figured it out.

Thanks again for pointing me in this direction, I really appreciate your help!


Hello @Nukleoh and respected @mohamed3nan,
Good morning!

Open a separate workflow and test this suggestion below to see if it makes sense for you.

This workflow extracts the names via AI, then converts each name (project, section, label) into its ID using searches and functions, with error handling at each step.

Only after everything is resolved does it create the task in Todoist.

The AI Agent needs to be configured to extract the variables (project_name, section_name, task_content, label_names) correctly and pass them to the "Set Vars" node, because the agent must return structured JSON, for example:

{
  "project_name": "My Project",
  "section_name": "Review",
  "task_content": "Prepare report",
  "label_names": "urgent,important"
}
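
To make the conversion step concrete, the name-to-ID logic and the final task creation amount to something like the sketch below (plain TypeScript against the Todoist REST API v2, not the exact n8n nodes; the token is a placeholder, and note that the v2 tasks endpoint accepts label names directly, so labels may not need an ID lookup at all):

const api = "https://api.todoist.com/rest/v2";
const headers = {
  Authorization: "Bearer <your-api-token>",  // placeholder token
  "Content-Type": "application/json",
};

// Resolve the section ID within the already-resolved project, so a section
// name that exists in several projects can never match the wrong one.
async function findSectionId(projectId: string, sectionName: string): Promise<string> {
  const res = await fetch(`${api}/sections?project_id=${projectId}`, { headers });
  const sections: { id: string; name: string }[] = await res.json();
  const match = sections.find(
    (s) => s.name.trim().toLowerCase() === sectionName.trim().toLowerCase(),
  );
  if (!match) throw new Error(`Section "${sectionName}" not found in project ${projectId}`);
  return match.id;
}

// Create the task only after both IDs have been resolved.
async function createTask(vars: {
  project_id: string;
  section_id: string;
  task_content: string;
  label_names: string;  // comma-separated, as in the JSON example above
}): Promise<void> {
  const res = await fetch(`${api}/tasks`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      content: vars.task_content,
      project_id: vars.project_id,
      section_id: vars.section_id,
      // The REST v2 tasks endpoint accepts label names; older APIs expected IDs.
      labels: vars.label_names.split(",").map((l) => l.trim()),
    }),
  });
  if (!res.ok) throw new Error(`Todoist returned ${res.status}: ${await res.text()}`);
}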

Here is a suggestion. I hope it helps you.

If this suggestion solved your problem, please mark my post as the solution (the blue box with the check mark) so this discussion does not distract others looking for the answer to the original question, and leave a click on the heart :heart:. Thank you :+1:


Hello @interss and @mohamed3nan

Good morning, and thank you so much for your detailed reply, the clear explanation, and the workflow suggestion!

Your breakdown of the process definitely helped me to move forward and gave me a much clearer picture of the necessary steps. The emphasis on the structured JSON output from the AI is also a key point.

I’ve been working on implementing this logic based on the guidance provided in this thread. However, I’m still encountering some specific bugs within my current workflow that are preventing it from running smoothly from end to end right now.

Because of these ongoing issues, my workflow isn’t quite ready to be presented here as a complete, working solution just yet.

Therefore, as you suggested, I will mark your helpful post as the solution for now and close this topic so it doesn’t distract others seeking answers. I really appreciate the time and effort you put into helping me.

If I manage to get a fully working and cleaned-up version later on, I will try to come back and edit or share it if possible.

Thanks again to everyone who contributed!

This workflow is corrupted and clearly AI-generated. You didn’t even review or test it. The Todoist node is also using an outdated version, which is another clear sign of AI generation.

Additionally, the following nodes are not correctly connected!

@interss Kindly take a bit more time to review and test your posts before sharing. It’s perfectly fine to use AI, but ensuring the content works properly would greatly improve its quality and help everyone.


I hope you are well.
Thank you for your reply.

Feel free to share your bugs in this same thread, and I will do my best to help.
n8n is constantly evolving, which is both exciting and challenging; no one can know everything, so constant study is necessary.

I am available to be of assistance in any way!

Hello, respected @mohamed3nan, I hope you are well.

Thank you for your answer.

You are perfectly right to wonder whether I use AI, because I use AI every day.

I would love for AIs like Gemini, Claude, OpenAI, DeepSeek, Grok, Llama, and others to be so advanced that simply pasting the user's question into them would produce a correct answer.

But I know and you know that this is not possible yet (unfortunately).

If you take @Nukleoh's question and paste it into any of the AIs I mentioned above (I suggest you try it; the behavior of each one is interesting), you will see the kinds of answers they deliver.

When you do this, you will realize that I dedicated a considerable amount of time to answering the question @Nukleoh posed.

As for reviewing and testing the flow: at the beginning of each of my posts, I take care to ask the community user to open a separate workflow, add their credentials, and make the necessary adjustments to verify that the solution makes sense. That way, when users run their tests, they have not just a suggested solution but also a possible source of inspiration, whether to rework their entire flow or to take a part of it and attach it to their own.

I recognize that users may be able to use some flows as-is, but they may also use only a few excerpts and adjust them to their needs.

That said, as you have seen, the community posts answers of all varieties and degrees of difficulty, which is interesting and exciting because it is a way to grow and learn.

I admit that I learn along with the questions, because I have not always lived through or experienced the scenario the user describes.

It also constantly happens that by the time I post an answer, several others have already appeared. (I first translate the question, since I do not speak English, make sure I understand it, check whether I can be helpful, write my answer, translate it into English, and only then post it.) Having several answers is fine, but while I am writing there is only the question on the page, and as soon as I post, the page refreshes and several answers appear. That is exactly what happened with this question that you and I both helped with. Your answer was not there yet; when I look at other questions and see that they have already been answered, I check whether I can still be useful, and if not, I avoid answering or continuing, so as not to confuse the user or get in the way of whoever is already helping.

Notice that there is a gap of about two hours between the time the question was posted and my answer; it took me almost two hours to respond, and in that time you had already replied twice. By the time I posted and the page refreshed, your two answers were already there.

I apologize if I caused any discomfort, and please know that I admire how quickly you think and respond. I am truly not at your level of knowledge, but I keep studying, learning, and trying to be useful in some way.

Hi @mohamed3nan

You are absolutely right. I did encounter several issues while trying to implement the concepts discussed, and I initially thought the mistakes were on my end during the adaptation process.

I now realize that the problems you mentioned are real, and it confirms that the difficulties I was facing were not solely due to my own errors in understanding.

I’ve since had to significantly rework and readapt the approach based on the core ideas discussed earlier to get a functional workflow.

It definitely underscores your point about the importance of thoroughly reviewing and testing shared or potentially AI-generated workflows. I appreciate you highlighting these specific issues.

@interss, thank you anyway for pointing me toward the solution and the right nodes to use. I like problems, and on top of that, solving them :slight_smile:

Here’s the workflow I finally made to get it working:

Thanks again to the two of you!


@Nukleoh, I am happy about your recognition and glad to have been useful in some way.

I also agree with your words about what the respectable @mohamed3nan said.

I hope that the message I sent an hour ago made it clearer how I try to help and be useful to the community.

Thank you for sharing the final solution. Almost no one comes back to do that; often just a snippet or the idea of a flow is enough to help someone, so your coming back and sharing the full solution helps all of us grow and learn together.

Have a great day.
I wish you all success!

@interss Your response was a great help. That said, I understand @mohamed3nan's comment, as it's possible that some people might just be looking for a direct solution or quick fix.

As someone new to development, I find it much more helpful to see things explained in depth. It allowed me to learn how to use nodes other than the usual ones and showed me a different way of thinking about problem-solving, and for that, I thank you.

Regarding the solution, I feel it's only right to share it. You took the time to respond; the least I can do is share the working solution. It's just common courtesy.

Have a great day, you two!
