Help with AI prompting

Describe the problem/error/question

  1. My goal is for the AI to receive a job link (URL) and fetch the posting using the get_job_description tool
  2. scrape the website for keywords
  3. use one of my templates (get_IT_resume or get_customer_resume)
  4. create a new resume from the keywords using the create_new_resume tool

The issue I am having is that the workflow isn’t predictable. Sometimes it will use get_IT_resume or get_customer_resume, but other times it will not. And it will never take the resume from get_IT_resume and pass it into create_new_resume.

I am new to AI prompting and am wondering if my prompting is the issue or if I need to add steps.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

I am running n8n 2.4.4 in a Docker container on Docker Desktop
no database

@Unfounded8673 Here are some ways to improve consistency and control within n8n’s AI Agent and tools.

  1. Make the crucial steps deterministic.
    Move critical actions that must always happen, such as scraping or template selection, out of the Agent and into regular nodes. This ensures they always run and are not dependent on the model.

  2. Tighten the prompt.
    Give your system prompt specific, high-level rules, such as:

  • Always call get_job_description first.
  • Call either get_IT_resume or get_customer_resume exactly once.
  • Always call create_new_resume with the generated content.
    These rules guide the model.
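
Put together, a system prompt along those lines might read (wording illustrative, adapt it to your tool names):

```
You are a resume-tailoring assistant. Follow these rules strictly:
1. Always call get_job_description first with the job URL the user provides.
2. Based on the keywords found, call either get_IT_resume or
   get_customer_resume, exactly once.
3. Always call create_new_resume with the full tailored resume text.
   Never skip this step.
```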
  3. Pass the resume content explicitly.
    In your create_new_resume node, add a body parameter and use the $fromAI('resume_body', ...) function to insert the resume text. Revise the prompt so the model is instructed to always provide the content there.

  4. Consider encapsulating multi-step logic.
    Use sub-workflows or a single custom tool to scrape, choose the template, and create the document, so the Agent only has to handle high-level decisions.
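
As a sketch of point 3, the body parameter of the create_new_resume tool could contain an expression like this; the key name resume_body and its description are placeholders to adapt:

```
{{ $fromAI('resume_body', 'The full tailored resume text built from the chosen template and the scraped keywords', 'string') }}
```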

With these changes you should get more reliable and predictable results.

What you say makes sense. However, the choice of resume depends on the results of the scrape. Would the worker have access to the sub-workflows? The URL changes each time, which is why I use the chat trigger. Can I use a chat trigger outside of an agent?

Yes, you can use a Chat Trigger outside an AI Agent as the entry point for your workflow. If you want to call a sub-workflow (for scraping or resume selection, say) depending on the scraped result, invoke it from normal workflow logic using Execute Workflow/Sub-workflow nodes. A worker/chain called by an AI Agent will not automatically have access to other sub-workflows unless it is explicitly given them.

This is my updated workflow. I am struggling with how to pass data from a parent workflow to the child.

This is the Resume choice tool

To pass data from one workflow to another, use the “Execute Another Workflow” node in your parent workflow, and in your child workflow add a “When Executed by Another Workflow” trigger. To pass data from parent to child, your child workflow’s trigger config should first look like this:

Once you have added SomeData, make sure to publish the child workflow. Then go to your parent workflow’s “Execute Sub-Workflow” node, where you will be able to see it like this:

That is how you transfer data between workflows seamlessly. One thing I would add: when testing the parent-child workflow setup, check the executions log for any errors. Hope this helps.
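
Since the screenshots may not render here, the setup described above looks roughly like this (the field name SomeData is from the example; the parent-side expression is illustrative):

```
Child workflow:
  Trigger: When Executed by Another Workflow
    Input data mode: Define using fields below
    Fields: SomeData (string)

Parent workflow:
  Node: Execute Sub-Workflow
    Source: Database (select the child workflow from the list)
    Workflow Inputs: SomeData = {{ $json.someValue }}
```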

Hello @Unfounded8673 ,

Because you are using the AI Agent (via the Tool Workflow node in the parent), the standard “Execute Workflow” trigger in the child won’t work. The AI needs a specific “Tool” interface to talk to.

Here is how to fix the connection and pass your data:

1. Fix the Child Workflow (The Root Cause)

  • Delete the node When Executed by Another Workflow.
  • Add a Workflow Tool Trigger node instead.
  • In this new node, look for the Schema section. This is where you define what data acts as the “Input.” Add a field here (e.g., user_id or resume_category) so the child workflow knows what to expect.

2. Pass the Data from the Parent

Now, go back to your Parent Workflow and open the resume_choice_tool node:

  • To let the AI decide: If you want the AI to figure out the data (like “IT” vs “Customer Service”) from the chat, just leave it alone. The AI will look at the Schema you created in Step 1 and fill it in automatically.
  • To force specific data: If you need to pass the ID from your database node, look for Workflow Inputs.
    • Set it to Define Below.
    • Add a field with the exact same name you used in the Child Schema (e.g., user_id).
    • Map the value using an expression: {{ $('Get row(s)').first().json.id }}.

Once you swap that trigger node in the child workflow, the connection will work!

Your solution doesn’t seem to work; my screen does not look like yours.
When I go to the database option, it says there was an error: workflow not active.

@Unfounded8673 Can you mention where exactly you are having issues in following my flow and setup?

So I understand what you are saying. This is the second version of this workflow that I am having issues with. @Anshul_Namdev suggested that I separate the logic into separate workflows. Your solution works, but I lose a lot of context, like the job title, company name, and keywords. The job title and company name aren’t a huge deal, but because each job description has a different number of keywords that I would need to put on my resume, I can’t pass the keywords as a fixed parameter, since their number might change.

I am having difficulty structuring this workflow. I do not know how to have the AI take the information I have and use the keywords to edit a pre-existing resume.

This is what you asked, and that is why I made that brief outline of how to do it.

The parameters you can add are endless and are not limited to just user_id or resume_category.

Use an AI Agent, give that AI Agent tools to read your information, and provide a set of instructions that contains the keywords to use.

Let me frame an example for you. A resume is uploaded to a certain Google Drive folder, and that upload is the trigger. Once it fires, the workflow runs the AI Agent. The Agent uses Google Docs tools to read, update, and write content into that file, saves it, and then generates a small text report telling the user which changes were made. As Agent tools you can provide FireCrawl/SerpAPI for scraping data and keywords from the website, plus read access to your template so the AI Agent knows what format to follow. On the Docs side, either allow edits or just let the AI Agent create a new resume. An architecture along these lines, adapted to your design and use case, would be really helpful. Let me know if you need more clarification or an example structure of how things would look.
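
As a rough outline (node and tool names as described above), that architecture would look like:

```
Google Drive trigger (resume uploaded to a watched folder)
  → AI Agent
      tools:
        - Google Docs (read/update the resume, or create a new one)
        - FireCrawl or SerpAPI (scrape the job posting for keywords)
        - template (read-only, so the Agent knows the format to follow)
  → short text report to the user listing the changes made
```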

Hi @Unfounded8673, welcome!

As you mentioned, this approach isn’t predictable: using a single AI agent with many tools requires a very strong model that follows instructions, plus strict prompting. You could also use an AI Agent Tool as a sub-tool to split the tasks.

IMO, since the workflow you described is straightforward, the solution is to make your workflow deterministic (node-by-node) rather than autonomous:

Instead of one agent doing everything, build the workflow sequentially:

  • Take the Job URL.
  • Scrape the website to extract the required info.
  • Use an AI step here just to generate/fill the specific resume template.
  • Generate the final resume.

This keeps the process simple and well defined, without the complexity and unpredictability of an AI Agent’s logic.
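
As a sketch of the keyword part of the AI step, here is a minimal extractor you could drop into a Code node before the AI fills the template. The stop-word list, the limit, and the jobDescription field name are all illustrative:

```javascript
// Counts word frequency in the scraped job description and returns the
// most common terms, skipping very short words and a few stop words.
function extractKeywords(text, limit = 10) {
  const stopWords = new Set(['the', 'and', 'for', 'with', 'you', 'are',
                             'our', 'will', 'that', 'this', 'have']);
  const counts = new Map();
  // Keep only alphabetic "words" of 3+ characters.
  for (const word of text.toLowerCase().match(/[a-z]{3,}/g) ?? []) {
    if (!stopWords.has(word)) counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  // Sort by frequency, highest first, and keep the top `limit` terms.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([word]) => word);
}

// In an n8n Code node you might return it like:
// return [{ json: { keywords: extractKeywords($input.first().json.jobDescription) } }];
```

Because the function returns an array, a varying number of keywords per job description is not a problem downstream.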

Thank you. I totally agree and like your organization of the workflow. I am actually trying to implement it. The first and second steps are done and work just fine; I am just struggling with the implementation of the third and fourth steps.

You can create a general Google Doc template that contains some basic, unchanged information, and for the sections you want to modify, leave empty tables or placeholder text that you can later find and replace, or insert content into.

There are many options in the Google Docs node that you can experiment with:

So in the workflow, the fourth step would be to duplicate the template and then modify it; this way, you can generate a different CV each execution.

The third step, the AI part, would look like this:

  • You take the output of the scraped job description.
  • Decide exactly what information you need from it.
  • Then use the Output Parser so that its results become inputs for the CV fields.

By configuring the Output Parser properly, the outputs will map directly to the CV sections in your template.
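
For example, a Structured Output Parser schema covering the fields mentioned in this thread (field names illustrative) could be:

```json
{
  "type": "object",
  "properties": {
    "job_title":    { "type": "string" },
    "company_name": { "type": "string" },
    "keywords":     { "type": "array", "items": { "type": "string" } }
  },
  "required": ["job_title", "company_name", "keywords"]
}
```

Using an array for keywords also addresses the earlier concern that their number varies per job description.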

This approach makes the process deterministic and keeps you in full control, with the AI limited to generating the fillings for the predefined template sections.

It requires some effort, especially to get the referencing right, but the results are more reliable, I think.

When I try to pass information from the parent workflow to the child, I get the following errors. I get an “invalid syntax” error if I do this: "{{ "test" : $('job_information').item.json.body.data.jobDescription }}

and the error “the provided workflow is not valid JSON” when I do this: {{ "test" : $('job_information').item.json.body.data.jobDescription }}

My child workflow is set up to accept all data from the parent.
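
For reference, an n8n {{ }} expression must evaluate to a single value rather than contain a JSON object literal with a key, so one way to pass that field (assuming the child trigger defines an input named test) is to map it in the Execute Sub-Workflow node’s input panel:

```
test: {{ $('job_information').item.json.body.data.jobDescription }}
```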

@Unfounded8673 Publish your sub-workflow first, and then in the Execute Sub-Workflow node select the sub-workflow from the dropdown. If you have already set up your sub-workflow’s inputs, they will display in your main workflow.

I can’t publish my child workflow because it says I have one error that I have to fix first. But how does publishing fix my JSON issues?

Please share the error with us. Your JSON issue exists because you are using the Define Below option in the Execute Sub-Workflow node and supplying an actual workflow JSON, which is hard to maintain and very error-prone. Instead, select the sub-workflow from the list (Database) option.

The most probable cause of your error is that you have not defined input fields in the sub-workflow’s trigger, the “When Executed by Another Workflow” node. That node’s settings should look similar to this:


Click “Add Field” to add an input that this workflow will receive from the main workflow, then follow this:

Hope this helps.