Workaround for Dynamic Credentials in n8n (Using the Built-in API)

In the current n8n app, credentials are resolved at design time and are not truly dynamic at runtime. This creates challenges for use cases where credentials must be created, updated, or switched dynamically during execution (for example, multi-tenant workflows, per-request authentication, or rotating secrets). Since nodes rely on a static credential reference, n8n does not currently support fully dynamic credentials out of the box.

To work around this limitation, I’m using the built-in n8n API to create and manage credentials programmatically at runtime.

Approach:

  1. Use the built-in n8n API to create a base_credential that is referenced by the workflow nodes

  2. During runtime, use the n8n API to create a tmp_credential containing the dynamic credential values

  3. Copy the required fields from tmp_credential into the base_credential via the API

  4. Validate that the data in base_credential matches the tmp_credential

  5. Continue processing in subsequent nodes using the base_credential

This approach keeps node configurations unchanged while allowing credentials to be created and updated dynamically at runtime, working within n8n’s current credential resolution model.
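Steps 3 and 4 above (copying the required fields and validating the copy) can be sketched as pure helpers. This is a minimal sketch; the field names (`apiKey`, `baseUrl`) and the helper names are illustrative assumptions, since the actual payload shape depends on the credential type your nodes use, and it assumes your n8n version exposes an API route for updating an existing credential.

```typescript
// Sketch of steps 3-4: merge tmp_credential data into base_credential
// and verify the copy. Field names are illustrative assumptions.

type CredentialData = Record<string, string>;

// Step 3: copy only the required fields from tmp into base,
// leaving any other base fields untouched.
function copyFields(
  base: CredentialData,
  tmp: CredentialData,
  required: string[],
): CredentialData {
  const merged: CredentialData = { ...base };
  for (const key of required) {
    if (!(key in tmp)) {
      throw new Error(`tmp_credential is missing required field: ${key}`);
    }
    merged[key] = tmp[key];
  }
  return merged;
}

// Step 4: validate that base now matches tmp on every required field.
function fieldsMatch(
  base: CredentialData,
  tmp: CredentialData,
  required: string[],
): boolean {
  return required.every((key) => base[key] === tmp[key]);
}

// Example: tenant-specific values as they might arrive at runtime (step 2).
const tmpCredential: CredentialData = {
  apiKey: "tenant-a-key",
  baseUrl: "https://a.example.com",
};
const baseCredential: CredentialData = { apiKey: "stale", baseUrl: "stale" };

const updated = copyFields(baseCredential, tmpCredential, ["apiKey", "baseUrl"]);
console.log(fieldsMatch(updated, tmpCredential, ["apiKey", "baseUrl"])); // true
```

The merged object is what you would send back to the n8n API as the new `base_credential` data before the downstream nodes run.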

Posting this in case it helps others dealing with similar dynamic or multi-tenant credential scenarios.

I would do this by creating a tmp_credential object on demand. That way I can avoid hardcoding credentials everywhere in my app. What’s the difference between base_credential and tmp_credential that would make this approach feasible?

1 Like

Hi @achamm

Yes, this is fully on-demand. My use case is a multi-tenant application where credentials are passed in via a webhook at runtime.

The key difference between base_credential and tmp_credential is how they are used, not their structure. The base_credential acts as a stable, pre-configured credential reference that all nodes are already wired to at design time. This is necessary because n8n nodes cannot switch credential references dynamically during execution.

The tmp_credential, on the other hand, is created on demand via the built-in n8n API using the credential data received from the webhook. It is never referenced directly by workflow nodes. Instead, it serves as a temporary container for tenant-specific connection data.

At runtime:

  • The webhook provides tenant-specific credentials

  • A tmp_credential is created via the n8n credentials API (/api/v1/credentials)

  • The validated data from tmp_credential is copied into the existing base_credential

  • Nodes continue execution using the same base_credential reference, but with updated connection data
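For the second bullet, the body of the POST to /api/v1/credentials might be built like this. The credential type ("httpHeaderAuth"), the field names, and the webhook payload shape are assumptions for illustration; use whatever credential type your nodes are actually wired to.

```typescript
// Builds the request body for creating a tmp_credential from webhook input.
// Credential type and field names are illustrative assumptions.

interface WebhookPayload {
  tenantId: string;
  apiKey: string;
}

function buildTmpCredentialBody(p: WebhookPayload) {
  return {
    name: `tmp_credential_${p.tenantId}`, // temporary, per-tenant name
    type: "httpHeaderAuth",               // assumed credential type
    data: { name: "Authorization", value: `Bearer ${p.apiKey}` },
  };
}

const body = buildTmpCredentialBody({ tenantId: "tenant-a", apiKey: "s3cret" });
console.log(body.name); // tmp_credential_tenant-a
```

This object would then be POSTed to the credentials endpoint with your n8n API key; the response contains the credential ID used in the copy step.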

This approach avoids hardcoding credentials across the app, supports multi-tenant isolation, and works within n8n’s current limitation that credentials are resolved at design time. The API is essential here because it allows credentials to be created, updated, and validated programmatically during workflow execution.

So the feasibility of this approach comes from separating credential identity (base_credential) from credential data (tmp_credential), while keeping node configurations unchanged.

This sounds pretty cool! I think it’s a good system with sound security! There are also n8n credential nodes, so let me know how your project goes!

1 Like

This is a clever and well-reasoned workaround. You’re correctly exploiting the fact that in n8n credential identity is static but credential data is mutable, which is really the only lever available today for dynamic / multi-tenant scenarios.

One thing I’d call out for anyone adopting this pattern is isolation under concurrency. Mutating a single base_credential at runtime is safe only if executions are strictly serialized. If two executions overlap, there’s a real risk of credential bleed (tenant A’s run briefly using tenant B’s secrets).

A safer variant that keeps the same idea but adds isolation is:

Per-tenant base credentials + per-tenant locking

  • Create one stable base_credential per tenant (wired at design time).

  • Store a mapping tenant_id → credential_id.

  • At runtime:

    • Acquire a lock for that tenant (Postgres advisory lock or Redis).

    • Update only that tenant’s base credential via the n8n API.

    • Run the rest of the workflow.

    • Release the lock.

This way:

  • Nodes still reference a stable credential ID (n8n-friendly).

  • Different tenants can run concurrently.

  • Same-tenant executions are serialized, preventing overwrites.
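The per-tenant lock above needs a stable numeric key, since Postgres advisory locks take a 64-bit integer rather than a string. One way to derive it from tenant_id is a deterministic hash; FNV-1a here is an illustrative choice, and the function name is an assumption (any stable string-to-bigint hash works, including Postgres's own hashtext()).

```typescript
// Derive a stable signed 64-bit key for pg_advisory_lock() from a tenant_id.
// FNV-1a is an illustrative choice of hash.

function tenantLockKey(tenantId: string): bigint {
  let hash = 0xcbf29ce484222325n; // FNV-1a 64-bit offset basis
  const prime = 0x100000001b3n;
  for (const ch of tenantId) {
    hash ^= BigInt(ch.codePointAt(0)!);
    hash = (hash * prime) & 0xffffffffffffffffn; // keep it in 64 bits
  }
  // pg_advisory_lock takes a signed bigint, so wrap into the signed range.
  return BigInt.asIntN(64, hash);
}

// The statements the workflow would run around the credential mutation:
const key = tenantLockKey("tenant-a");
console.log(`SELECT pg_advisory_lock(${key});`);
console.log(`SELECT pg_advisory_unlock(${key});`);

// Same tenant always maps to the same lock; different tenants to different locks.
console.log(tenantLockKey("tenant-a") === tenantLockKey("tenant-a"));
console.log(tenantLockKey("tenant-a") === tenantLockKey("tenant-b"));
```

Using pg_advisory_xact_lock instead would release the lock automatically at transaction end, which removes the need to release it explicitly if the update runs in its own transaction.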

If higher security is required, the next step up is avoiding credential mutation entirely and using a small “credential broker” service (nodes authenticate to the broker, broker holds tenant secrets), but your approach is a very solid middle ground for self-hosted, controlled environments.

Would be interested to hear:

  • whether you expect concurrent executions per tenant, and

  • whether you’re using Postgres already (advisory locks make this very clean).

Thanks for sharing this — it’s one of the more practical dynamic-credential patterns I’ve seen discussed here.

1 Like

Thanks @Michael_Long I really appreciate the detailed feedback!

You’re spot on about the concurrency risk. In my setup, the only part that really needs isolation is the credential mutation itself; the rest of the workflow can safely run in parallel.

I’m leaning toward a table-based lock approach rather than serializing the entire workflow through a queue:

  • Acquire a lock scoped to base_credential_id or tenant_id just before updating the credential

  • Update the base_credential using the tmp_credential data

  • Release the lock immediately

  • Downstream nodes continue execution without waiting

This keeps the critical section very short, preserves throughput, and eliminates the risk of credential bleed for overlapping executions. TTLs or timeouts on the lock ensure we don’t run into deadlocks.
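A minimal sketch of that TTL-based lock table, assuming a table shape like locks(lock_key text primary key, acquired_at timestamptz) (the table and column names are assumptions). An expired lock can be stolen, so acquisition succeeds when the row is absent or its TTL has elapsed, which a single upsert can express:

```typescript
// TTL-based lock-table sketch. Table/column names are assumptions.

const LOCK_TTL_MS = 30_000; // short TTL so a crashed run can't deadlock others

// Acquisition as one upsert: insert wins if no row exists; the DO UPDATE
// only fires when the existing lock's TTL has elapsed. A returned row
// means the lock was acquired; zero rows means someone else holds it.
const ACQUIRE_SQL = `
  INSERT INTO locks (lock_key, acquired_at)
  VALUES ($1, now())
  ON CONFLICT (lock_key) DO UPDATE
    SET acquired_at = now()
    WHERE locks.acquired_at < now() - interval '30 seconds'
  RETURNING lock_key;
`;

// Pure helper mirroring the WHERE clause above:
function isExpired(acquiredAtMs: number, nowMs: number): boolean {
  return nowMs - acquiredAtMs >= LOCK_TTL_MS;
}

console.log(isExpired(0, 30_000)); // true
console.log(isExpired(0, 29_999)); // false
```

Releasing is then just a DELETE on the lock_key, done immediately after the credential update so downstream nodes never wait on it.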

I’m still considering the per-tenant base credential variant you mentioned, which would allow true concurrency across tenants while serializing only same-tenant requests. But from the looks of it, this can also be made dynamic by specifying the target credential name or ID.

That, combined with the lock just around the mutation, seems like the ideal balance between safety and performance.

To answer your questions:

  • Yes, I do expect concurrent executions per tenant, so the lock is essential for the mutation step.

  • We are using Postgres, so advisory locks are a perfect fit for managing that critical section cleanly.

Thanks again for validating the approach; it’s really helped clarify the safest way to handle dynamic credentials in multi-tenant workflows.

1 Like

This makes a lot of sense, and your approach is well thought out.

Limiting isolation to just the credential mutation step is exactly the right call. Using a short-lived, tenant- or credential-scoped lock (for example with Postgres advisory locks) to guard that update gives you the safety you need without sacrificing throughput:

  • Acquire the lock for the tenant or base credential

  • Update the credential via the n8n API

  • Release the lock immediately

  • Let the rest of the workflow continue in parallel

That keeps the critical section small, avoids credential bleed under concurrency, and fits cleanly with how n8n resolves credentials at node execution time.

The per-tenant base credential variant you mentioned is also a strong refinement. It preserves true concurrency across tenants while ensuring same-tenant requests are serialized only where necessary. Combined with a mutation-scoped lock, it’s a very practical balance between correctness and performance within n8n’s current model.

Your approach will be genuinely useful for others facing multi-tenant or dynamic credential challenges in n8n. It provides a pattern that is safe and easy to apply confidently in real production workflows.

1 Like

Just wanted to share an update and close the loop on this thread. The implementation is now live and working reliably in a multi-tenant setup :tada:

What’s working well:

  • Dynamic credentials are created on-demand via the n8n Credentials API

  • A stable base_credential remains wired to nodes at design time

  • Tenant-specific credential data is injected at runtime

  • Concurrency is safely handled using a lightweight Postgres lock table

  • The lock only governs the credential mutation section, not the entire workflow

  • Downstream execution runs in parallel without credential bleed

Final architecture highlights:

  • One base_credential per tenant

  • Lock scope = tenant_id (or base_credential_id)

  • Short-lived locks with a 30-second TTL to prevent deadlocks

  • No full workflow serialization, no queues blocking long executions

  • Works cleanly with parallel executions across tenants

This approach turned out to be a good balance between:

  • n8n’s current constraint (static credential identity)

  • Real-world multi-tenant requirements

  • Performance and safety under concurrency

Thanks to @achamm and @Michael_Long, who contributed ideas and feedback, especially around isolation and locking. Hopefully this helps others who are trying to solve dynamic credential use cases without over-serializing their workflows.

Happy to share more details if anyone’s interested in the locking or workflow structure.

3 Likes

You’re welcome! Happy to help!

2 Likes