For every workflow, allow the user to define a setting that limits the number of concurrent runs that the workflow accepts.
For example, say there are three workflows, A, B and C, and they all call another workflow called X. X accesses a shared resource that, for whatever reason, can or should only handle a limited load, or can only be accessed by a limited number of runs at a time.
Say that for X we set this new configuration value to max_concurrent_jobs=2, and then A, B and C all call X at (more or less) the same time. This should mean that X's runs for A and B, call them Xa and Xb, fire concurrently, but X's run for C, call it Xc, is queued, and will only run after one of Xa or Xb is done. This also means that C's run that fired Xc just waits for Xc to finish. So:
```
A -> Xa
B -> Xb
C -> Xc
```
Then causes this:
```
(A)===>(X) (Xa)=============>
(B)===>(X) (Xb)=============>
(C)===>(X)               (Xc)=============>
```
Unless Xa (or Xb) finishes earlier, in which case Xc starts as soon as a slot frees up:
```
(A)===>(X) (Xa)======>
(B)===>(X) (Xb)=============>
(C)===>(X)        (Xc)=============>
```
It should also work for concurrent calls from the same workflow, so if A fires three times…
```
(A1)===>(X) (Xa1)======>
(A2)===>(X) (Xa2)=============>
(A3)===>(X)        (Xa3)=============>
```
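In effect, the behaviour described above is a counting semaphore guarding runs of X: up to max_concurrent_jobs callers proceed, the rest queue. A minimal Python sketch of those semantics (all names here — run_x, MAX_CONCURRENT_JOBS and so on — are hypothetical illustration, not any workflow engine's actual API):

```python
import threading
import time

# Hypothetical model: max_concurrent_jobs=2 acts as a counting
# semaphore around runs of workflow X.
MAX_CONCURRENT_JOBS = 2
slots = threading.Semaphore(MAX_CONCURRENT_JOBS)

running = 0   # runs of X currently executing
peak = 0      # highest concurrency observed
lock = threading.Lock()

def run_x(label: str) -> None:
    """One run of X; the caller blocks in the queue until a slot frees up."""
    global running, peak
    with slots:  # Xc waits here while Xa and Xb hold the two slots
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)  # stand-in for X's actual work
        with lock:
            running -= 1

# A, B and C all fire X at (more or less) the same time.
threads = [threading.Thread(target=run_x, args=(name,))
           for name in ("Xa", "Xb", "Xc")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("peak concurrency:", peak)  # bounded by MAX_CONCURRENT_JOBS
```

Note that the caller that fired Xc simply blocks at the semaphore, which matches the requirement that C's run just waits for Xc to finish.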
In my specific use case, I'm working with an API that enforces a rate limit. My workflow already uses wait nodes to make going over the API's rate limit within a single run of X less likely, but I cannot account for X being fired multiple times at once.
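To make that gap concrete: pacing calls inside one run bounds that run's rate, but the aggregate rate still scales with the number of concurrent runs. A back-of-the-envelope sketch (all numbers hypothetical):

```python
# Hypothetical numbers for illustration only.
PER_RUN_DELAY_S = 1.0   # wait-node-style pause between calls inside one run of X
CONCURRENT_RUNS = 3     # X fired three times at (more or less) the same time

per_run_rate = 1 / PER_RUN_DELAY_S               # one run stays at 1 call/s
aggregate_rate = CONCURRENT_RUNS * per_run_rate  # but together: 3 calls/s

# The API enforces its limit globally, so three individually well-behaved
# runs can still breach, say, a 2 calls/s limit between them.
print(per_run_rate, aggregate_rate)  # → 1.0 3.0
```

A per-workflow concurrency cap closes this gap without every caller having to know about every other caller.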
This could apply to several use cases, e.g.:
- An API with a rate limit.
- A database with a limited thread pool.
- Writing to a shared file.
This would be akin to how the Throttle Concurrent Builds Plugin for Jenkins works.