Our system was developed to automate the evaluation of handwritten exams submitted by users through a custom-built dashboard.
- Upload and Text Extraction Stage
The user accesses the dashboard and uploads 9 handwritten PDF pages.
This submission triggers a webhook in n8n, which starts the main workflow.
The main workflow then launches 9 parallel subworkflows, each responsible for processing one PDF page.
Each subworkflow performs handwritten text extraction using AI models (Anthropic and Gemini).
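To make the fan-out pattern concrete, here is an illustrative sketch only: the real parallelism happens inside n8n's subworkflows, but the dispatch is equivalent to running one task per page concurrently. `extract_page` is a hypothetical stand-in for the Anthropic/Gemini transcription call, not real code from our system.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_page(page_number: int) -> str:
    # Placeholder for the AI handwriting-extraction call (Anthropic/Gemini).
    return f"transcription of page {page_number}"

def process_submission(num_pages: int = 9) -> list[str]:
    # One subworkflow per page, all running concurrently,
    # mirroring the 9 parallel subworkflows triggered by the webhook.
    with ThreadPoolExecutor(max_workers=num_pages) as pool:
        return list(pool.map(extract_page, range(1, num_pages + 1)))

results = process_submission()
print(len(results))  # 9 transcriptions, one per page
```

The point of the sketch is that every submission immediately occupies 9 execution slots, which matters when estimating load.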
- User Validation Stage
After extraction, the transcribed text is displayed on the dashboard for user review.
Once the user approves the transcriptions, a new webhook is triggered in n8n.
- Automated Analysis Stage
This second webhook activates 5 analysis subworkflows, in which 6 AI models process the extracted data stored in a database.
One of these subworkflows performs a more complex analysis, with a typical runtime of 8–20 minutes (20 minutes being rare).
The other four subworkflows typically run for 3 to 7 minutes each.
- Current Performance
The system operates normally with up to 5 simultaneous users.
However, during stress testing with 500 simultaneous executions, the environment freezes and stops responding.
- Usage Projection
We estimate 3,000 to 4,000 total users per event (48-hour window).
The full automation is only active for 48 hours every 4 months.
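A back-of-envelope estimate of the load these numbers imply, using Little's law (L = λ × W) and assuming arrivals are spread uniformly over the 48-hour window (real events will have peaks, so actual concurrency will be higher). All figures come from the description above; the range midpoints are my own assumption.

```python
users = 4000                       # upper bound of users per event
window_min = 48 * 60               # event window in minutes
arrival_rate = users / window_min  # users per minute

# Analysis stage: 5 subworkflows per user; one runs 8-20 min, four run 3-7 min.
# Using the midpoints of the stated ranges as rough averages:
avg_runtimes_min = [14, 5, 5, 5, 5]

# Little's law: average number of analysis subworkflows in flight at once.
concurrent = arrival_rate * sum(avg_runtimes_min)
print(f"arrival rate: {arrival_rate:.2f} users/min")       # ~1.39 users/min
print(f"avg concurrent analysis subworkflows: {concurrent:.0f}")  # ~47
```

Under these assumptions the steady-state load is only a few dozen concurrent executions, far below the 500-simultaneous-execution stress test, but bursty arrivals could push it much higher.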
We are currently on the Pro plan. I think we may need to move to the Enterprise plan, but I'm not sure whether that will actually solve the problem. Can someone help me?