Hey everyone, I’m looking for someone who can set up a Mac Mini M4, on-site or remotely, for 24/7 local AI operations. This is a paid one-time project, $500 fixed.
What needs to be set up:
macOS hardened (FileVault, firewall, dedicated user account)
Ollama running and tested with at least one model
n8n self-hosted, running as a persistent service that survives reboots
AnythingLLM connected to local Ollama
ChromaDB for vector memory
Telegram bot integration via n8n
One end-to-end workflow test confirming everything talks to each other
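That final check can start with something as simple as a port probe. A minimal sketch in Python, assuming each service is on its stock default port (Ollama 11434, n8n 5678, ChromaDB 8000, AnythingLLM 3001) — adjust if you bind them elsewhere:

```python
import socket

# Stock default local ports for each service in the stack (assumed, not
# verified against this particular machine).
SERVICES = {
    "Ollama": ("127.0.0.1", 11434),
    "n8n": ("127.0.0.1", 5678),
    "ChromaDB": ("127.0.0.1", 8000),
    "AnythingLLM": ("127.0.0.1", 3001),
}

def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener accepts a connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "UP" if is_listening(host, port) else "DOWN"
        print(f"{name:12s} {host}:{port}  {status}")
```

A port being open doesn’t prove the services talk to each other, but it’s a fast first gate before the real workflow test.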
Deliverable is a working machine + written setup summary.
Location: Denver, CO. On-site preferred, but remote screen share works if you’re confident in Mac-based local AI setups.
DM me or reply here if you’ve done something similar. GitHub or homelab examples welcome.
I haven’t worked with macOS specifically for this kind of setup, but the stack itself (Ollama, ChromaDB, n8n as a service, AnythingLLM) is all stuff I’ve configured and debugged before on Linux. Happy to do it via remote screen share.
DM me if you want to chat - I’m in CET timezone, flexible on hours.
Email: [email protected]
Anton, saw the portfolio. Impressive n8n work. Since you mentioned your experience is primarily Linux/VPS, I have two specific ‘Mac-side’ questions (plus one on scheduling) to ensure this stays 24/7 stable:
Persistence: macOS doesn’t use systemd. How do you plan to ensure n8n and ChromaDB survive a reboot and run as background services without an active user session logged in?
M4 Performance: Since this isn’t a VPS with shared vCPUs, how will you ensure Ollama leverages the M4’s Unified Memory/GPU efficiently without interfering with the macOS window server?
Scheduling: I’m in MST (7 hours behind you). Are you comfortable with a handoff/sync window between 1 PM and 5 PM MST?
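For reference, the usual answer to the persistence question is a launchd LaunchDaemon, the system-level equivalent of a systemd unit, which runs without any logged-in session. A minimal sketch for n8n, assuming a Homebrew install at /opt/homebrew/bin/n8n and a dedicated user named "aiops" (label, paths, and username are all illustrative), saved as /Library/LaunchDaemons/com.local.n8n.plist:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.local.n8n</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/n8n</string>
        <string>start</string>
    </array>
    <!-- Start at boot and restart automatically if the process dies -->
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <!-- Run as the dedicated (non-admin) service account -->
    <key>UserName</key>
    <string>aiops</string>
    <key>StandardOutPath</key>
    <string>/var/log/n8n.log</string>
    <key>StandardErrorPath</key>
    <string>/var/log/n8n.log</string>
</dict>
</plist>
```

It is loaded with `sudo launchctl bootstrap system /Library/LaunchDaemons/com.local.n8n.plist`; a parallel plist covers ChromaDB.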
If you’re confident you can translate your Linux stack to a hardened macOS environment, let’s talk.
Docker networking between Ollama and n8n is where most of these setups silently break — models load fine, but the agent never actually talks to the local inference layer.
Shipped a private document-search system for a law firm last month: zero cloud calls, fully hands-free, live in 36 hours.
First move today: I’d validate your Mac Mini’s virtualization config and wire the container network correctly before touching any model.
Want a working docker-compose scaffold sent to you tonight — free, no strings?
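The failure mode described above is worth spelling out: from inside a container, `localhost` is the container itself, not the Mac, so an n8n node pointed at `localhost:11434` never reaches Ollama. A minimal compose sketch of the usual fix, assuming Ollama runs natively on the host (Docker containers on macOS get no Metal/GPU access, so running Ollama in Docker costs you the M4’s GPU) and only n8n is containerized; the `OLLAMA_BASE_URL` variable name is just a convention for the workflow to read:

```yaml
# Sketch only. Assumes: Ollama native on the host, n8n in Docker Desktop,
# where the host is reachable from containers as host.docker.internal.
services:
  n8n:
    image: n8nio/n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      # From inside the container, the Mac host answers at this hostname.
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```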
I can remotely set up your Mac Mini M4 for a robust, 24/7 local AI operation. I have extensive experience self-hosting n8n and integrating it with local LLM stacks, ensuring everything is persistent and survives reboots.
How I will handle your setup:
Hardening & Persistence: I’ll configure macOS security (FileVault/Firewall) and set up n8n and Ollama as background services (using launchd or Docker) so they auto-start on boot.
The AI Stack: I’ll deploy Ollama, connect it to AnythingLLM, and spin up ChromaDB as your vector store. I’ll ensure the M4’s Unified Memory is properly utilized for optimal inference speeds.
Telegram Integration: I’ll build a “Heartbeat” workflow in n8n that connects your Telegram bot to the local Ollama instance, confirming the end-to-end data flow.
Documentation: You’ll receive a Written Setup Summary with all local endpoints, service commands, and a “Quick Restart” guide.
Why me: I’m a developer focused on production-grade automations. I don’t just “install” apps; I build systems that stay online. I’m comfortable working via remote screen share and can adjust to your Denver timezone for the session.
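The “Heartbeat” workflow mentioned above reduces to one HTTP call against Ollama’s local API. A minimal Python sketch, assuming the default port 11434 and `/api/generate` endpoint; the model name is whatever was pulled during setup:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumed; adjust if OLLAMA_HOST is changed).
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_heartbeat_request(model: str, prompt: str = "ping") -> urllib.request.Request:
    """Build the POST request the heartbeat workflow would fire at Ollama."""
    payload = json.dumps({
        "model": model,      # whichever model was pulled, e.g. a small llama
        "prompt": prompt,
        "stream": False,     # one JSON response instead of a token stream
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def heartbeat(model: str) -> bool:
    """Return True if the local Ollama instance answers a tiny prompt."""
    try:
        with urllib.request.urlopen(build_heartbeat_request(model), timeout=30) as resp:
            return "response" in json.load(resp)
    except OSError:
        return False
```

In n8n this is just an HTTP Request node with the same payload; the Telegram trigger feeds in the prompt and the node’s JSON output goes back to the chat.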
Anton, saw the portfolio. Impressive n8n work. Since your experience is primarily Linux/VPS, I have two specific ‘Mac-side’ questions (plus one on scheduling) to ensure this stays 24/7 stable:
Persistence: macOS doesn’t use systemd. For a headless Mac Mini, how do you plan to ensure n8n and ChromaDB survive a reboot and run as background services without an active user session logged in?
Hardware Optimization: How will you ensure Ollama is correctly leveraging the M4’s Unified Memory/GPU (Metal) rather than just hitting the CPU?
Scheduling: I’m in MST (7 hours behind you). Note that I’m unavailable Mondays, Wednesdays, and Fridays between 5 AM and 1 PM my time. Does a sync window outside those hours work for you?
If you’re confident you can translate your Linux stack to a hardened macOS environment, let’s talk.