Self-Hosted AI Models (Quickstart Tutorial)
Learn how to quickly connect self-hosted models to MindStudio for testing and hobbyist use.
Before getting started, ensure you have the following installed on your machine:
Ollama
Ngrok
Open a terminal window and run the following command to expose port 11434 to the internet:
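A minimal sketch of that command, assuming Ngrok is installed and authenticated on your machine:

```shell
# Open an HTTPS tunnel to the local Ollama API (default port 11434)
ngrok http 11434
```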
Copy the forwarding HTTPS URL Ngrok provides (e.g. https://xxxx.ngrok.io). You'll need this later.
Open a second terminal window:
a. Pull the llama3.2 model:
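Assuming the `ollama` CLI is on your PATH, the pull command looks like:

```shell
# Download the llama3.2 model weights to the local Ollama library
ollama pull llama3.2
```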
b. Serve the model with the proper environment variables:
Replace the Ngrok URL below with your actual forwarding address from Step 1:
On Mac / Linux:
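A sketch of the serve command, using Ollama's `OLLAMA_HOST` and `OLLAMA_ORIGINS` environment variables (the Ngrok URL here is a placeholder for your own forwarding address):

```shell
# Bind Ollama to all interfaces and allow cross-origin requests from the Ngrok URL
OLLAMA_HOST=0.0.0.0 OLLAMA_ORIGINS=https://xxxx.ngrok.io ollama serve
```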
On Windows: first set the environment variables, then start the server.
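In a Windows Command Prompt, that two-step sequence might look like (placeholder Ngrok URL):

```shell
:: Set the environment variables first...
set OLLAMA_HOST=0.0.0.0
set OLLAMA_ORIGINS=https://xxxx.ngrok.io
:: ...then start the server
ollama serve
```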
This command makes your local Ollama instance accessible through the Ngrok tunnel.
Click "Add Model"
Use the following fields:
API Name: (Ex: llama3.2)
Endpoint: https://xxxx.ngrok.io/v1 (replace xxxx with your actual Ngrok subdomain)
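To sanity-check the endpoint first: Ollama exposes an OpenAI-compatible API under /v1, so a request like this (placeholder subdomain) should return a JSON list that includes your pulled model:

```shell
# List available models through the Ngrok tunnel
curl https://xxxx.ngrok.io/v1/models
```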
Your self-hosted AI model is now ready to be used in any MindStudio AI Agent.
Once added, you can:
Select your AI Model in the Model Settings tab when editing an AI Agent.
Use it across workflows, prompts, and block-level overrides in your automation.