Learn about an AI Agent's core capabilities and functions.
An AI Agent is a customizable, automated system created in MindStudio that performs specific tasks using artificial intelligence. Think of it as your digital assistant, capable of handling repetitive, data-driven, or creative tasks to save you time and effort.
Task Automation
AI Agents execute tasks automatically based on your configurations. Most of these tasks leverage AI models for completion.
AI-Powered Actions
AI Agents utilize AI models to perform actions such as text generation, summarization, image generation, data extraction, analysis, and more.
Customizable Workflows
Each AI Agent is built using workflows that define its behavior. Workflows are a sequence of tasks that the Agent executes each time it runs.
Dynamic and Flexible
AI Agents leverage variables, triggers, and conditions to customize the Agent's output based on specific inputs or contexts.
Integration-Ready
AI Agents can be integrated with other platforms and tools via APIs or third-party services like Zapier.
Inputs: AI Agents receive inputs like user-provided data, uploaded files, or launch variables.
Workflow Execution: The Agent follows a defined sequence of actions to process the inputs.
Example: Search for the latest tech news → Summarize the results → Send an email.
Outputs: The AI Agent ends its workflow with a resulting response based on the workflow execution.
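The inputs → workflow execution → outputs flow above can be sketched as a simple pipeline. This is an illustrative model only, not MindStudio's actual runtime; the step functions are hypothetical placeholders for the "search → summarize → email" example.

```python
# Illustrative sketch of an AI Agent run: inputs seed a state dict, an
# ordered list of workflow steps transforms it, and the final state is
# the agent's output. Hypothetical, not MindStudio's internal API.

def search_news(state):
    # Placeholder for "Search for the latest tech news"
    state["results"] = ["Headline A", "Headline B"]
    return state

def summarize(state):
    # Placeholder for "Summarize the results"
    state["summary"] = "; ".join(state["results"])
    return state

def send_email(state):
    # Placeholder for "Send an email"
    state["email_sent"] = True
    return state

def run_agent(inputs, workflow):
    state = dict(inputs)      # launch variables seed the state
    for step in workflow:     # execute each action in sequence
        state = step(state)
    return state              # resulting response of the workflow

result = run_agent({"topic": "tech news"}, [search_news, summarize, send_email])
```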
Efficiency: Automate time-consuming tasks and reduce manual effort.
Scalability: Handle large volumes of data or repetitive tasks seamlessly.
Customization: Tailor workflows to your specific needs with ease.
Integration: Connect AI Agents to other systems for end-to-end automation.
An AI Agent is your go-to solution for leveraging AI to simplify complex tasks, boost productivity, and drive innovation in your workflows.
Content Summarization
AI Agent: Summarizes long articles into concise bullet points.
Workflow: Fetch content → Analyze content → Summarize → Deliver via email.
Data Processing
AI Agent: Processes survey data to highlight key insights.
Workflow: Import data → Analyze trends → Generate report.
Customer Support
AI Agent: Automates FAQ responses for a website.
Workflow: Receive question → Match with answers → Respond.
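The "Receive question → Match with answers → Respond" workflow can be sketched in a few lines. A real agent would use an AI model for matching; this uses simple keyword overlap purely for illustration, and the FAQ entries are made up.

```python
# Minimal sketch of an FAQ-matching step: pick the stored answer whose
# question shares the most words with the incoming question.
# Keyword overlap stands in for AI-based matching here.

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your support hours": "Support is available 9am-5pm, Monday to Friday.",
}

def match_faq(question):
    words = set(question.lower().split())
    best, best_overlap = None, 0
    for known_q, answer in FAQ.items():
        overlap = len(words & set(known_q.split()))
        if overlap > best_overlap:
            best, best_overlap = answer, overlap
    return best or "Sorry, I don't have an answer for that."

reply = match_faq("How do I reset my password?")
```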
AI Agents have countless use cases. In the next article, we’ll showcase an extensive list of use cases spanning several job functions and tasks.
Welcome to MindStudio
MindStudio's intelligence infrastructure gives everyone—from developers to non-technical teams—the power to build and deploy specialized AI Agents.
If you are new to MindStudio, start here to learn the essentials and make your first AI Agent.
MindStudio has best-in-class developer tools to build scalable AI Agents for a variety of fuzzy computing tasks.
MindStudio offers powerful capabilities that enable you to create, test, and deploy sophisticated AI Agents with ease.
Find helpful resources to enhance your experience with MindStudio.
Quickstart Guide
Everything you need to know to get started creating AI Agents.
What is an AI Agent?
Learn about AI Agent core capabilities and how they work.
AI Agents Use Cases
Explore use cases across multiple job functions and departments.
API Reference
Integrate MindStudio's AI Agents into your applications. Leverage our API for serverless AI function calling.
NPM Package
Install the MindStudio NPM package for easy integration of MindStudio's AI Agent into your Node.js projects.
Workflow Functions
Extend the capabilities of your workflows by running JavaScript or Python code directly within your AI Agents.
Build AI Agents
Explore editor features involved in building AI Agents like prompts, blocks, functions, workflows and more. Learn how to combine these elements to create powerful AI Agents.
Test & Evaluate
Learn about the tools and techniques for testing AI Agents. Understand how to evaluate performance, debug issues, and ensure your AI Agents meet quality standards before deployment.
Publish & Deploy
Learn how to prepare an AI Agent for publishing. Explore various deployment options including integration with no-code automation platforms and deployment to custom applications.
Workspace Management
Discover how to effectively organize and manage your AI Agents, usage controls, billing settings and team members within the MindStudio workspace.
Contact Support
Need help or have questions? Reach out to us via email with any questions you may have about MindStudio.
Glossary
A comprehensive list of key terms and concepts used in MindStudio.
Automatically generate your workflows with AI
The Workflow Generator feature in MindStudio is a powerful tool that simplifies the creation of AI Agents. By describing your desired outcome, the feature automatically generates a detailed workflow plan, complete with prompts, variables, automation steps, and custom functions.
Open a new or existing project in MindStudio.
Navigate to the bottom bar and select the Generate Workflow button.
A modal will appear asking you to describe what you want your AI Agent to do. Click Generate to let MindStudio process your request.
Example:
Ask the user for their name and birthday, then write them a horoscope.
Left Panel: Displays a detailed plan of the workflow, including the system prompt, steps, and variable definitions.
Right Panel: Translates the plan into executable automation steps, complete with code and configurations.
System Prompt: Pre-configured text to guide the workflow.
Variables: Input fields like name and birthdate.
Custom Functions: Auto-generated JavaScript or Python functions, such as one to compute a zodiac sign from a birthdate.
Automation Steps: A preview of the blocks added to the workflow.
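The zodiac-sign function mentioned above is a good example of what the generator might produce. Here is a sketch of such a function; the cutoff dates follow common Western zodiac boundaries, which vary slightly between sources.

```python
# Example of the kind of custom function the Workflow Generator might
# create: compute a zodiac sign from a birthdate string (YYYY-MM-DD).
# Boundary dates are approximate and vary slightly between sources.

from datetime import datetime

# (month, last_day_of_sign) pairs in calendar order.
ZODIAC = [
    ((1, 19), "Capricorn"), ((2, 18), "Aquarius"), ((3, 20), "Pisces"),
    ((4, 19), "Aries"), ((5, 20), "Taurus"), ((6, 20), "Gemini"),
    ((7, 22), "Cancer"), ((8, 22), "Leo"), ((9, 22), "Virgo"),
    ((10, 22), "Libra"), ((11, 21), "Scorpio"), ((12, 21), "Sagittarius"),
    ((12, 31), "Capricorn"),  # late December wraps back to Capricorn
]

def zodiac_sign(birthdate):
    d = datetime.strptime(birthdate, "%Y-%m-%d")
    for (month, day), sign in ZODIAC:
        if (d.month, d.day) <= (month, day):
            return sign
    return "Capricorn"
```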
If the generated workflow looks good, click Accept and Build.
MindStudio will automatically construct the workflow on the canvas.
Use the Debugger to analyze the execution of the workflow:
Review logs for actions, variable updates, and custom function outputs.
Identify and fix any issues or edge cases.
Refine the workflow by adjusting blocks, variables, or custom function code.
Use the preview mode to test the draft:
Input test data.
Observe the execution in real-time, including the generated outputs.
While the Generate Workflow feature in MindStudio is a powerful tool for automating workflow creation, it’s essential to review all generated outputs when accepting an auto-generated build. The feature uses AI to interpret your descriptions and generate workflows, but it is still prone to occasional errors or misconfigurations.
Variable Wiring Issues
Variables used in prompts, automation blocks, or custom functions may not always be wired correctly.
Example: A variable like birthdate might be declared but not correctly referenced in a custom function or automation step.
Custom Function Errors
Auto-generated functions may contain logical gaps or mismatched input expectations.
Example: A function might expect a variable in one format (e.g., a string) but receive it in another (e.g., an object).
Incomplete Block Configurations
Sometimes, blocks may not be fully configured, leaving placeholders or missing settings.
Example: An API call block might not have the correct headers or endpoint set up.
Missing Edge Case Handling
The workflow may not account for scenarios like invalid inputs, empty responses, or unexpected user behavior.
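One practical guard against the format-mismatch pitfall described above (a function expecting a string but receiving an object) is to normalize inputs at the top of a generated function. A sketch, assuming a hypothetical dict shape with a "value" key:

```python
# Defensive normalization for a generated function that expects a string
# birthdate but may receive a dict/object instead. The "value" key is an
# illustrative assumption, not a MindStudio convention.

def normalize_birthdate(raw):
    if isinstance(raw, dict):          # e.g. {"value": "1990-07-10"}
        raw = raw.get("value", "")
    if not isinstance(raw, str):
        raise TypeError(f"Expected a string birthdate, got {type(raw).__name__}")
    return raw.strip()
```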
Before clicking Accept and Build, carefully examine the generated plan, especially variable assignments, function configurations, and block connections.
Use the test interface in the function editor to validate inputs, outputs, and logic. Adjust the code as necessary.
Test the workflow with various inputs to identify gaps or inconsistencies in execution.
Analyze runtime logs to ensure all variables and blocks function as intended. Pay close attention to variable updates and outputs.
Treat the generated workflow as a scaffold. Refine and enhance it to meet your specific needs and address any issues.
The Workflow Generator is designed to save time and provide a strong starting point, but human oversight is crucial. Always validate the output, refine where necessary, and ensure your workflows are robust and error-free before deploying them in production.
Get a high-level overview of the MindStudio AI Editor
After creating a new AI Agent, you’ll land in the MindStudio Editor. The editor is made of two key areas: the Explorer and the Navigator (commonly referred to as the Workspace Area).
On the left you’ll find the Explorer Tab. This is where you'll find all of the resources used to build your AI Agents.
Upload and vectorize files to leverage Retrieval Augmented Generation (RAG) in your AI workflows.
Learn more about Data Sources →
Execute JavaScript code in your workflow.
Learn more about Functions →
Interfaces that humans interact with to provide context to the AI Agent.
Learn more about User Inputs →
Sequences of automated actions that your AI Agents follow when they are run.
Learn more about Workflows →
The large area covering the rest of the Editor is the Main Workspace, also referred to as the Navigator. This area changes depending on what you have selected. By default, the Editor opens on the Main.flow workflow with the Prompt Tab open.
The Top Bar in the Editor contains the following controls:
Back Button: Navigates to the Workspace Overview Screen
Title: Access general AI Agent Settings
Preview Button: Opens a draft preview of the AI Agent
Publish Button: Saves and deploys all changes to the AI Agent
The bottom bar of the Editor contains the following controls:
Workspace Name: Click to access workspace settings
Help & Support: Access video tutorials, documentation, and live support chat
Collapse Controls: Opens and closes the left and right columns of the Editor
A Comprehensive Guide to Understanding the Costs of AI Agents and How to Manage Your Balance
MindStudio makes it easy to access powerful AI capabilities—without hidden fees or upfront commitments. This guide explains which AI Agents are free to use, how paid agents work, and how to manage your usage balance.
MindStudio offers a range of powerful AI Agents that you can use entirely for free. These agents feature workflows designed to efficiently handle more basic tasks. Their simplified processes result in shorter run times, which helps keep the service free for users.
Here are a few examples of free AI agents:
Research Agents
Fact-check sources, find similar products, and uncover emails in one click.
Content Analysis Agents
Summarize, simplify, or de-bias long articles and videos instantly.
Content Creation Agents
Generate tweets, LinkedIn posts, YouTube ideas, and more—on demand.
Image Generation Agents
Create icons, illustrations, thumbnails, and other visuals in seconds.
MindStudio also offers AI Agents that leverage complex AI models, advanced workflows, and sometimes even integrate custom data. This additional complexity means these agents are more costly to run.
Here are some examples of tasks run by paid AI agents:
Running large-scale data analysis
Using advanced or premium AI models
Integrating external services and APIs
Performing long-running or high-volume tasks
To run these agents, you'll need to have a balance in your MindStudio account.
To tell free agents from paid ones, look for a small label beneath each Agent’s Run button. If you see a “Paid Agent” label, running it will deduct from your balance; if there's no label, the Agent is completely free to use.
This indicator appears on both the MindStudio Agents page and the Chrome Extension, so you can confidently browse and try out different Agents knowing exactly which ones are free and which ones require payment.
In the image below, Deep Research and Research People are marked as Paid Agents, while others like Find Product Alternatives and TL;DR are free to use.
MindStudio uses a prepaid balance system for running paid AI Agents. Your balance is deducted based on the actual costs of model inference and execution.
In your Workspace, navigate to Billing > Balance, then select Add Funds to choose an amount and payment method.
Cost of Running Paid AI Agents
The cost per run varies depending on the specific agent and the amount of data processed. Each paid agent displays a cost estimate on its details page, along with an average run time.
If you're building your own AI Agents in MindStudio, we’ve removed the 2.9% processing fee previously charged on model usage. That means:
You only pay for model inference costs.
No API key management required.
More freedom to experiment and iterate.
If you’re unsure whether an AI Agent is free or paid, look for the usage indicator next to the “Run” button or check the Agent's details in the Workspace. For more information, reach out via Support Chat or visit our Quickstart Guide to learn how to build your own AI Agents.
Feature
Free AI Agents
Paid AI Agents
Cost
$0
Uses balance
API Keys Needed
No
No
Model Access
Basic & Mid-Tier
All Models
Usage Limit
Unlimited
Based on balance
Examples
Content generation, summarization, research, image creation
Premium workflows, external API integrations, advanced automation
The first part of any workflow in MindStudio
The Start Block is the first part of any workflow in MindStudio. It initializes your AI Agent, defines how and when the workflow begins, and sets up key variables that are used throughout the workflow.
Define how the workflow is activated. Triggers can be configured to run:
On-Demand: The default trigger. Requires manual execution.
Scheduled: At specific times or intervals, such as daily at 8:00 AM.
Event-Driven: Based on external inputs like API requests.
Variables that are initialized at the start of the workflow and passed through the entire process. Launch variables can be referenced throughout the workflow using {{variable_name}}.
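The {{variable_name}} reference style works like simple template substitution. A sketch of how such references might resolve against launch variables; this is an illustrative model, not MindStudio's actual template engine:

```python
# Illustrative sketch of resolving {{variable_name}} references against
# launch variables. Unknown names are left untouched so missing wiring
# is visible in the output.

import re

def render(template, variables):
    def repl(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", repl, template)

text = render("Summarize news about {{topic}}.", {"topic": "AI"})
```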
The Scheduler is a feature within the Start Block that allows you to set up automated, time-based triggers for your workflow. To enable the Scheduler, change the Triggers > Run Mode configuration from “On-Demand” to “Scheduled”. You may create multiple schedules for a workflow.
Define the Schedule using natural language to describe how often and when the workflow should run.
Examples:
"Every day at 8:00 AM"
"Every Monday and Wednesday at 3:00 PM"
"First day of every month at 9:00 AM"
Set the Time Zone based on the hours in which you’d like the workflow to run.
(Optional) Add Launch Variable arguments using key:value pairs.
Generate the Schedule by clicking the Generate Schedule button.
Save the schedule.
Everything you need to know to get started creating AI Agents.
In this article, we’ll be guiding you through everything you need to know to get started on MindStudio and teaching you to create your first AI Agent.
Before starting, you will need to create a MindStudio Account.
After creating an account, you'll land on the Workspace Overview Screen, where you can view all published AI Agents. If there are no AI Agents published, then you will see the Getting Started Guide.
From the Workspace Overview Screen, you’ll find several controls on the left. From here you can:
Create new workspaces.
Access general Workspace Settings.
Explore Usage Explorer to view AI Agent activity.
Manage Billing Settings (e.g., budget limits, payment methods).
Invite team members to your workspace.
Access developer tools like API Keys, Request Logs, and Documentation.
In this guide, we’ll build an example AI Agent that finds daily tech news, summarizes it, and sends an email every morning at 8:00 AM.
To create an AI Agent, click on the Create New Agent button at the top-right. This will create a new Agent and open the MindStudio Editor.
The editor is made of two key areas: the Explorer Tab and the Navigator.
Explorer Tab
This is where you'll find all of the resources used to build your AI Agents. This includes resources like Data Sources, Functions, User Inputs, and Workflows
Navigator (Main Workspace)
The large area covering the rest of the Editor is the Main Workspace, also referred to as the Navigator. This area changes depending on what you have selected. By default, the Editor opens on the Main.flow workflow with the Prompt Tab open.
The System Prompt serves as the AI Agent's core instructions, defining its role, capabilities, and constraints.
You can write a system prompt manually by typing into the blank space below, or you can click the Generate Prompt button at the bottom left to have the Prompt Generator write the prompt for you.
Click on the Generate Prompt button at the bottom left of the prompt area. This will open the Prompt Generator Modal.
Using natural language, enter a brief description of what your AI Agent is supposed to do. For this build, we can type something like:
After entering your description, click the Generate button. The system will automatically create a structured prompt based on your input.
Carefully review the generated prompt to make sure it aligns with your requirements. You can edit the prompt directly in the editor if adjustments are needed.
Once your System Prompt is written, navigate to the Automations tab.
The Automations Tab is where you design the workflow for your AI Agent. This section allows you to define the sequence of actions your AI Agent will follow.
Start Block:
Schedule the workflow to run at 8:00 AM daily.
Define a launch variable called topic.
Google Search Block:
Use a function to perform a Google search.
Reference the topic variable as the search query.
Save the result as a new variable, google_result.
Text Generation Block:
Prompt AI to summarize the google_result.
Use the prompt:
Terminator End Block w/ Email Notification:
Enable Email notifications to send the summary to the specified email.
MindStudio provides access to over 50 AI models from leading providers, allowing you to tailor your workflows for a variety of tasks and use cases.
Access Model Settings
Navigate to the Model Settings Tab in your workflow editor.
Choose a Model
Select from over 50 AI models available.
NOTE: Some models may be locked and can be unlocked by adding a payment method in your Billing Settings.
There are two configuration options for AI models:
Temperature
Controls the randomness of responses:
Higher Temperature: More creative, but less predictable.
Lower Temperature: More consistent, but less diverse.
Max Response Size
Defines the maximum size of the model’s response in tokens.
Example: 400 tokens ≈ 300 words.
Every block that uses AI has the ability to override the underlying model's settings and use its own unique AI model and configuration. This allows you to use different models in the workflow based on the task required at each specific step.
Testing and evaluation are critical steps to ensure your AI Agent performs as expected. MindStudio provides tools to debug, validate, and optimize your AI Agents.
Navigate to the Errors Tab to view any issues, such as misspelled variables or misconfigured blocks.
Click on an error in the Errors Tab to highlight the problematic block in the workflow.
Adjust configurations or correct spelling to resolve the issue.
Once fixed, errors will disappear from the tab.
Open the Evaluations Tab to create test scenarios for your workflow.
Click the Autogenerate button to generate test cases automatically. You can provide additional context for test case generation, such as specifying genres, topics, or scenarios.
Specify how many test cases you want to generate.
Review the generated cases to ensure they cover a variety of inputs.
Use the Run button in the top-left to execute the workflow multiple times with the test cases.
After execution, review the results in the right-hand column. Check whether the outputs align with expected behavior.
Once your AI Agent has been tested and evaluated to ensure it functions correctly, the next step is to publish it. Publishing makes your AI Agent available for use and integration.
Set Metadata
Navigate to the Explorer Tab and click on the root menu.
Add the following details:
Name: Give your AI Agent a descriptive and unique name.
Description: Provide a short explanation of what your AI Agent does.
Go through other sections and configure other settings.
Publish the Workflow
Click the Publish button to finalize your AI Agent.
Once published, the AI Agent will be accessible via the Published Tab in your workspace. Click on the confirmation message to view your AI Agent.
Integration Options
After publishing, you can integrate your AI Agent with external tools and platforms, such as:
Zapier: Automate workflows across apps.
Make: Build advanced automation scenarios.
API: Use the provided documentation to connect via custom APIs.
Test Request Integration
Use the Create Test Request feature to validate API calls.
Configure the API request by:
Selecting a workflow.
Providing launch variables (e.g., topic = "daily tech news").
Run the test and review the logs for successful execution.
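A test request like the one above boils down to sending a target workflow plus launch variables. The sketch below only constructs such a payload; the field names are assumptions for illustration, not MindStudio's documented schema — consult the API Reference for the real request format.

```python
# Hypothetical payload for a test request: a workflow identifier plus
# launch variables. Field names ("workflow", "variables") are illustrative
# assumptions, not MindStudio's actual API schema.

import json

payload = {
    "workflow": "Main.flow",
    "variables": {"topic": "daily tech news"},
}
body = json.dumps(payload)
```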
View the AI Agent in the Workspace Overview
View the newly published AI Agent under the Home Tab.
Confirm it is listed alongside other published Agents.
Congratulations on creating and publishing your first AI Agent in MindStudio! We’ve covered everything from writing system prompts and building workflows to testing, debugging, and publishing. MindStudio is a powerful and versatile platform, enabling you to create AI Agents tailored to specific needs.
Remember, this guide is just the beginning. The possibilities with MindStudio are endless, and we’re excited to see what you’ll build. If you ever need help, our Support Chat is just a click away.
Learn about prompt engineering for AI tasks.
Prompts are the foundation of how your AI Agent understands and executes tasks. A well-written prompt ensures your AI Agent delivers precise and meaningful results.
A prompt is a set of instructions that tells the AI Agent what to do. It provides the context and parameters for generating the desired output. Just like giving someone directions, a good prompt ensures the AI knows exactly what to deliver. In MindStudio, prompts can be used at two levels: System Prompts and Block Prompts.
The System Prompt appears in the Prompt Tab of a workflow file. It serves as the AI Agent's core instructions, defining its role, capabilities, and constraints and acts as the foundation, guiding how the AI behaves throughout the workflow.
When you write a system prompt, you’re establishing the AI's "role" and general approach to tasks. Every action in the workflow will follow this overarching guidance unless overridden by specific block prompts.
You can write a system prompt manually by typing into the blank space of the Prompt Tab, or you can click the Generate Prompt button at the bottom left to have the Prompt Generator write the prompt for you.
Click on the Generate Prompt button at the bottom left of the prompt area. This will open the Prompt Generator Modal.
Using natural language, enter a brief description of what your AI Agent is supposed to do.
Example:
After entering your description, click the Generate button. The system will automatically create a structured prompt based on your input.
Carefully review the generated prompt to make sure it aligns with your requirements. You can edit the prompt directly in the editor if adjustments are needed.
A block prompt is used for specific tasks within the workflow. While the system prompt provides overall guidance, a block prompt gives detailed instructions for a particular step.
Example of a Block Prompt (within a Generate Text block):
Here, the block prompt tells the AI to focus on summarizing a specific piece of information (e.g., the result of a Google search). This allows you to customize how the AI performs at different stages of the workflow.
A good prompt is clear, specific, and provides enough context for the AI to understand what’s needed.
Clarity: Use straightforward language to avoid confusion. Clearly define the AI Agent's purpose and main tasks
Specificity: Be detailed about what you want. Specify any limitations or guidelines that your AI Agent should follow when executing its task
Context: Explain the purpose or provide a scenario to guide the AI. Context can be provided via direct instruction or by calling Variables within a prompt.
Output Format: Outline the expected format or structure of its output.
Markdown is a lightweight formatting language that can be used to structure prompts, making them clearer and easier for the AI to interpret. By using Markdown, you can organize your instructions in a readable and visually structured way.
Clarity: Break down complex instructions into digestible sections.
Readability: Using formatting like headers, bullet points, and code blocks to make prompts easier to follow.
Consistency: Ensure your prompt has a standardized structure, especially for complex tasks.
Debugging: Allows for easier debugging and refinement of prompts.
Here are some common Markdown features and how to use them:
Headers
Purpose: Use headers to introduce sections or tasks.
Variations:
# for H1
## for H2
### for H3
#### for H4
Example:
Bullet Points
Purpose: Break down requirements into bullet points for clarity.
Example:
Numbered Lists
Purpose: Use for step-by-step instructions.
Example:
Code Blocks
Purpose: Use code blocks for examples or templates.
Example:
Bold and Italics
Purpose: Emphasize key instructions.
Example:
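Combining the features above, a structured prompt might be assembled like this. The section names and content are illustrative only:

```python
# Assemble a Markdown-structured prompt that uses headers, bullet
# points, a numbered list, and bold emphasis. Content is illustrative.

sections = [
    "# Role",
    "You are a news summarization assistant.",
    "## Requirements",
    "- Keep summaries under 100 words.",
    "- Cite the source for each item.",
    "## Steps",
    "1. Read the article.",
    "2. Extract the key points.",
    "3. Write the summary in **plain language**.",
]

prompt = "\n".join(sections)
```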
The Auto-Enhance feature helps polish and refine your prompts after you've made modifications. It analyzes your prompt for clarity, formatting, and potential ambiguities, then suggests improvements while maintaining your original intent.
Access the feature: Click the "Enhance" button in the bottom bar or use the keyboard shortcut Option + K
Review suggestions: The system presents an enhanced version of your prompt with improvements in:
Grammar and spelling
Markdown formatting consistency
Clarity and precision of instructions
Structure and organization
Accept or reject changes: Choose whether to implement the suggested improvements
This feature works both in the main Prompt Tab and in fullscreen Send Message blocks, making it easy to maintain high-quality prompts throughout your workflow.
Learn about choosing the right AI models for the right task.
MindStudio provides access to over 50 AI models from leading providers, allowing you to tailor your workflows for a variety of tasks and use cases.
The underlying model is the AI model that is inherited by the rest of the workflow. Its configuration will be used as the default model for all blocks that use AI.
Access Model Settings
Navigate to the Model Settings Tab in your workflow editor.
Choose an AI Model
Click on the Model Card to view information about the AI Model.
NOTE: Some models may be locked and can be unlocked by adding a payment method in your Billing Settings.
Click the Use Button to confirm your selection
(Optional) Click the heart icon to add preferred AI Models to your Favorites section
There are two configuration options for AI models:
Temperature
Controls the randomness of responses:
Higher Temperature: More creative, but less predictable.
Lower Temperature: More consistent, but less diverse.
Max Response Size
Defines the maximum size of the model’s response in tokens.
Example: 400 tokens ≈ 300 words.
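The 400 tokens ≈ 300 words rule of thumb above implies roughly 0.75 words per token. A sketch of that conversion; actual ratios vary by model and tokenizer:

```python
# Rough token <-> word conversion based on the heuristic above
# (400 tokens ~ 300 words, i.e. ~0.75 words per token). Real ratios
# depend on the model's tokenizer.

WORDS_PER_TOKEN = 0.75

def approx_words(max_tokens):
    return round(max_tokens * WORDS_PER_TOKEN)

def approx_tokens(word_count):
    return round(word_count / WORDS_PER_TOKEN)
```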
Start with the temperature slider in the center and adjust based on your needs.
Use larger response sizes for detailed outputs and smaller sizes for concise ones.
Every block that uses AI has the ability to override the underlying model's settings and use its own unique AI model and configuration. This allows you to use different models in the workflow based on the task required at each specific step of the workflow.
Choosing the right AI model is critical to ensuring your workflow meets performance, cost, and quality requirements. MindStudio provides a variety of models with different capabilities, trade-offs, and configurations. When selecting a model, it's often necessary to balance these considerations to align with your workflow's goals and constraints.
AI models come with varying pricing structures based on usage, typically measured in tokens for prompt and response.
Use cost-effective models for high-volume, repetitive tasks (e.g., bulk summarization). Opt for premium models only when high output quality is critical.
Latency refers to the time the model takes to generate a response. Low-latency models are essential for real-time or interactive workflows.
Prioritize low-latency models for use cases like chatbots or live applications. For non-time-sensitive workflows (e.g., scheduled reports), higher latency models with better quality may be acceptable.
Different models vary in their ability to generate coherent, creative, or factual responses. Output quality depends on the model’s training and capabilities.
Choose advanced models for nuanced tasks like legal summaries or creative writing. Use simpler models for straightforward tasks like data extraction.
The context window determines the maximum amount of text the model can process at once. Larger context windows are essential for tasks involving lengthy inputs.
Use models with large context windows for summarizing lengthy documents or analyzing extensive datasets. For shorter inputs, a smaller context window may suffice and reduce costs.
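The trade-offs above (cost, latency, quality, context window) can be expressed as a simple filtering-and-ranking helper. The model names and attribute values below are made up for illustration; they are not MindStudio's model catalog.

```python
# Sketch of model selection as filter-then-rank: keep models that meet
# the workflow's quality and context requirements within budget, then
# prefer the cheapest and fastest. All model data is illustrative.

MODELS = [
    {"name": "fast-basic", "cost": 1, "latency": 1, "quality": 2, "context": 8_000},
    {"name": "balanced",   "cost": 2, "latency": 2, "quality": 3, "context": 32_000},
    {"name": "premium",    "cost": 4, "latency": 3, "quality": 5, "context": 128_000},
]

def pick_model(min_quality, max_cost, min_context):
    candidates = [
        m for m in MODELS
        if m["quality"] >= min_quality
        and m["cost"] <= max_cost
        and m["context"] >= min_context
    ]
    # Among qualifying models, prefer the cheapest, then the fastest.
    candidates.sort(key=lambda m: (m["cost"], m["latency"]))
    return candidates[0]["name"] if candidates else None

choice = pick_model(min_quality=3, max_cost=3, min_context=16_000)
```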
Learn how to design automated workflows for your AI Agents
The Automations Tab is where you design the workflow for your AI Agent. This section allows you to define the sequence of actions your AI Agent will follow.
Workflows are sequences of automated actions that your AI Agents follow when they are run.
After opening the Automations Tab, you’ll see the Automations Canvas which displays all of the individual actions that your workflow will execute when run. These are represented by Blocks on the canvas.
Plan Before You Build: Outline the steps your workflow will perform.
Use Helpful Variable Names: Name variables descriptively to make the workflow easy to debug.
Test as You Go: Regularly test each block to identify and resolve issues early. To test out the specific output of blocks that use AI, you may use the Profiler to compare results.
The canvas is infinitely scrollable in all directions. At the bottom of the canvas, you’ll find several controls to help you navigate and annotate the canvas as you design your Workflow.
Using a mouse:
Scroll to pan up and down the canvas.
Shift + Scroll to pan left and right across the canvas.
Using the Pan Tool:
Click the Pan Tool icon or use the H hotkey to activate the Pan Tool.
Click and drag in any direction to pan around the canvas. NOTE: You will not be able to select blocks while the Pan tool is activated.
Click the Select Tool icon or use the V hotkey to deactivate the Pan Tool.
Using a mouse:
On Mac, use CMD + Scroll to zoom in and out.
On PC, use CTRL + Scroll to zoom in and out.
Using the Zoom Controls:
Click the Zoom In icon or use the + hotkey to zoom in.
Click the Zoom Out icon or use the - hotkey to zoom out.
If you navigate away from your blocks and can no longer find them, you can reset and center all of the blocks on the canvas by clicking the Reset view icon or using the R hotkey.
If your workflow becomes long and complex, you may want to consider tidying it up using Auto arrange. This tool will align all blocks vertically on the canvas.
You can annotate the canvas using the Note Tool to add text notes or label groups of blocks.
Using the Note Tool Controls:
Click the Note Tool icon to create a new note.
Select the color of your note.
Click on the note to add text content.
Add an optional label to the top of your note. If no label text is entered, no label is shown.
Click and drag the anchors at each corner of the note to adjust its size.
Between Existing Blocks:
Select the + button between two connected blocks to open the Block Menu.
Select the block that you’d like to add.
The new block is automatically connected between the two blocks.
Anywhere on the Automations Canvas:
Right-click anywhere on the canvas to open the Block Menu.
CTRL + Click anywhere on the canvas to open the Block Menu.
Click on the delete icon at the top-right corner of any block.
Confirm the delete action.
This block initializes the workflow. It can be triggered on demand or on a defined schedule, and allows you to define launch variables, which provide dynamic values that are passed through the workflow.
There are many kinds of blocks that you can add to your workflow. All blocks will have different configuration options depending on the block that you select.
Types of blocks include:
AI Tools: Generate text, generate images, analyze images, etc.
Context Blocks: Gather context for the AI. Context is saved to a variable.
Routing Blocks: Add conditional branches to route the workflow in various ways.
This block marks the end of the workflow. It has customizable end behavior, such as sending email notifications or returning a structured output.
Variables in MindStudio are dynamic placeholders that store data during workflow execution. They allow you to pass information between blocks and workflows seamlessly.
Launch Variables: These are defined in the Start Block of your workflow.
Runtime Variables: Some blocks, such as Generate Text Blocks or User Input Blocks, can generate new variables while the workflow is running. For example, after performing a Google Search, the block can store the results in a variable called google_result.
To use a variable in any block, reference it using double curly braces: {{variable_name}}.
Generate Text prompt example calling a variable:
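For instance, a prompt might interpolate variables collected earlier in the workflow (the variable names below are illustrative):

```handlebars
Write a short welcome message for {{first_name}} that mentions their industry, {{client_industry}}.
```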
Learn how to properly use dynamic variables in your AI Workflows
Dynamic Variables in MindStudio enable workflows to generate and modify user inputs in real time, allowing for more adaptive and interactive experiences. This feature is particularly useful for scenarios where follow-up questions depend on previous responses, such as product recommendations, lead generation, and troubleshooting agents.
By leveraging variables, arrays, and Handlebars helpers, Dynamic Inputs allow workflows to progressively collect information and adjust questions based on user input.
Dynamic Inputs use variables to store and retrieve responses, allowing the workflow to adjust its next question dynamically. This pattern is useful in AI Agents that require multiple layers of questioning before generating a final result.
Inputs Array – myVariable[] – Stores generated inputs in an array.
(optional) Handlebars helpers – Used to quickly find items in the array variable.
Start by gathering basic user input using a User Input Block. This might include general details like name, contact information, or the primary topic of inquiry.
Example: "What part are you looking for?"
The response is stored in a variable, such as partRequest.
Once the user provides an initial answer, the workflow generates a follow-up question based on the input. This follow-up question is:
Added to the Questions Array variable: questions[].
Displayed dynamically as the next prompt using: {{lastItem questions}}.
Example:
The user inputs "fire sprinkler".
The workflow generates "What type of sprinkler are you looking for?" and stores it in questions.
Each response is stored in the Answers Array variable (answers[]), ensuring all collected data is saved for final processing.
Example:
The user selects "Pendant", which is added to answers.
The Logic Block determines:
If more information is needed, the workflow loops back, dynamically generating another follow-up question.
If sufficient responses are collected, the workflow moves forward to generate the final result.
Example:
Second Follow-Up Question: "What temperature rating do you need?"
Third Follow-Up Question: "What finish or color scheme?"
Once all necessary inputs are collected, the workflow processes the information and generates a response based on the accumulated answers.
Example Output: "You are looking for a brass pendant fire sprinkler with a 135°F temperature rating."
This workflow dynamically adjusts to refine user requests:
Each follow-up question is stored in questions[], and the corresponding user input is stored in answers[].
Dynamic Inputs open new possibilities for AI-driven workflows:
Lead Qualification – Tailor follow-up questions based on user responses to better qualify leads.
Product Recommendations – Ask clarifying questions to suggest the best product options.
Troubleshooting Assistants – Adjust questions dynamically based on user-reported issues.
Dynamic Variables leverage arrays to adjust variables in real time.
The Logic Block determines when to stop questioning and proceed with generating results.
Handlebars helpers like {{lastItem}} retrieve the most recent question for dynamic presentation.
This pattern is a powerful way to build interactive and intelligent AI Agents that adapt to user needs dynamically. While it requires a deeper understanding of arrays and variable management, the flexibility it provides makes it a game-changer for many AI-driven workflows.
Use an AI model to generate text in your AI Agents.
The Generate Text Block sends a text prompt to an AI model and returns the AI Model’s response.
Define the instructions sent to the AI model. Use {{variables}} to make the prompt dynamic and context-aware.
Choose how the AI response is handled:
Display to User (Default): Shows the response to the end user.
Assign to Variable: Creates a new variable and saves the AI model’s response to it. Enter a variable_name to store the response for later use in the workflow.
Choose the format of the AI response:
Outputs plain or markdown-formatted text, suitable for display or emails.
Outputs the response in JSON format, ideal for structured data outputs required for integrations or further processing via code.
Example Schema:
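For instance, a schema for a structured summary might look like this (field names are illustrative):

```json
{
  "title": "string",
  "summary": "string",
  "key_points": ["string"]
}
```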
Outputs data in CSV format, ideal for tabular data or spreadsheets.
Example Schema:
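For instance, a sample schema for tabular output might look like this (column names and values are illustrative):

```csv
name,email,signup_date
Jane Doe,jane@example.com,2024-01-15
```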
For both JSON and CSV outputs, you can explicitly define the output structure by providing a sample output schema.
The default setting is inherited from the underlying model configured in the Model Settings Tab. You can override this setting by choosing and configuring a different AI Model specifically for this block.
Model Selection: Choose a different model if needed for the block.
Temperature: Adjust randomness in the response:
Lower values = more predictable, consistent results.
Higher values = more creative, varied responses.
Response Length: Limit the output size in tokens to suit your needs.
The Generate Text Block supports conditional logic to dynamically adjust the text prompt based on workflow variables. This is done using {{#if}}, {{else}}, and {{/if}} statements.
Example:
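For instance, a prompt could branch on a variable set earlier in the workflow (the variable name is illustrative):

```handlebars
{{#if is_premium}}
Write a detailed, in-depth analysis of the topic.
{{else}}
Write a brief, high-level summary of the topic.
{{/if}}
```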
Combine multiple {{#if}} statements for complex logic:
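A sketch of nested conditions (variable names are illustrative):

```handlebars
{{#if language}}
Respond in {{language}}.
{{#if formal_tone}}
Use a formal register.
{{/if}}
{{else}}
Respond in English.
{{/if}}
```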
For more information on Markdown, see the Writing Prompts with Markdown page.
For full details on specific AI models, browse all models in the Model Settings Tab.
(optional) User Input Block with Dynamic Prompts – Add {{myVariable[]}} to the title of a User Input to display dynamically generated prompts.
Purpose: Acts like the big-picture instruction. It’s the AI’s mindset for the entire workflow.
Example: "You are a tech-savvy assistant that simplifies complex topics for general audiences."
Purpose: Adds specific, step-by-step instructions for individual tasks.
Example: "Write a one-paragraph summary of the latest AI breakthrough using non-technical language."
Summarizing an Article
"Summarize this article in 3 bullet points, focusing on key trends in technology."
"Summarize this."
Writing an Email
"Write a professional email to a client, introducing our new AI tool and inviting them for a demo."
"Write an email about our product."
Data Analysis
"Analyze the provided sales data and list the top 3 trends for Q4 in bullet points."
"Analyze this data."
Creative Writing
"Write a 150-word humorous story about a robot learning to make ice cream."
"Write a funny story about a robot."
Initializes the Workflow. Every Workflow includes a Start Block that cannot be deleted.
Uses AI to generate text based on a prompt. Responses can be displayed to the user or assigned to a variable.
Uses AI to generate an image based on a text prompt. Image URL is assigned to a variable.
Generates a chart based on JSON Schema.
Displays content to the user exactly as it is written. Can call variables.
Uses AI to generate a voice-over audio file that matches the provided text. The audio URL is assigned to a variable.
Uses AI to analyze the content of a provided image URL. Responses can be displayed to the user or assigned to a variable.
Displays user inputs in a single-page form to gather context for the AI. Values from human inputs are assigned to variables.
Queries a Data Source and returns relevant chunks of text. Returned text is assigned to a variable.
Executes JavaScript code. Configurations and outputs will vary.
Uses a web scraper to gather text from a URL. Returned text is assigned to a variable.
Extracts text from PDF, CSV, HTML, and TXT files.
Sends a message to a Slack channel.
Creates branching paths in a workflow. Routes the workflow based on human selection.
Creates branching paths in a workflow. Routes the workflow based on AI selection.
Routes the workflow directly into another discrete workflow. Variables cannot be passed into the new workflow.
Runs a sub-workflow. Variables can be passed through the workflow as Launch Variables. Returns the output of the workflow.
Ends the workflow. Can be configured with different end behaviors such as a front-end chat interface, email notification, and more.
1. User inputs "Fire Sprinkler" → AI asks: "What type of sprinkler are you looking for?"
2. User inputs "Pendant" → AI asks: "What temperature rating do you need?"
3. User inputs "135°F" → AI asks: "What finish or color scheme do you need?"
4. User inputs "Brass" → the final output is generated.
Leverage Retrieval Augmented Generation (RAG) in your AI Agents
A Data Source in MindStudio is a repository of files that can be queried by AI workflows to supplement the AI’s responses with domain-specific knowledge. By integrating custom data into your workflows, Data Sources enable Retrieval Augmented Generation (RAG)—a method where the AI retrieves relevant information from external data sources to generate more accurate and contextually aware outputs.
When you upload files to a Data Source, they are converted into a vector database. This database allows the AI to query your custom data, retrieve relevant context, and incorporate it into its responses.
RAG combines AI models with an external knowledge base to enhance the AI's capabilities. Instead of relying solely on pre-trained data, the AI retrieves relevant information from the Data Source and integrates it into its response.
Before generating a response, the system retrieves relevant information from external sources, such as a knowledge base, database, or documents. This is typically done using vector embeddings and similarity search to find the most relevant pieces of information.
The retrieved knowledge is used as input or context for the AI model during the generation phase. This ensures that the output is informed by domain-specific information, increasing accuracy and relevance.
RAG combines the strengths of retrieval-based systems and generative AI. Instead of relying solely on the model's pre-trained knowledge, it integrates current and specific knowledge at runtime. RAG also reduces hallucinations (making up information) since the generated text is guided by real, retrieved data. Lastly it can handle large, external knowledge bases without the need to retrain all of the information directly as context for the model.
To make the AI use a Data Source, you must include a Query Data Source Block in your workflow. This block allows you to instruct the AI on how to query the uploaded data and integrate relevant information into its responses.
Upload Product Manuals: Create a Data Source with PDFs of your product documentation.
Query Data Block: Use the block to retrieve specific sections based on user queries, such as “How do I reset my device?”
Generate Text Block: Combine the retrieved data with the AI’s language capabilities to deliver a clear and accurate response.
Navigate to the Data Sources Folder from the Explorer Tab.
Click the (+) icon to create a new Data Source.
Configure the Data Source with a name and description.
Upload your files.
PDF (.pdf)
CSV (.csv)
Word Document (.docx)
Excel Spreadsheet (.xlsx)
Text File (.txt)
HTML File (.html)
500 MB per file
5 Million words per file
Maximum of 150 files per Data Source
Note: Uploading files does not automatically make the AI aware of everything in them. You must use the Query Data Source Block to bring context from your Data Source into the workflow.
Once a Data Source is created, you can view and explore its individual files to verify the data or ensure that the uploaded content has been processed correctly. Double-clicking on a Data Source opens a detailed view of its contents. When you open a file within a Data Source, the interface provides multiple tabs for exploring different aspects of the data:
Preview: Displays the file preview, giving you a quick way to review the original content. This is useful for verifying that the file has been uploaded correctly.
Extracted Text: Shows the plain text extracted from the file, stripped of any formatting. This is the data the AI will query during workflows.
Raw Chunks: Displays the segmented "chunks" of text that the system uses to create the vector database. Chunks ensure the data is broken down into manageable, queryable parts.
Raw Vectors: Shows the numerical vector representations of the data. These vectors are used by the AI for retrieval during queries.
Force reloading ensures that the Data Source reflects the latest changes after you’ve uploaded new or updated files.
Open the Data Sources Folder in the Explorer Tab.
Right-click the Data Source you want to reload.
Select Force Reload from the drop-down menu.
The Data Source will remove all previously indexed data that is no longer included and update accordingly.
You can rename a Data Source through the Data Source Editor or directly from the Explorer Tab.
From the Data Source Editor:
Open the Data Sources Folder.
Select the Data Source to rename.
In the editing screen, update the name in the Name field.
Changes are saved automatically.
From the Explorer Tab:
Open the Data Sources Folder.
Right-click the Data Source.
Select Rename and enter the new name in the form field.
Open the Data Sources Folder in the Explorer Tab.
Right-click the Data Source you want to delete.
Select Delete from the drop-down menu.
Confirm the deletion.
The Data Source Details panel provides an overview of a Data Source's configuration and its current state. This section also includes the Query Tester, a tool to verify that your Data Source is properly loaded and functioning as expected.
This section displays essential information about your Data Source, helping you understand its status and content.
Name: The name of the Data Source. This can be customized to make it easier to identify.
Description: A brief optional description of the Data Source. If no description is provided, it will display as "No description."
Status: Indicates whether the Data Source is loaded and ready for use. Includes a word count progress bar (e.g., "0.1k/5m words") to show how much data has been loaded relative to the word limit.
Documents: The number of files or documents currently included in the Data Source.
Words: The total number of words across all documents in the Data Source.
Vectors: The total number of vectorized entries in the Data Source. Vectors are created during the conversion of documents into a searchable format.
The Query Tester is a built-in tool to test queries and validate that your Data Source is working correctly.
Query Input: A text box where you can type your query. This is how you simulate an AI prompt or user question to test the Data Source’s retrieval capabilities.
Run Query Button: Clicking the play button sends the query to the Data Source and returns results, allowing you to verify the response.
If your Data Source contains product manuals, you can test a query like:
Data Sources enhance your AI workflows by integrating custom knowledge. Follow these best practices to maximize their effectiveness:
Group related files in the same Data Source for efficient queries and use clear, descriptive names like "Product_Manuals_2024."
Clean files to ensure relevance and accuracy. Use supported formats like PDF, CSV, DOCX, and stay within upload limits (500 MB per file, 150 files per Data Source).
Write specific prompts to guide the AI in retrieving the right information. Combine queries with variables for dynamic, context-aware responses.
Regularly refresh content when information changes and use Force Reload to apply updates after modifying or adding files.
Segment large datasets into smaller, focused Data Sources and avoid overloading Data Sources to maintain query accuracy and speed.
Analyze an image URL based on text instructions.
The Analyze Image Block processes and analyzes an image URL based on provided text instructions. It generates a text response based on the analysis that is assigned to a variable.
Provide text instructions for analyzing the image. Use {{variables}} to make the prompt dynamic based on a previous step in the workflow.
Identify the objects in the image and describe their arrangement. Focus on colors and sizes.
Specify the URL of the image to be analyzed. Use a {{variable}} to dynamically reference an image URL generated or fetched earlier in the workflow.
Creates a new variable and saves the generated text response to it. Enter a variable_name to store the response for later use in the workflow.
Choose the AI Model that you’d like to use for the image analysis. You may adjust the model’s Temperature and Max Response Size.
Creating a well-crafted prompt is essential for accurate and meaningful image analysis. Follow these guidelines to ensure your prompts guide the AI model effectively:
Be Clear and Specific: Clearly describe what you want the AI to analyze or identify in the image. Focus on specific elements or features relevant to your use case.
Example: "Identify the main objects in the image, including their colors, sizes, and positions.”
Define the Objective: Explain the purpose of the analysis to provide context for the AI. This helps tailor the response to your needs.
Example: "Analyze the image and describe how the objects are arranged for interior design recommendations.”
Break Down Complex Instructions: Use simple, step-by-step instructions when requesting detailed analyses.
Example: " 1. List all objects visible in the image. 2. Describe their positions relative to each other. 3. Highlight any unusual or standout features.”
Focus on Key Details: Avoid asking for unnecessary information to keep the analysis concise.
Example: "Describe the objects on the table, focusing on their materials and colors.”
Include Contextual Cues: Provide additional context for better results, such as the setting, purpose, or focus of the image.
Example: "This image shows a park scene. Identify all visible plants, trees, and animals.”
Presents messages or outputs directly to users
The Display Content Block allows you to present messages or outputs directly to users in your workflows. It is used for delivering dynamic content with support for markdown formatting.
Define the message you want to display to the user. The Display Content Block supports markdown formatting, allowing you to structure text, add emphasis, or include links for a polished presentation. Use {{variables}} to make the message dynamic.
Learn more about Markdown formatting → Writing Prompts with Markdown.
Collect data directly from end users
The User Input Block lets your workflows collect data directly from end users. Unlike AI-focused blocks, this block presents the forms or interfaces where users can provide input.
A User Input in MindStudio is a form-based interface that collects data directly from your end users. The collected data is stored as a variable and serves as context for downstream processes and AI-powered actions within your workflow.
When New User Inputs are created, they are automatically added to the User Inputs folder in the left-side Explorer tab.
Click on the + button at the bottom of the User Inputs Configuration Panel.
In the modal, choose from created User Inputs to add to the block.
(Optional) Click and drag User Inputs up and down to reorder them.
Click the Add button to confirm your choices.
Click on the (+) button at the bottom of the User Inputs Configuration Panel.
In the modal, click the Create New… button at the bottom left. A new User Input will be created in Explorer within the User Inputs folder, and will be automatically added to the block.
Configure the user input.
Hover over the User Inputs folder, then click on the + button to the right of the folder. A new User Input will be created in Explorer within the User Inputs folder.
Configure your User Input.
After configuring the User Input, you will need to add the User Input to the User Input Block.
Open the User Inputs folder from the Explorer tab.
Click on the User Input that you’d like to modify
Changes to configurations are automatically saved.
Open the User Inputs folder in the Explorer tab.
Right-click on the User Input to duplicate.
Select the Duplicate option.
The duplicated User Input appears in the User Inputs folder with the original’s name followed by a number.
Deleting a User Input removes it from all User Input Blocks across all Workflows.
Open the User Input folder in the Explorer tab.
Right-click on the User Input to delete.
Select the Delete option from the drop-down menu.
Confirm your deletion.
The type of User Input determines its functionality and format. See the table below for a full list of User Input types.
Short Text
Collects small amounts of text for concise inputs.
Names, URLs, locations.
"Enter your city."
Long Text
Collects large amounts of text, such as detailed descriptions or pasted content.
Long-form responses, content uploads.
"Describe your project in detail."
Text Choice
Allows users to select one or multiple text-based choices.
Yes/No questions, multiple-choice selections.
"Which services do you use? (Select all that apply)"
Image Choice
Enables selection of one or multiple images, each labeled with text.
Visual comparisons or preferences.
"Select your preferred design."
Rating
Provides a 1–5 scale rating input.
Feedback or satisfaction scoring.
"Rate our service (1=Poor, 5=Excellent)."
Date
Displays a date picker for selecting specific dates.
Scheduling or logging events.
"Select your appointment date.”
Display
Shows static text or images for guidance or instructions. No user input required.
Providing information or directing users.
"Enter a URL to analyze the page content.”
Upload File
Presents an upload option for text-based files. Supported formats include Excel, CSV, Word, Text, PDF, HTML.
Uploading documents for AI analysis.
"Upload your report for review.”
Upload Image
Allows users to upload images for analysis or storage.
Collecting visual content or custom inputs.
"Upload an example visual."
The variable_name is a unique identifier for the User Input. It is used to reference the collected data in downstream workflow blocks. Use a variable name that is unique to the data being collected. (Examples: customer_goal, client_industry, first_name)
Configurations define how the User Input behaves and what options are available. These settings vary depending on the input type.
An optional image displayed above the input field when presented to the end user. It enhances visual appeal and provides context to your end user.
Pre-fills the input field with a sample response when previewing workflows. This simplifies front-end testing when viewing a draft preview of an AI Agent.
Explore the diverse range of AI Agents you can build with MindStudio
MindStudio is a powerful and versatile platform that empowers you to create all kinds of AI Agents tailored to your unique needs. Whether you're streamlining operations, enhancing customer experiences, or boosting productivity, MindStudio provides the tools to design these intelligent solutions that fit neatly into your existing business or tech stack.
This article explores the diverse range of AI Agents you can build with MindStudio, showcasing real-world examples to spark inspiration and help you unlock the full potential of AI in your workflows.
The examples we've explored are just a small glimpse into what’s possible with MindStudio. From automating routine tasks to crafting sophisticated AI-powered solutions, the possibilities are endless. MindStudio puts the power of AI at your fingertips, allowing you to design workflows that are as unique as your bespoke business needs.
These examples are a starting point—what you build is entirely up to you. The possibilities are vast, and the direction you take is yours to define. With MindStudio, you have the flexibility and capability to build exactly what your business needs to thrive in the age of AI.
Learn how to properly leverage variables in your AI Workflows
Variables in MindStudio are dynamic placeholders that store data during workflow execution. They allow you to pass information between blocks and workflows seamlessly.
Example:
Variable Name: userName
Usage: "Hello, {{userName}}! Welcome to our app."
Variables are created automatically in MindStudio whenever:
A User Input collects data.
A block generates an output (e.g., Generate Text Block, Analyze Image Block).
You manually define them in the Start Block.
These are defined in the Start Block of your workflow. Values for these variables are passed in as arguments when a workflow is run via API or via the Run Workflow block.
Some blocks, such as Generate Text Blocks or User Input Blocks, assign values to variables while the workflow is running. For example, after performing a Google Search, the block can store the results in a variable called google_result.
To use a variable in any block or prompt, reference it by enclosing the variable name in double curly braces: {{variable_name}}.
Example Calling Variables in a Generate Text Block:
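For instance (using the google_result variable described above):

```handlebars
Summarize the following search results in 3 bullet points:

{{google_result}}
```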
MindStudio provides tools for extracting specific values from JSON objects using JSON Path syntax and the get helper. This allows workflows to handle and manipulate structured data with precision, making them more dynamic and adaptable.
get Helper – Query JSON Variables
The get helper allows you to query JSON variables for specific values using JSON Path expressions. This feature is especially useful when working with nested or complex JSON structures.
Given the following JSON assigned to myJsonVariable:
Use this to extract the email address:
Output: alice@example.com
Given the following JSON:
Use this to extract the name of the first item:
Output: Foo
JSON Path also allows for querying multiple elements. Given the following JSON:
Output: ["Foo", "Bar"]
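Putting the pieces together, a sketch of these queries (the JSON shapes and exact helper invocation are illustrative assumptions; the outputs match those stated above):

```handlebars
{{! myJsonVariable: { "user": { "name": "Alice", "email": "alice@example.com" } } }}
{{get myJsonVariable "$.user.email"}}   {{! → alice@example.com }}

{{! myJsonVariable: { "items": [ { "name": "Foo" }, { "name": "Bar" } ] } }}
{{get myJsonVariable "$.items[0].name"}}   {{! → Foo }}
{{get myJsonVariable "$.items[*].name"}}   {{! → ["Foo", "Bar"] }}
```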
Use a JSON Path Tester: Tools like JSONPath Online Evaluator can help you refine and test your JSON Path queries.
Validate JSON Structure: Ensure your variable contains valid JSON data before attempting to extract values.
Handle Missing Values: Include fallback logic in your workflow to handle cases where the expected path does not exist in the JSON.
MindStudio leverages the Handlebars templating language to make working with variables intuitive and powerful. Handlebars allows you to include, manipulate, and conditionally render data directly in your prompts, outputs, and logic.
Handlebars supports if-else logic for dynamic outputs:
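For instance (variable name from the example above):

```handlebars
{{#if google_result}}
Base your answer on these search results: {{google_result}}
{{else}}
Answer from your general knowledge.
{{/if}}
```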
For a full list of expressions, see Handlebars Documentation.
In addition to standard Handlebars features, MindStudio introduces two special methods for advanced functionality:
{{json varName}}
Converts a JSON object into a string format.
Example:
If userProfile contains:
Usage:
Output:
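A sketch of the helper in use (the contents of userProfile are illustrative):

```handlebars
{{! userProfile: { "name": "Alice", "plan": "pro" } }}
{{json userProfile}}
{{! → the object serialized as a string: {"name":"Alice","plan":"pro"} }}
```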
{{sample varName number token}}
Extracts a portion of the variable's content based on specified parameters.
Parameters:
varName: The variable to sample.
number: The number of items (e.g., lines, words, or letters). If negative, starts from the end.
token: The type of unit to extract (line, word, or letter).
Examples:
Extract the first 5 words:
Extract the last 3 lines:
Extract the first 10 letters:
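Sketches of each call (myText is an illustrative variable; the negative-number form follows from the parameter description above):

```handlebars
{{sample myText 5 word}}
{{sample myText -3 line}}
{{sample myText 10 letter}}
```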
Convert text into voice-over audio
The Text to Speech Block converts written text into voice-over audio, allowing you to dynamically generate audio content based on workflow inputs.
Define the text you want to convert to audio. Use {{variables}} to dynamically insert text you’d like converted into audio.
Creates a new variable and saves the audio file URL to it. Enter a variable_name to store the response for later use in the workflow.
Choose the AI Model you’d like to use to generate the audio. Different models will have unique settings to adjust the output’s characteristics. Review the selected model’s specific options for a full list of configurations.
Dynamically gather information from a user.
The User Context Block uses AI to generate a series of questions for the user based on certain parameters.
Define the additional data you want to collect from the user. Like the Generate Text block, the prompt serves as a message to the AI, directing it to produce a specific output. In this instance, that output is a set of interview-style questions designed to gather more context from the user.
Specify the level of context gathered by the AI. This setting will inform how probing the questions are and how comprehensive the final report will be.
Three options are provided:
Quick (default)
Medium
Thorough
Set the maximum number of questions to be asked. By default, the maximum is set to 5.
Save the responses to a variable that can be used in other parts of the workflow. Example: My_Var
Select the output format for the response data:
Text (Default)
JSON
Extract text content from a file provided via a URL
The Extract Text from File Block allows you to extract text content from a file provided via a URL. This block is perfect for workflows that require processing text from uploaded files or external sources.
Provide the URL of the file to extract text from. You can enter the URL directly or use a {{variable}} to make it dynamic.
Example:
Static URL: https://example.com/document.pdf
Dynamic URL: {{file_url}}
Plain Text (.txt)
HTML (.html)
PDF Document (.pdf)
Spreadsheet (.csv)
The maximum file size is 10MB.
Creates a new variable and saves the extracted text to it. Enter a variable_name to store the response for later use in the workflow.
Validate File URLs: Ensure the provided URL points to a valid file with readable text content.
Use Variables: Leverage dynamic variables to adapt the block to multiple use cases without manually changing the URL.
Set Clear Outputs: Choose meaningful variable names to make workflow debugging and customization easier.
Monitor File Size: Keep file sizes within the 10MB limit to ensure smooth processing.
Use AI models to generate images in your AI Agents.
The Generate Image Block sends a text prompt to an AI model and returns an image URL based on the description provided.
Define the text description of the image you want the AI model to generate. Use {{variables}} to make the prompt dynamic based on a previous step in the workflow.
Creates a new variable and saves the image URL to it. Enter a variable_name to store the response for later use in the workflow.
Choose from MindStudio’s library of image-generation models, optimized for various styles and quality levels.
Each AI image model may have unique configuration options for fine-tuning outputs, such as additional style parameters, rendering quality, negative prompts, or advanced filters. Make sure you review the specific settings for each model you select.
Crafting effective prompts for image generation ensures that the AI model produces visuals that align with your expectations. A good prompt provides clarity, context, and enough detail to guide the model’s output without being overly restrictive.
Be Clear and Specific: Use precise language to describe the desired image. Focus on key visual elements, such as objects, environments, colors, and textures.
Add Context and Atmosphere: Describe the mood, setting, or story behind the image to provide creative direction.
Include Style and Aesthetic Details: Specify the art style or medium (e.g., photorealistic, watercolor, comic-style, abstract). Mention influences like "minimalist," "vintage," or "cinematic glow."
Use Action and Interaction: Incorporate actions or interactions to make the image dynamic.
Provide a Color Palette: Specify dominant colors to guide the tone of the image.
Specify Composition and Layout: Include details about positioning, perspective, and framing.
Add Creative Flourishes: Suggest unique or imaginative elements to include in the image.
Being Too Vague: "Create an image of a tree." → Lacks detail and context.
Overloading the Prompt: Avoid including too many unrelated elements, which can confuse the model.
Leaving Out Key Details: Forgetting to mention the desired style, mood, or setting can lead to generic results.
Generate a hyper-realistic image of a futuristic cityscape at sunset. The city should feature towering skyscrapers with glass facades reflecting the orange and pink hues of the sunset.
Include floating vehicles with soft neon underlights flying between the buildings, creating a sense of motion.
In the foreground, add a bustling street market with humans and humanoid robots interacting, showcasing a blend of futuristic technology and traditional market stalls.
Use a warm color palette dominated by orange, pink, and gold, with subtle accents of teal and blue in the neon lights.
The sky should be filled with soft clouds, and the setting sun should cast long shadows across the scene.
Ensure the image feels dynamic and alive, capturing both the grandeur of the skyline and the vibrancy of the street-level activity. The overall style should balance photorealism with a slight cinematic glow for a futuristic yet inviting atmosphere.
Create charts in your AI Agents
The Generate Chart Block creates a chart image based on JSON-formatted data, allowing you to dynamically generate visualizations in your workflows. The resulting image URL can be saved as a variable for further use in the workflow.
Provide the data for the chart in valid JSON format. This must follow the data object structure as used in Chart.js.
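As a sketch, a minimal Chart.js-style data object for a three-point dataset (the labels and values here are made up) could look like:

```json
{
  "labels": ["Jan", "Feb", "Mar"],
  "datasets": [
    {
      "label": "Monthly Sales",
      "data": [120, 90, 150]
    }
  ]
}
```

Only the data object goes in this field; the chart type and other options are configured elsewhere in the block.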
Specify the dimensions of the chart in pixels:
Width: The horizontal size of the chart.
Height: The vertical size of the chart.
Default dimensions are 500x300.
Creates a new variable and saves the chart image URL to it. Enter a variable_name to store the image URL for later use in the workflow.
Visit the QuickChart Gallery, where you can browse a variety of chart examples. Open a chart you like, then locate the data object in the chart configuration.
Copy the data object: copy only the portion labeled data from the chart's JSON.
Some chart examples are provided as plain JavaScript objects instead of JSON. To ensure compatibility:
Paste the data object into ConvertSimple: JavaScript to JSON Converter.
Review the output on the right side to confirm proper JSON formatting.
Copy to Clipboard.
Paste the JSON into the Chart Data Configuration: open the Manual Input field for the corresponding chart type inside the app, then paste the converted JSON data into the Chart Data configuration.
View Chart.js Docs to learn more about generating JSON for your charts →
Extract text content from a webpage
The Scrape URL Block allows you to extract text content from a webpage and use it within your workflow. This block is ideal for gathering data dynamically from online sources to provide context for your AI Agents.
Provide the webpage URL you want to scrape. You can input a static URL directly or use a {{variable}} for dynamic URLs.
Static: https://example.com/article
Dynamic: {{inputURL}}
Creates a new variable and saves the extracted text to it. Enter a variable_name to store the response for later use in the workflow.
Select the scraping provider to process the webpage. Different providers will have different configuration settings and outputs. Choose the one that works best for your needs.
The Scrape URL Block supports multiple providers to extract webpage content, each offering different levels of customization and functionality.
The default provider extracts basic text content from the provided URL without additional configuration options. It is suitable for quick, straightforward scraping tasks that do not require advanced customization.
When enabled, this setting returns only the main content of the page, such as the body text, while excluding headers, navigation bars, and footers. Disabling it includes the entire page content, including headers and sidebars.
When enabled, captures and returns a screenshot of the top of the page you are scraping. When disabled, it does not include a screenshot.
Allows you to specify a delay (in milliseconds) before scraping begins to let the page fully load. By default, no wait time is applied. For example, setting it to 500 waits half a second before scraping.
Converts all relative paths in the scraped content to absolute URLs when enabled. This ensures that links and resources in the scraped content are fully qualified. When disabled, relative paths remain as they are.
Lets you include custom HTTP headers with your scraping request. This is useful for adding cookies, specifying a User-Agent, or passing authentication tokens.
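For instance, a custom headers configuration might look like the following (all values here are placeholders you would replace with your own):

```json
{
  "User-Agent": "Mozilla/5.0 (compatible; MyScraper/1.0)",
  "Cookie": "session_id=<your-session-cookie>",
  "Authorization": "Bearer <your-api-token>"
}
```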
Allows you to define HTML tags to exclude from the scraped content. For instance, adding <footer> removes footer elements from the output.
When enabled, this option uses an LLM (Large Language Model) to extract structured data from the page. When disabled, the block returns raw textual content without further processing.
Validate URLs: Ensure the URL points to a publicly accessible page with the desired content.
Monitor Structure Changes: Webpages may change structure over time, which could affect scraping accuracy.
Use Variables: Leverage dynamic variables to adapt the block to multiple use cases without manually changing the URL.
Set Clear Outputs: Choose meaningful variable names to make workflow debugging and customization easier.
Integrate custom data into your AI workflows using RAG
Select the Data Source you want to query. This is the collection of documents you have uploaded and configured in the Data Sources Folder.
Use the dropdown menu to select an existing Data Source.
Click New... to create a new Data Source if none are currently available.
Creates a variable where the query results will be stored. Enter a variable_name to store the result of the query for later use in the workflow.
Define the number of results the block will retrieve from the Data Source. Each result will return a different chunk of retrieved text from the Data Source.
Enter the query prompt that instructs the AI on what information to retrieve. You can include {{variables}} to make the query dynamic and context-aware.
Crafting effective queries is crucial for retrieving the most relevant and accurate information from your Data Sources. A well-written query ensures that your AI can efficiently locate and use the data needed for your workflow.
Familiarize yourself with the content of your Data Source. Knowing the structure, topics, and focus of the documents helps you write more precise queries.
Example:
If your Data Source contains product manuals, your queries should explicitly reference product names or sections like "warranty" or "setup instructions."
Write concise and focused queries to ensure the AI retrieves the most relevant results. Avoid overly broad or ambiguous prompts.
Examples:
Broad Query: Tell me about this product.
Specific Query: What are the warranty terms for the {{productName}}?
Include specific instructions or context in your query to guide the retrieval process.
Tailor the query to focus on a specific part of the Data Source to improve accuracy. For large Data Sources, specifying a topic or section can yield better results.
Use actionable keywords like "retrieve," "explain," "summarize," or "list" to make the purpose of the query clear.
Unlike traditional web scrapers, this provider is equipped to handle dynamic content rendered with JavaScript. It offers advanced configuration options for greater control over how webpages are scraped.
The Query Data Source Block allows you to retrieve relevant information from a Data Source within your workflow. This block is essential for integrating custom data into your AI workflows through Retrieval Augmented Generation (RAG).
Test your query to ensure it retrieves the intended results. Adjust the wording, variables, or focus as necessary.
Execute custom JavaScript or Python code in a workflow
The Run Function Block empowers your workflows with the ability to execute advanced, custom logic, enabling deeper integration and greater flexibility. This block ensures workflows remain adaptable to your most specific needs. This block is ideal for performing complex calculations, integrating with external APIs, or processing data dynamically.
Select the Function you want to execute or create a new one.
Select a Function: Use the dropdown menu to choose an existing custom function from Explorer.
New Function: Click the New... button to create a new function in either JavaScript or Python. New Functions are automatically added to the Functions folder in your Explorer.
Click Browse community functions... to import functions submitted by members of the MindStudio community into your project.
Note: Community functions are not maintained by MindStudio. It’s essential that you test any community functions before incorporating them into your workflows. For updates to a function, you’ll need to reach out to its creator.
All Functions are custom coded and will have different configurations depending on the function that is selected for use in this block.
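As a generic sketch of the kind of logic a Run Function block might wrap, the Python function below computes summary statistics from a list of order totals. The function and variable names are illustrative only; the exact input/output conventions depend on how the function is configured in MindStudio.

```python
def summarize_orders(totals):
    """Return count, sum, and average for a list of order totals."""
    if not totals:
        # Avoid division by zero on an empty input list.
        return {"count": 0, "total": 0.0, "average": 0.0}
    return {
        "count": len(totals),
        "total": round(sum(totals), 2),
        "average": round(sum(totals) / len(totals), 2),
    }

print(summarize_orders([19.99, 42.50, 7.25]))
```

The result could then be saved to a workflow variable and referenced by later blocks.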
Route block paths based on human interaction
The Menu Block allows you to present users with a question or statement and multiple options to choose from. Based on the selection, the workflow will route to a corresponding block path. It is ideal for guiding user interactions, creating decision points, and defining different workflow paths.
Enter the question or statement to display to the user. Use clear and concise language to ensure the user understands the options presented.
Example: “What would you like to do next?”
Define the selectable options for the user and map each option to a specific route in the workflow.
Add options using the + button.
Each option can trigger a unique path or action in the workflow.
Example Options:
“View Account Details” → Route to the account information workflow.
“Contact Support” → Route to a support request form.
Add Options: Use the + button at the bottom of the Options list to add new options. Each option represents a distinct choice the user can select.
Click the Circle: Click the circle next to the option you want to route. The circle will highlight, indicating that the option is active for routing.
Click on the Target Block: On the workflow canvas, click on the block you want the option to connect to.
Confirm the Connection: Once the route is connected, the circle next to the option will fill in completely.
Keep Options Clear: Use descriptive labels for each option to help users make informed choices.
Test Routes: Verify that each option routes correctly to its intended workflow segment.
The Menu Block simplifies decision-making by providing clear options and directing users to the appropriate workflows, making it an essential component for interactive processes.
Route block paths based on AI decision making
The Logic Block uses AI to dynamically decide which route to take based on the most likely condition. Unlike the Menu Block, where users make the choice, the Logic Block evaluates predefined conditions and autonomously selects the appropriate path. This is ideal for automating decisions without direct user input.
Define the conditions the AI will evaluate to determine the most likely path. Each condition corresponds to a case that the workflow can route to.
Use the + Add Condition button to define multiple cases. Each condition should describe a unique scenario the AI can evaluate. Conditions should incorporate variables to make routing dynamic and context-aware.
Example 1: Customer Sentiment
Case #1: The customer is satisfied with the service based on the following interaction: {{message_transcript}}
Case #2: The customer is requesting support based on the following interaction: {{message_transcript}}
Example 2: Order Status
Case #1: {{order_status}} == "complete"
Case #2: {{order_status}} == "incomplete"
For each condition, click the Select Destination button and choose a block on the canvas where the Logic Block should route if that condition is selected. Once connected, the circle next to the destination button will fill in, confirming the route.
Configure how the AI evaluates the conditions.
The Logic Block uses AI engines to evaluate conditions and make routing decisions. Choose an engine based on your workflow's complexity and precision requirements:
Default Engine
Reliable and general-purpose engine for most use cases.
Optimized for simple workflows requiring consistent and predictable decisions.
Experimental Engine
Suitable for advanced use cases exploring cutting-edge capabilities.
May include beta features, offering innovative decision-making strategies.
Recommended for testing new workflows or unconventional logic setups.
Haiku Engine
Low cost, low latency engine for quick decision making
May not be as accurate as other engines
The Logic Block presents multiple conditions to the AI engine.
Based on the input and context, the AI evaluates the conditions and determines the most likely match.
The Logic Block routes the workflow to the destination associated with the selected condition.
Clearly Define Conditions: Use specific and distinct criteria for each case to avoid overlaps.
Test Scenarios: Simulate workflows to ensure the AI selects the expected routes.
Use Variables: Incorporate variables in conditions to dynamically adjust the decision-making process.
Execute a sub-workflow within a parent workflow
The Run Workflow Block allows you to execute a separate workflow within your main workflow. This block is ideal for running sub-processes and reusing common processes across multiple workflows.
Select the AI Agent containing the workflow you want to run.
Use the dropdown to choose the appropriate AI Agent.
Ensure the selected AI Agent contains the workflow you want to execute.
Choose the specific workflow within the selected AI Agent to execute.
Select from available workflows in the chosen AI Agent.
For modular workflows, ensure the selected workflow is designed for integration.
Define the values to pass to the required variables in the selected workflow.
Assign static values or use variables from the current workflow.
Variable: customer_name
Value: {{userName}}
Map the outputs of the sub-workflow back to variables in the parent workflow.
Assign the variable_name you want to store the sub-workflow's output in. Enter the variable_name without curly braces.
Variable: customer_profile
Mapped to: customer_info in the parent workflow
Before a sub-workflow can be used in the Run Workflow Block, it must be configured with:
Define the launch variables that the parent workflow will provide. Go to the sub-workflow you'd like to run and add Launch Variables in the Start Block of the sub-workflow.
customer_name: The parent workflow will pass the customer name down to the sub-workflow.
order_id: The parent workflow will pass the ID of the customer's order down to the sub-workflow.
Define the Return Data that the sub-workflow will return to the parent workflow. Add these variables to the sub-workflow’s Terminator Block under Return Data -> JSON Output.
customer_profile: Returns customer details retrieved from the sub-workflow.
order_status: Returns the status of the processed order from the sub-workflow.
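Continuing this example, the sub-workflow's Return Data might be defined as JSON key-value pairs like the following (the {{...}} names are hypothetical placeholders for variables produced inside the sub-workflow):

```json
{
  "customer_profile": "{{profile_result}}",
  "order_status": "{{status_result}}"
}
```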
Note: If a workflow lacks these configurations, you will see an error message:
"The selected workflow has not been configured with launch variables and structured outputs."
Ensure Variable Alignment: Make sure that required variables in the sub-workflow are properly assigned with values during execution.
Optimize Your Workflows: Use this block to simplify complex workflows by breaking them into manageable, reusable components.
Test Input and Output Mapping: Verify that all variables are passed correctly between the parent and sub-workflows.
Allow for user review of a specific step in the workflow.
The Checkpoint Block displays a prompt to the user, allowing them to revise variables, accept or reject certain steps in the workflow, and route the workflow accordingly.
Alert: Allows users to accept or reject a workflow step before moving forward.
Revise Variable: Allows users to accept or reject a workflow step and make revisions to variables (i.e. generated content) before moving forward.
This section includes the actual prompt that is being displayed to the user.
Title: Review Assets
Description: Review your generated assets and continue when you are ready to finalize them for download.
Create a custom label to approve the displayed prompt. If this section is left blank, the label will automatically display "Approve".
Route the approve label to the desired block by selecting the Select Destination button and then the desired block destination.
Create a custom label to reject the displayed prompt. If this section is left blank, the label will automatically display "Reject".
Route the reject label to the desired block by selecting the Select Destination button.
This configuration is only visible if the Revise Variable mode is selected.
The variable that will be presented to the user to be accepted or revised. Example: {{My_Var}}
The name of the variable that will be displayed to the user. Example: My Variable
These are the types of content that can be revised
Text: includes any text-based content generated using the Generate Text block.
HTML: includes images and PDFs generated using the Generate Asset block.
In this section, provide any additional context you want the AI to use when making revisions to the content.
Select the AI model to be used when making revisions to any variables.
End the workflow with multiple end behaviors
The Terminator Block marks the endpoint of a workflow. It offers multiple behaviors to customize the user’s experience or finalize the workflow’s output. End behaviors define how the workflow concludes, whether through interaction, document processing, or returning structured data.
Choose one of the following behaviors to define how the workflow ends:
Provides a frontend native chat experience, ideal for conversational workflows.
An introductory message displayed at the start of the chat session. This is for display purposes only and is not included in the AI prompt.
Choose how user inputs are processed before being sent to the AI. Options include no processing or custom strategies.
No processing: Does not pre-process the message in any way.
Data Source: Select or create a Data Source to provide the AI with additional context.
Max Results: Set the maximum number of relevant results retrieved from the Data Source.
Template: Define how retrieved context is combined with the user’s input.
{{queryResult}}: refers to the result of the Data Source query
{{originalMessage}}: refers to the message sent by the end user in the chat
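For example, a simple template (the wording here is illustrative, not a default) might combine the two variables like this:

```
Use the following retrieved context when answering:
{{queryResult}}

User message:
{{originalMessage}}
```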
Model Mixer: Allow users to use multiple LLM models during the chat. For example, a user can send a message and use model mixer to get a response from both Claude 3 Opus and GPT-4o.
Response Editing: Enable users to edit LLM responses in the chat after a response has been returned.
Leverages an AI-assisted Rich Text Editor to revise and enhance documents.
Define the instructions for document revision.
{{selectedText}}: refers to the highlighted text in the document editor.
{{instructions}}: refers to the prompt entered when using AI-assisted revisions.
An introductory message displayed at the start of the chat session. This is for display purposes only and is not included in the AI prompt.
Enables users to chat with documents from a Data Source. Only works for PDF files.
An introductory message displayed at the start of the chat session. This is for display purposes only and is not included in the AI prompt.
Finalizes the workflow and returns output values to the calling function or user.
Email: Enable email notifications when the workflow ends.
When enabled, click on + Add to enter the emails you’d like the notification sent to.
Slack: Configure Slack notifications for workflow completion.
Click Add to Slack to authenticate and link your Slack account.
Once connected, select the desired channel from your Slack workspace you’d like the notification sent to.
Retrieval-Augmented Generation (RAG): Incorporates external context from a Data Source into the AI’s responses. Enabling this strategy opens the following configurations:
JSON Output: Define the final data to return using JSON format. Create key-value pairs and use {{variables}} for dynamic values.
Integrations in MindStudio connect various apps and services to your workflows. You can find all kinds of integrations when adding new blocks to a workflow.
Search for posts on Bluesky
Get general details of a Facebook page
Get data from a Meta Threads profile
Get Comments data from an Instagram post
Get data from mentions on Instagram
Get posts from an Instagram profile
Get data from an Instagram profile
Get reels from an Instagram profile
Make a new post on LinkedIn
Make a new post on X
Search for posts on X by keyword
Retrieve Google search results
Retrieve Google Image search results
Retrieve Google Trends keyword results
Retrieve Google News search results
Retrieve Gmail search results
Creates an email draft in Gmail
Sends an email via Gmail account
Update labels on Google email messages
Create a new Google Doc document
Retrieve content from a Google Doc
Update values of an existing Google Doc
Create a new Google Sheet document
Retrieve content from a Google Sheet
Update values of an existing Google Sheet
Get company & person data via email
Enrich company data using a domain
Find emails for a given website
Find a person's email for a domain
Verify a person's email address
Enrich person's data via email
Fetch captions from a YouTube video
Fetch details from a YouTube channel
Fetch all comments from a YouTube video
Fetch details from a YouTube video
Retrieve YouTube search results
Search for trends by a specified category
Create a page in Notion
Update a page in Notion
Run an Apify actor
Run a specified scenario on Make.com
Send a message to a specified Slack channel
Route the current workflow to a completely different workflow
The Jump Block enables you to route the current workflow to a completely different workflow. This block is ideal for modularizing workflows or reusing common processes across multiple workflows.
Select the Workflow: Use the dropdown menu to choose an existing workflow.
Create a New Workflow: Click the New... button to create and configure a new workflow directly. Once created, it will automatically be available in the dropdown.
Send messages from your AI Agent directly to a Slack channel
The Post to Slack Block allows you to send messages directly to a Slack channel as part of your workflow. This block is perfect for automating updates, team notifications, or alerts with customizable formatting.
Choose how the message will be formatted before sending to Slack:
Markdown Text: Use Slack-compatible Markdown to create formatted text messages (e.g., *bold*, _italic_, > for blockquotes).
Slack Block Kit Blocks: Design a highly customized message layout using Slack's Block Kit. Block Kit enables advanced formatting, such as interactive elements, sections, dividers, and more. Learn more at Slack Block Kit Documentation.
Connect to a Slack channel where the message will be posted.
Click Add to Slack to authenticate and link your Slack account.
Once connected, select the desired channel from your Slack workspace.
Compose your message in markdown or build your custom Slack block.
Slack blocks are modular and interactive components used to create rich, visually engaging messages in Slack. With the Slack Block Kit, you can stack and arrange blocks to design powerful message layouts that deliver information or enable user interactions.
Blocks are the building units for creating structured, visually appealing Slack messages. They are stackable components designed to display text, images, buttons, and other elements in a flexible, layout-friendly way.
Blocks are the core components used to structure and organize the content of your Slack messages.
Actions
Contains interactive elements like buttons and menus.
Context
Displays small contextual information with images or text.
Divider
Adds a horizontal line to separate content.
File
Displays information about remote files.
Header
Displays large, bold text for headings.
Image
Displays standalone images.
Input
Collects user input via various input types.
Rich Text
Allows formatted and structured text.
Section
Displays text alongside optional block elements like buttons or images.
Video
Embeds a video player.
Block elements are the interactive or visual components embedded inside blocks to enrich the functionality of your message.
Button
Provides users with a direct path to performing actions like confirming tasks.
Checkboxes
Allows users to select multiple options from a list.
Date/Time Pickers
Enables users to select a date, time, or both.
Dropdown Menus
Lets users choose from a list of options.
Plain Text Input
Allows users to enter freeform text.
Radio Buttons
Limits users to selecting one option.
Image
Displays an image as part of a larger block of content.
Overflow Menu
Provides a button that shows a list of additional options.
Multi-Select Menu
Lets users select multiple options from a dropdown list.
Composition objects allow you to enhance the structure of blocks and elements, enabling even more customization and interactivity.
Confirmation Dialog Object
Adds a confirmation step to interactive elements like buttons.
Conversations Filter Object
Filters the list of options in conversation selector menus.
Option Object
Represents a single item in a list of options for selection elements.
Option Group Object
Groups options in select menus for better organization.
Text Object
Defines text formatting for different blocks and elements.
Workflow Object
Contains workflow trigger information for running specific workflows.
Slack File Object
Represents a file for use in Image or File blocks.
Slack’s Block Kit Builder lets you visually design your blocks, test layouts, and prototype quickly. It provides:
Drag-and-drop elements to stack and organize blocks.
Previews to refine your design.
Click the Copy Payload button to copy the JSON, then paste it into your app.
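As an illustration, a small Block Kit payload combining several of the block types above might look like this (the text content is made up):

```json
{
  "blocks": [
    {
      "type": "header",
      "text": { "type": "plain_text", "text": "Weekly Report Ready" }
    },
    { "type": "divider" },
    {
      "type": "section",
      "text": { "type": "mrkdwn", "text": "*Status:* complete\n*New signups:* 42" }
    }
  ]
}
```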
Learn about general workspace controls and navigation
The MindStudio workspace is your central hub for managing AI Agents, integrations, and account settings. Here's a high-level overview of the workspace controls:
Displays all AI agents available to run. This includes AI agents that are automatically available to any MindStudio user as well as any agents you have built yourself.
Displays a history of all runs made by your AI Agents and their outputs.
Monitors the usage of your AI Agents to analyze performance and resource consumption.
Displays all published AI agents and drafts. This is the primary area to manage your workflows, edit drafts, and view published agents.
Allows you to manage any workspace settings such as workspace name, integrations, and team members.
Manage subscription plans, billing details, and payment methods for your account.
Access advanced features like API documentation and serverless function integrations.
Access MindStudio's help resources or contact support for assistance.
Links directly to our community forum.
This is where you can access and run all available AI agents.
This section displays the AI agents you frequently use, giving you quick and easy access to your favorites.
This section shows all the AI agents you've built and published. Drafts, however, will not appear here.
Other agents available to run will be displayed in the categories below:
Research
Analyze Content
For Creators
For Students
YouTube
For Venture Capitalists
Image Generation
For Developers
Chat with Web Pages
Miscellaneous
This is where you will find all of your published AI Agents and drafts.
Header Controls
Search Bar: Quickly locate specific AI Agents by name.
Filters and Sorting: Refine the displayed AI Agents based on activity, publish date, or name.
Create New Agent: Creates a new AI Agent project and opens the Editor.
Grid/List Toggle: Switch between grid view (for a visual layout of AI Agent cards) and list view (for a more compact, detail-oriented display).
Displays all published AI Agents, including their run history and statuses. Each card shows:
AI Agent Name
Recent Activity (e.g., last run date)
Total Runs
Agent Card Quick Actions:
Edit Agent: Opens the AI Agent editor, allowing you to modify the workflow or settings.
Copy Agent ID: Copies the unique identifier of the AI Agent to your clipboard, useful for API integrations or advanced configurations.
View Logs: Access detailed logs of the AI Agent's activity to troubleshoot issues or review performance.
Make a Copy: Creates a duplicate of the AI Agent, enabling you to reuse or modify it without affecting the original.
Delete Agent: Removes the AI Agent from the workspace. This action is permanent and should be used cautiously.
Click on your Workspace name at the bottom left of the side panel.
Select Create New Workspace.
Click on your Workspace icon at the bottom left of the side panel.
Select the Workspace that you'd like to view.
Manage, track, and publish new versions of your AI Agent in MindStudio.
Publishing is the last step in preparing an AI Agent for deployment. In this article, we’ll cover all of the AI Agent Settings you can configure before publishing your AI Agent.
Publishing creates a versioned release of your AI Agent, locking in its current configuration. Once published, this version becomes accessible to users and collaborators.
Review the AI Agent’s metadata and configuration for accuracy.
Click the Publish button in the top-right corner.
Open the current version of your AI Agent.
Note: Before publishing, ensure you've opened up the AI Agent Settings and verified all metadata fields are correctly configured. This includes checking the name, description, API function name, icons, usage limits, and sharing settings.
AI Agent Settings provide configuration options for customizing, controlling, and managing your AI Agent. These settings are organized into several key sections that allow you to define everything from adding basic metadata (like name and description) to usage limits and sharing permissions.
Properly configuring these settings is crucial for ensuring your AI Agent functions as intended and is accessible to the right users.
Navigate to the Explorer Tab on the left.
Click on the Root File at the top of the Explorer Tab to open the AI Agent Settings.
The Details tab under the General section allows you to define key metadata and identifiers for your AI Agent. Make sure the AI Agent is properly named, described, and configured for external integration.
Enter a clear and concise name for your AI Agent. This name is displayed throughout the MindStudio platform and in shared links.
Provide a brief description of your AI Agent’s purpose and functionality. This helps collaborators and end-users understand its role at a glance.
Specify a custom API function name if you plan to invoke the AI Agent programmatically via the MindStudio NPM package. This is particularly useful for integrating the AI Agent into larger systems.
A unique identifier automatically assigned to your AI Agent. This ID is used for backend and API integration purposes. Click the copy icon to copy the Agent ID for use in development or debugging.
The Icons and Media tab lets you customize the visual representation of your AI Agent, both within the platform and when shared externally. This section ensures your AI Agent is visually distinct and branded appropriately.
Upload an image to serve as the primary icon for your AI Agent. This icon is displayed in the MindStudio interface and associated with the AI Agent in all contexts.
Recommended Size: 500x500 pixels.
File Types Supported: PNG, JPEG.
Add an image to represent your AI Agent when it’s shared on social media platforms or messaging apps.
Recommended Size: 1200x630 pixels.
File Types Supported: PNG, JPEG.
The Usage Limits tab allows you to set financial and operational boundaries for your AI Agent and its users. This ensures that your AI Agent operates within defined budgets, avoiding unexpected costs or overuse.
Set a maximum spending limit for the AI Agent in a calendar month. If the Agent exceeds this budget, it will be suspended until the next month.
Example: Set this value to $100 to restrict the AI Agent’s operational costs to $100 per month.
Define a spending cap for individual users interacting with the AI Agent. If a user exceeds this limit within a calendar month, their access will be suspended for that period.
Example: Set this value to $10 to limit each user’s spending to $10 per month.
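The two limits combine into a simple suspension rule. The sketch below illustrates the behavior described above and is not MindStudio's actual implementation:

```javascript
// Returns true if a run should be suspended for the current calendar month.
// agentSpend / userSpend are month-to-date totals in dollars; agentBudget and
// userBudget are the configured Usage Limits.
function shouldSuspend(agentSpend, agentBudget, userSpend, userBudget) {
  const agentOver = agentSpend > agentBudget; // Agent exceeded its monthly budget
  const userOver = userSpend > userBudget;    // this user exceeded their personal cap
  return agentOver || userOver;
}

console.log(shouldSuspend(95, 100, 4, 10));  // false: both within budget
console.log(shouldSuspend(120, 100, 4, 10)); // true: Agent exceeded its $100 budget
console.log(shouldSuspend(50, 100, 12, 10)); // true: user exceeded their $10 cap
```

Either condition alone is enough to suspend usage until the next calendar month begins.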
The Sharing tab allows you to configure how your AI Agent can be accessed and shared with others.
When enabled, others can create a copy of your AI Agent and modify it to build their own version. Remixing is enabled by default.
Note: Enabling this setting makes your AI Agent publicly remixable by anyone.
When enabled, restricts access to your AI Agent with a password. Only users with the correct password can interact with the Agent. Ideal for limiting access to specific teams or individuals.
The Transfer tab allows you to reassign ownership of an AI Agent to a different workspace. This is particularly useful when moving AI Agents between personal and organizational workspaces or consolidating assets under a specific team.
Select the destination workspace to which the AI Agent will be transferred. Once an AI Agent has been transferred, this action cannot be undone.
Note on Ownership: Each AI Agent is tied to a workspace, and the selected workspace will assume ownership and be billed for its usage.
The Versions tab provides a comprehensive history of your AI Agent’s lifecycle, allowing you to manage and track both drafts and published versions effectively.
Displays a list of all previously published versions, sorted chronologically.
Metadata:
Version Name: Helps identify specific releases (e.g., Version #1, Version #2).
Publisher's Name: Identifies who published the version.
Timestamp: Shows the exact time of publication.
Change Log: An AI-generated list of changes.
Controls:
Open: Click to view the current published version of the AI Agent.
Restore Version: Visible on hover. Replaces the live version with the selected version and deletes any changes made to the draft.
Lists the current draft version under development. Any changes you make to your project before publishing a new version are saved to this draft.
Delete Draft: Removes all changes made to the current draft and restores the Agent to the current published version.
Drafts (yellow icon): Represents ongoing work.
Published Versions (green checkmark): Indicates live and accessible versions.
Older Versions (gray checkmark): Indicates previously published versions.
Manage your workspace identity, integrations, and settings
The Workspace Settings page allows you to manage general information and integrations for your workspace.
Update or edit the name of your workspace. This is the name that identifies your workspace across the platform.
Specify the name of the company associated with the workspace for display purposes.
Optionally upload a custom logo to personalize your workspace. Logos visually represent your workspace in the platform and can be changed or removed as needed.
After making edits to the workspace name, company name, or logo, use the "Save Changes" button to apply updates.
A unique identifier for the workspace is displayed at the bottom. This ID can be copied and used for API integrations or when connecting external applications.
Manage your personal information
The Account Settings page allows you to manage your personal information, preferences, and account-related controls efficiently. Here's a breakdown of what you can do in this section:
Navigate to the top-right corner of the screen.
Click on your profile picture.
From the dropdown menu, select Account Settings.
This will take you directly to the Account Settings page, where you can manage all aspects of your account.
Update your email address to ensure you receive important communications. Click the pencil icon to edit.
Secure your account by updating your password as needed. Click the pencil icon to make changes.
Change your username to better reflect your identity within the platform. Click the pencil icon to edit.
Customize your display name, which is visible to other users. Click the pencil icon to make changes.
Add a personal touch by uploading or updating your profile picture. Click the pencil icon to upload a new image.
Manage your email notification preferences. Use the toggle switch to turn email updates on or off.
Access the privacy policy request form to manage data-related inquiries or requests.
Permanently delete your account if you no longer wish to use the platform. Click the red Delete Account button to proceed with this action.
Get insights on usage and spend.
The Usage Explorer provides in-depth insights into your MindStudio workspace's usage and spending, helping you monitor and optimize your resources. The Usage Explorer is ideal for:
Identifying cost-saving opportunities.
Monitoring Agent or model usage.
Allocating spending across teams or workflows.
To access the Usage Explorer, navigate to the left sidebar in your workspace and click on the Usage Explorer tab.
The Usage Explorer consists of several components that offer detailed information about your workspace's activity and spending patterns:
A bar graph displays daily spending for the selected period, giving a clear visual representation of how usage fluctuates over time. You can change the time range using the dropdown at the top of the page.
A pie chart shows the spend distribution for the selected period, allowing you to quickly identify the biggest contributors to your costs.
Below the visualizations, you can choose filters to view usage and spend by Agent, User, or Model.
Agent name: The name of the AI Agent.
Runs: The total number of times the Agent was executed.
Users: The number of unique users who interacted with the Agent.
Spend: The total cost associated with that Agent.
Clicking the caret next to the AI Agent will open the AI Agent.
See which team members are generating the most activity and spending.
User Name: The name of the workspace member.
Runs: The total number of runs initiated by the user.
Spend: The total cost attributed to that user.
Clicking the caret next to the User will open an accordion to show a drill down of usage for the selected user.
Analyze spending across various AI models to optimize your deployments.
Model Name: The name of the AI model.
Runs: The total number of runs that used the model.
Spend: The total cost associated with that model.
Clicking the caret next to the Model will open an accordion to show a drill down of usage for the selected Model.
Click the Export button in the top-right corner to download usage data for further analysis or record-keeping. The export includes all metrics visible in the current view.
Regularly check your Spend Breakdown to manage costs effectively.
Use the Usage by Agent table to identify high-cost workflows and optimize them.
Export data periodically to track spending trends over time.
Manage collaboration and sharing within your workspace
A MindStudio Workspace has different Roles and Access Controls to manage collaboration and sharing within your workspace. Whether assigning workspace-wide roles or sharing access to individual AI Agents, you can ensure every user has the right level of permissions for their responsibilities. This guide explains the available roles and access controls in MindStudio.
Workspace roles define access levels for managing both the workspace and its AI Agents. These roles are designed to support various levels of responsibility, from contributors to administrators.
Inviting people to your workspace in MindStudio is simple and ensures your team members have the correct roles for effective collaboration.
Navigate to the Team section of your workspace.
Click the Invite button in the top-right corner.
Enter the Email Address of the person you want to invite in the provided field.
Select the Role by using the Role dropdown menu.
Click the Add button to send the invitation.
Role | View AI Agents | Edit AI Agents | Manage Users | Workspace Settings Access
Owner | All | All | All | Yes
Admin | All | All | All (except Owners) | Yes
Member | All | Only their own | No | No
Guest | Invite Only | Invite Only | No | No
On the Team Page, you can manage the permissions of guests for specific AI Agents, allowing you to adjust their access levels or remove access as needed.
Navigate to the Team Page of your workspace. Then find the guest whose access you want to manage in the list.
Click the Manage button to open the Manage User Access modal.
Adjust Access Levels
Review the AI Agents the guest has access to, listed in the modal.
Use the dropdown next to each AI Agent to update the access level:
Use Only: Limits the guest to viewing and using the AI Agent.
Edit Access: Allows the guest to edit the AI Agent.
(optional) To remove the guest entirely, click the red trash icon next to their role.
Navigate to the Team Page of your workspace. Locate the list of team members.
Find the member whose role you'd like to modify.
Use the dropdown menu next to the user’s current role to select a new role. Changes are applied immediately after selecting the new role.
The Member role is ideal for contributors who create and collaborate within the workspace but do not need administrative permissions.
Capabilities:
Create new AI Agents.
View all AI Agents in the workspace, including those they did not create.
Invite Guests to view and edit AI Agents that they create.
Restrictions:
Cannot edit AI Agents created by others unless explicitly invited.
Cannot share Edit Access for AI Agents they do not own.
Cannot access any workspace settings pages.
The Admin role allows for broader workspace management, including user and workspace settings management, without full ownership authority.
Capabilities:
View and edit all AI Agents.
Access and modify all workspace settings.
Invite and manage Admins, Members, and Guests.
Downgrade or remove Admins, Members, and Guests.
Restrictions:
Cannot remove or downgrade Owners.
The Owner role has complete control of a workspace.
Capabilities:
View and edit all AI Agents.
Access and modify all workspace settings.
Invite and assign any role (Owners, Admins, Members, Guests).
Downgrade or remove any role (Owners, Admins, Members, Guests).
MindStudio also provides controls for granting Guest access to specific AI Agents without workspace-wide permissions.
You can add new guests with specific permissions using the Sharing & Access Controls modal. Here’s how to do it:
Open the AI Agent you want to share.
Click the Share button to bring up the Sharing & Access Controls modal.
Enter the Guest’s Email
In the text box labeled "Enter email address," type the guest’s email address.
Click Invite
Once the email is entered and the access level selected, click the Invite button.
Choose a Permission Level
Select the desired access level from the dropdown menu:
Edit Access: Allows the guest to edit the AI Agent.
Use Only: Limits the guest to viewing and using the AI Agent without editing.
Optional: Generate an Invite Link
Use the Invite Link section to generate a sharable link for the guest.
The default access level for the link can be set to Use Only or Edit Access using the dropdown menu.
Click Copy to share the link.
Once guests are added, their details appear in the modal under "People with Access."
Adjust Permissions: Use the dropdown next to a guest’s name to change their access level between Edit Access and Use Only.
Remove Access: Click the red trash icon to remove a guest’s access entirely.
A Guest with Use Only access is designed for users who only need to run the AI Agent, not modify it.
Capabilities:
View and use specific AI Agents they are invited to.
View the workspace they are invited to.
Guests can only see the specific AI Agents that they have been invited to.
Restrictions:
Cannot edit AI Agents.
Cannot view all AI Agents in the workspace.
Cannot view editor controls.
Cannot share links or manage permissions.
Cannot access other AI Agents or workspace settings.
A Guest with Edit Access role allows for collaborative editing of specific AI Agents without granting workspace-wide access.
Capabilities:
Edit AI Agents they are explicitly invited to.
View editor controls for the invited AI Agents.
Guests can only see the specific AI Agents that they have been invited to.
Restrictions:
Cannot view or access workspace settings.
Cannot edit or access AI Agents they have not been invited to.
Cannot invite or manage other users.
Cannot share links or manage permissions.
Cannot access other AI Agents or workspace settings.
Manage your workspace’s financial activities
The Billing Settings section in MindStudio provides comprehensive tools to manage your workspace’s financial activities, including monitoring balances, setting budgets, managing payment methods, and accessing receipts.
Your workspace's current account balance is shown prominently. You can add funds manually by clicking Add Funds.
Auto-recharge simplifies balance management by ensuring your workspace always has sufficient funds.
Trigger Threshold: Specify the balance level at which the auto-recharge activates (e.g., $25.00).
Recharge Amount: Set the amount to bring the balance back up to (e.g., $100.00).
Payment Method: Select a saved payment method for automatic transactions. Enable or disable auto-recharge using the corresponding controls.
Make sure to click Save after making any changes to your auto-recharge settings.
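The auto-recharge behavior described above can be sketched as a simple top-up rule. This is an illustration of the logic, not MindStudio's actual billing code:

```javascript
// When the balance falls to or below the trigger threshold, enough funds are
// added to bring the balance back up to the recharge amount.
function autoRechargeTopUp(balance, triggerThreshold, rechargeAmount) {
  if (balance > triggerThreshold) return 0;    // balance is healthy; do nothing
  return Math.max(0, rechargeAmount - balance); // top up to the recharge amount
}

console.log(autoRechargeTopUp(20, 25, 100)); // balance $20 <= $25 threshold: charge $80
console.log(autoRechargeTopUp(50, 25, 100)); // balance above threshold: charge $0
```

With a $25 threshold and a $100 recharge amount, a workspace that dips to $20 is charged $80 so the balance returns to $100.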
Set a maximum budget for the entire workspace for a given calendar month. Once the workspace exceeds this limit, subsequent usage will be suspended until the next billing period. Enter the desired budget amount in the provided field and click Save to apply the changes.
Define spending limits for individual users within the workspace. If a user exceeds their personal usage budget for any app during the calendar month, their usage will be suspended until the next month. Adjust this value for tighter control of individual expenditures and click Save to confirm the updates.
The billing address associated with your payment methods is displayed at the top of this page. Ensure this information is accurate for compliance and invoicing purposes.
A list of saved payment methods is shown under the Payment Method section. Each method displays key details, including:
Card Information: Shows the last four digits of the saved card and its expiration date.
Default Method: The card designated as the default for auto-recharge is marked here.
Actions: Use the trash icon to delete a payment method, if needed.
To add a new payment method, click on the New Payment Method button and follow the prompts to enter your card details. Once saved, you can set the new method as the default.
Each transaction is listed with the following columns:
Date: The date when the transaction occurred.
Summary: A brief description of the transaction (e.g., "New Member Added").
Amount: The monetary value associated with the transaction. Positive values indicate credits, while negative values represent charges.
For each transaction, a Download link is available in the rightmost column. Clicking this link allows you to download a detailed receipt for the selected transaction. This feature is particularly useful for maintaining records or for financial reporting purposes.
With this streamlined layout, the Receipts page ensures all financial details are readily accessible, making workspace management simple and efficient.
Tools to identify issues and optimize workflows
MindStudio's Testing Suite provides a comprehensive set of tools designed to enhance the quality and performance of your AI Agents. By using these tools, you can identify issues, optimize workflows, and ensure that your AI Agents meet the highest standards of reliability and efficiency. The suite includes:
Evaluations allow you to systematically test the behavior of your AI Agents. By defining specific scenarios and expected outcomes, you can validate the functionality of your workflows and ensure consistent performance. This tool is particularly useful for quality assurance and regression testing during iterative development.
The Debugger helps you troubleshoot issues in real-time by providing detailed logs and insights during the execution of your workflows. Whether you're investigating a specific problem or monitoring the flow of data through your Agent, the Debugger gives you a step-by-step breakdown of what’s happening under the hood, making it easier to identify and resolve issues.
The Profiler enables side-by-side comparison of model outputs, helping you analyze and evaluate how different model settings perform under similar inputs. Comparisons help you evaluate AI models against criteria including cost, latency, quality, and context.
MindStudio's Evaluations feature enables you to rigorously test the accuracy and consistency of your workflows. By creating structured tests, you can validate expected outcomes, identify areas for improvement, and ensure your workflows are functioning as intended.
To use the Evaluations tool effectively, your workflow must meet the following requirements:
Launch Variables: The workflow should be configured with launch variables to define inputs.
End Block: The workflow must contain an End block to return outputs.
Evaluations are structured test cases you can create to validate your workflow's output. There are two main ways to create evaluations:
Define each test case from scratch by specifying the input variables and expected results. This approach gives you complete control over each evaluation.
Click the Generate button to automatically populate evaluation cases using default or existing input data. Choose the number of test cases and optionally give the generator additional context about the test cases you want generated.
Once you've created your evaluations, you can run them all at once or individually to test your workflow's output against the defined expectations.
Click Run all at the top left to run all of the test cases at the same time. Results may take more time to appear depending on the size of the workflow and how many test cases you created.
Hover over the left side of the test case row and click on the Play icon to run an individual test case. Ideal if you don’t want to rerun previous tests.
Each evaluation consists of three main components:
The set of variables or data points that the workflow will process.
The anticipated output for the given input. This can be configured for:
Literal Match: The output must exactly match the expected result.
Fuzzy Match: The output can vary slightly and still be considered correct if it meets specified criteria.
The actual output produced by the workflow, displayed alongside the expected result for comparison.
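To make the two match modes concrete, here is a hypothetical sketch of how an expected result might be compared against an actual output. MindStudio's internal scoring criteria are not documented here, so treat the normalization rules as illustrative assumptions:

```javascript
// Literal match: the output must exactly equal the expected result.
function literalMatch(expected, actual) {
  return expected === actual;
}

// Fuzzy match: the output may vary and still pass if it meets a looser
// criterion -- here, case- and whitespace-insensitive containment.
function fuzzyMatch(expected, actual) {
  const norm = (s) => s.toLowerCase().replace(/\s+/g, ' ').trim();
  return norm(actual).includes(norm(expected));
}

console.log(literalMatch('Hate Speech or Offensive Content',
                         'hate speech or offensive content')); // false: case differs
console.log(fuzzyMatch('Hate Speech or Offensive Content',
                       'Classification: Hate Speech or Offensive Content.')); // true
```

A literal match is appropriate for classification labels; a fuzzy match suits free-form outputs where minor wording differences are acceptable.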
Evaluations can be exported to a CSV file for sharing or further analysis by clicking the Export button at the bottom right. The export includes all inputs, expected results, and actual results, making it easy to communicate Evaluations to team members or stakeholders.
In this example, we'll validate the accuracy of a Content Moderation workflow that classifies user-generated content. Each evaluation tests the workflow's ability to label inputs accurately:
Input: A piece of text or content submitted for moderation, such as:
"This review is just copied from another site. Plagiarism!"
"I hate this product and the company that makes it. They are the worst!"
Expected Result: The anticipated classification for each input, such as:
Plagiarism or Copyright Violation
Hate Speech or Offensive Content
Result: The actual classification provided by the workflow.
Evaluations will indicate whether the workflow correctly classifies content into categories such as "Clear," "Spam or Promotional Content," or "False or Misleading Information."
For example, the workflow should identify "I hate this product and the company that makes it. They are the worst!" as Hate Speech or Offensive Content.
The page offers an overview of your current billing plan, balance, and auto-recharge settings. Here are the primary controls available:
Your current subscription plan is displayed here. If you're interested in upgrading, click to view available options.
The page allows you to manage your workspace's spending by setting monthly limits for the entire workspace and individual users. This ensures that you maintain control over costs and receive notifications if limits are exceeded.
The page is where you can manage the billing details and payment options for your workspace. Having a saved payment method ensures a seamless process for recharging your account balance and avoiding interruptions in service.
The page provides a detailed record of all billing transactions for your workspace. This section is designed to help you track payments, usage charges, and other billing activities, ensuring transparency and ease of record-keeping.
The Debugger is a critical tool in MindStudio for testing, troubleshooting, and optimizing workflows. It allows users to examine the execution of their workflows step-by-step, identify issues, and ensure that the expected results are achieved. This tool is especially valuable for understanding the flow of variables, analyzing billing events, and testing how various blocks interact within the workflow.
Open the desired workflow in MindStudio. Switch to the Debugger tab from the workflow interface.
Use test inputs or variables to trigger the workflow. The workflow execution will be logged in real time. You can run the workflow in many different ways outlined below.
Review the action logs to verify the workflow behaves as expected. Check the Billing Events section to ensure cost efficiency. Observe the Runtime Variables panel to confirm variable values are correctly updated at each step.
Identify blocks with errors or unexpected behavior. Adjust inputs, fix errors, and re-run the workflow to validate changes.
Use the Export button to download debugging data for documentation or collaborative troubleshooting.
There are several ways to run and test workflows in MindStudio's debugger, each suited for different testing scenarios and debugging needs. The debugger provides flexible execution options that let you validate your workflow's functionality, from testing the entire flow to focusing on specific components.
Follow these steps to test a workflow:
Click the Preview button at the top right corner of the screen to open the preview menu. There are 3 options depending on the block(s) you have selected:
Run in debugger: Executes the entire workflow from start to finish in the debugger.
Start from selection: Executes the workflow starting at the selected step (block), skipping all preceding steps.
Run Selection: Executes only the selected portion of the workflow, limited to one or more connected blocks.
Note: You can select one or multiple blocks by clicking and dragging to create a selection box around them, or by holding Shift while clicking individual blocks. This selection will define the scope of your test execution.
Enter the necessary variable data into the provided input fields within the modal. This information is required for the workflow's execution.
After entering the variable data, click the Run button at the bottom of the modal. This will execute the workflow starting from the selected block.
Once the test starts, the Run Log for the selected portion of the workflow will appear at the bottom of the Automations tab. Outputs generated during the test are automatically stored in the debugger, and these logs are accessible from the main Debugger panel, where all execution logs are stored for reference.
The Debugger interface is organized into distinct sections that work together to provide comprehensive insights into workflow execution. Each component serves a specific purpose in helping users monitor, analyze, and troubleshoot their workflows effectively.
The Runs Panel is located on the left side of the debugger interface and provides comprehensive information about workflow executions. This panel is divided into two main tabs: Runs and API Logs.
Runs: Displays all workflow executions initiated within the workspace. This includes executions triggered by users or systems, providing details like the workflow name, date, and time.
API Logs: Shows detailed API interactions for each workflow run, offering an in-depth look into HTTP requests and responses for debugging purposes.
The Run Logs on the Runs screen provide a step-by-step breakdown of the execution of a workflow. They include real-time tracking of variables, system messages, and output generation. Here's a detailed breakdown of the components:
Timestamp: Indicates when the action started.
Run ID: A unique identifier for the run, useful for tracking and debugging.
Workflow: The name of the workflow being executed, clearly displayed for quick identification.
Duration: Total time taken for the execution of this action, shown on the far right.
Each action or event in the workflow is logged in order of execution. Key types of logs include:
Shows when variables are set or updated during the run.
Example: Setting {{currentDate}} to "Dec 8, 2024 7:29 PM".
Indicates updates to globally scoped variables, such as user ID, thread ID, or API keys. As the workflow executes, runtime variables are updated and displayed in real time in the right-hand panel.
Example: Setting {{global.username}} to "Luis".
Displays inputs provided during the execution, including prompts, parameters, or configuration values.
Explains actions for each step in the workflow such as loading a model, resolving a variable, or querying an external API, as well as the output of that step.
Example: Sending message: "Generate a name, function name, and description for the following prompt...".
Logs the invocation of custom functions, including function name and input parameters and displays results returned by the function.
Key updates are color-coded for clarity:
Green: Successful actions.
Red: Errors or failures.
Every action is broken down into Billing Events, including:
Token usage details for inference prompts and responses.
Costs associated with specific actions, including external services like image generation or language models.
A total cost summary for the entire workflow execution.
The Runtime Variables panel on the right-hand side shows the current state of variables as the workflow progresses and how they are updated at each step, offering transparency into how data flows through the workflow.
The API Logs tab captures and displays all API interactions related to workflow executions. In addition to the Run Logs, each API log also includes:
HTTP Method: POST or GET for each API request.
Request Details: The IP address, API key, and User Agent used in the call.
Status Code: Indicates success or failure of each API call (e.g., 200 for success).
Displays a sequential breakdown of actions performed during the run, including variable settings, programmatic messages, and system interactions.
Contains the raw payload sent during the API call, detailing the workflow ID and input variables used for the execution.
Shows the output returned from the workflow, highlighting success status, results, and any errors.
NPM Snippet: Ready-to-use JavaScript code that initializes the MindStudio client, executes the workflow, and retrieves results.
Raw Fetch Code: Shows a plain JavaScript fetch example for invoking the API directly.
cURL Command: A command-line example for making API calls with all necessary headers and body.
You can export the debugging logs by clicking on the Export button at the bottom right of the Run Logs for further analysis or to share with your team.
Run AI Agents via API request.
The MindStudio API enables you to invoke workflows programmatically, allowing AI workflows to be integrated as steps in larger automation processes.
Executes a specified app with given variables and optional workflow.
This endpoint requires Bearer token authentication.
Request body parameters:
workflow (string, optional): The workflow to run (without the .flow extension).
variables (object, optional): The variables to pass to the app.
callbackUrl (string, optional): The URL to receive the execution result.
appId (string, required): The ID of the app to run.
The response will be one of two possible formats:
1 - For asynchronous execution (when callbackUrl is provided):
2 - For synchronous execution:
400 Bad Request: The request was invalid or cannot be served.
500 Internal Server Error: The server encountered an unexpected condition that prevented it from fulfilling the request.
Note: Replace YOUR_ACCESS_TOKEN and YOUR_APP_ID with your actual Bearer token and Agent ID, and adjust the request body according to your specific app requirements.
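The request described above can be sketched in plain JavaScript. The endpoint URL and the workflow name "Main" below are placeholder assumptions, not documented values; consult the MindStudio API reference for the actual base URL:

```javascript
// Build the request options for the "run" endpoint. Only appId is required;
// workflow, variables, and callbackUrl are optional.
function buildRunRequest(appId, { workflow, variables, callbackUrl } = {}) {
  const body = { appId };
  if (workflow) body.workflow = workflow;          // without the .flow extension
  if (variables) body.variables = variables;       // launch variables for the app
  if (callbackUrl) body.callbackUrl = callbackUrl; // enables asynchronous execution
  return {
    method: 'POST',
    headers: {
      Authorization: 'Bearer YOUR_ACCESS_TOKEN', // replace with your token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}

// Synchronous execution: omit callbackUrl and await the result directly.
const request = buildRunRequest('YOUR_APP_ID', {
  workflow: 'Main',                // hypothetical workflow name
  variables: { topic: 'Dogs' },
});
// fetch('https://api.mindstudio.example/apps/run', request) // placeholder URL
//   .then((res) => res.json())
//   .then((data) => console.log(data));
```

Providing callbackUrl switches the call to asynchronous execution, with the result delivered to that URL instead of the immediate response.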
Loads an app by ID. If the app is not found, it returns a 404 error.
This endpoint requires Bearer token authentication.
The response will be a JSON object containing the organization information.
The organization for the specified token was not found.
Note: Replace YOUR_ACCESS_TOKEN with your actual Bearer token.
The syntax for calling a launch variable differs from calling a variable at runtime. Rather than using {{VARIABLE_NAME}} in your prompt, launch variables are called using the {{$launchVariables->VARIABLE_NAME}} syntax.
This syntax must be used any time a launch variable is referenced within a workflow. In the following example, we will look at the body of an API request that passes a launch variable, topic, when running a MindStudio workflow.
Within the request body, the launch variable topic is assigned the value "Dogs".
Note: Replace YOUR_APP_ID with your actual app ID.

Example prompt

Within a prompt area inside of MindStudio, the topic launch variable is called using the launch variable syntax.
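To make the example concrete, here is a hedged sketch: it builds the request body that assigns the topic launch variable, then uses a simplified stand-in for the substitution that MindStudio actually performs when rendering the prompt.

```javascript
// Hedged sketch: request body assigning the launch variable `topic`.
const requestBody = {
  appId: "YOUR_APP_ID",
  variables: { topic: "Dogs" },
};

// Inside a MindStudio prompt, the launch variable is referenced like this:
const promptTemplate =
  "Write a short article about {{$launchVariables->topic}}.";

// Simplified stand-in for the substitution MindStudio performs at launch:
const rendered = promptTemplate.replace(
  "{{$launchVariables->topic}}",
  requestBody.variables.topic
);
console.log(rendered); // → "Write a short article about Dogs."
```

The key point is the syntax: {{$launchVariables->topic}} in the prompt, plain topic as the key in the request body's variables object.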
Test and compare AI model outputs side-by-side
The Profiler in MindStudio lets you test and compare AI model outputs side-by-side. By experimenting with different models and configurations, you can evaluate AI Models for criteria like cost, latency, context, and quality, to ensure that you choose the right model for the right task in each step of your workflow.
Navigate to the Profiler tab within the desired workflow.
You can open the Profiler from a Generate text block by clicking on the Open in Profiler button above the prompt configuration.
You can open the Profiler from the Prompt tab by clicking the Test in Profiler button located at the bottom right of the tab.
Add models to compare using the dropdown menu. Each model profile is displayed side-by-side for easier comparison.
You can add as many profiles as you’d like. New AI Model profiles appear to the right, and you’ll need to scroll horizontally to view more than four profiles.
Click the AI model name in each profile to configure parameters like Temperature and Max Response Size for each profile.
Save the adjustments by clicking Done.
Type a test prompt into the input box and send it. You can also toggle Send System Prompt to include your workflow’s system-level instructions in the evaluation.
Compare the quality of outputs generated by each model.
Review the token usage, latency, and cost metrics for each response.
Choosing the right AI model is critical to ensuring your workflow meets performance, cost, and quality requirements. MindStudio provides a variety of models with different capabilities, trade-offs, and configurations. When selecting a model, it's often necessary to balance these considerations to align with your workflow's goals and constraints.
AI models come with varying pricing structures based on usage, typically measured in tokens for prompt and response.
Use cost-effective models for high-volume, repetitive tasks (e.g., bulk summarization). Opt for premium models only when high output quality is critical.
Latency refers to the time the model takes to generate a response. Low-latency models are essential for real-time or interactive workflows.
Prioritize low-latency models for use cases like chatbots or live applications. For non-time-sensitive workflows (e.g., scheduled reports), higher latency models with better quality may be acceptable.
Different models vary in their ability to generate coherent, creative, or factual responses. Output quality depends on the model’s training and capabilities.
Choose advanced models for nuanced tasks like legal summaries or creative writing. Use simpler models for straightforward tasks like data extraction.
The context window determines the maximum amount of text the model can process at once. Larger context windows are essential for tasks involving lengthy inputs.
Use models with large context windows for summarizing lengthy documents or analyzing extensive datasets. For shorter inputs, a smaller context window may suffice and reduce costs.
Integrate MindStudio AI Agents with Make.com
This guide provides step-by-step instructions on integrating Make.com with MindStudio AI Agents. With this integration, you can trigger workflows directly from Make.com using your MindStudio app and easily pass variables to your flows.
Begin by adding a new app connection to the Make canvas. Search for MindStudio in the app directory.
Select the desired action. Commonly, you'll choose Run an App, but you can also list all available apps or start an HTTP call.
Click on Create a Connection.
Insert your workspace's API key. This API key connects Make to the MindStudio workspace where your app is hosted.
To retrieve the API key, click on the gear icon next to your workspace name in MindStudio.
In the sidebar menu, select API and navigate to API Keys.
Generate a new key by clicking Create Key, or copy an existing one.
Name the API key for internal use and click Create to finalize.
Copy the API key using the copy icon.
Save the connection.
Once connected, you can run any app in your workspace using its corresponding App ID.
To find an App ID, click on the app name in MindStudio (top left corner) to reveal and copy the ID.
Alternatively, navigate to the app list in MindStudio. Click the three dots next to the app name and select Copy App ID.
After entering the App ID, you can specify which workflow to run. If it's different from the default main workflow, enter the workflow name (excluding the .flow extension).
To pass variables to the workflow, click on Add Item under the Variables section in Make.
Enter a Variable Name and assign a corresponding Value. The value can come from the output of a previous module or be formatted with Make's special syntax and operators.
Click OK to complete the setup.
Test the module by right-clicking on it and selecting Run this module only.
Verify that the workflow executes successfully. Note: In this example, no variables were initialized, so ensure you pass variables to the flow if required for execution.
Integrate MindStudio AI Agents with Zapier
This guide walks you through the steps to integrate MindStudio AI Agents with Zapier, allowing you to trigger workflows and seamlessly connect them to other apps like Slack. By following this tutorial, you can create a Zap that triggers when a Slack channel receives a new message and responds with the output from your MindStudio workflow.
Trigger the Zap with your chosen app. For this guide, we'll trigger it when a new message is received in a Slack channel.
Select Slack as the app.
Choose a trigger event, such as New Channel Message.
Select the trigger event from the list.
Choose an existing Slack connection or set up a new one.
Select the connection and click Continue.
Choose the Slack channel you want to monitor. In this example, we'll use a channel named new-zapier-integration.
Click Continue.
Test the trigger by selecting a sample message. For this guide, we'll use a message that says "hello".
Click Continue with selected record.
Add a new step to the Zap and select MindStudio as the integration.
Click Choose an Event and select Run Workflow.
Connect your MindStudio workspace. You can use an existing connection or create a new one by generating an API key.
To connect a new account, click Connect a new account.
Retrieve your API key from MindStudio:
Click on your workspace name in the top-right corner.
Navigate to Settings > API Keys.
Copy an existing key or create a new one, name it for internal use, and click Create.
Paste the API key into the Zapier connection popup and click Yes, Continue to MindStudio.
Once connected, all your MindStudio apps will appear under the App ID dropdown. You can search by App ID or app name.
Select the appropriate app from the dropdown.
Pass variables to the workflow:
For our example, pass the Slack message content to the message variable in MindStudio.
Use the / symbol or the + icon in Zapier to select properties from the Slack trigger.
Choose the flow to run in MindStudio. Enter the workflow name without the .flow extension if it's different from the default.
Click Continue and test the step.
Add another step to send a response back to Slack.
Select Slack as the app and choose an action event.
Pick the same channel (new-zapier-integration) where the original message was received.
For the message text, select the result of the MindStudio workflow from the previous step.
If you want the response as a reply in the same thread, add the ts (timestamp) value from the trigger step:
Select Custom Value from the three-dot menu.
Choose Ts from the list.
Click Continue and skip the test. Note that MindStudio workflows won’t execute during Zapier's test phase due to limitations, but the results can still be used.
Publish the Zap.
Now, whenever you send a message in the selected Slack channel, the MindStudio workflow will process the message and respond instantly in the same channel or thread.
Definitions of MindStudio Terms
AI Agent: An AI-powered automation or workflow created within MindStudio. AI Agents are designed to perform specific tasks, ranging from data processing to content generation, and can be deployed in various applications.
Block: A modular unit within a workflow that performs a specific function (e.g., generating text, calling a function, querying data). Blocks are the building components of workflows on the canvas.
Canvas: The visual interface where workflows are designed and structured. The canvas allows users to connect automation blocks and configure their interactions to build AI Agents.
Data Source: An external or internal repository of information (e.g., databases, APIs) that workflows can query to retrieve data or send results for storage.
Draft: An unpublished version of a workflow or AI Agent within a workspace. Drafts allow users to iterate and test changes before making them live.
Function: A custom code block written in JavaScript or Python that adds specific functionality to a workflow. Functions enable developers to extend workflows with tailored logic and processing.
Markdown: A lightweight markup language used to format text. In MindStudio, Markdown is often used for prompts, documentation, or configuring UI elements within workflows.
Prompt: A set of instructions or input text sent to an underlying model to guide its behavior. Prompts are critical for shaping AI-generated outputs and are often dynamic, incorporating variables.
Model: The AI or machine learning model powering an AI Agent's functionality. Examples include GPT models for natural language processing or DALL-E for image generation.
Usage: A metric that tracks the consumption of resources (e.g., API calls, compute time) within a workspace. It provides insights into how AI Agents and workflows are utilized, helping monitor costs and efficiency.
User Inputs: Data or parameters provided by end-users or external systems that are used to guide the behavior of an AI Agent or workflow. User inputs can be configured in the workflow for flexibility.
Variable: A placeholder used within workflows to store and pass data between steps. Variables can represent inputs, intermediate values, or outputs, ensuring data flows smoothly through the workflow.
Workspace: A dedicated environment in MindStudio for organizing and managing AI Agents, workflows, team members, and billing settings. Workspaces allow teams to collaborate and operate independently within the platform.
Integrate MindStudio's AI Agents into your Node.js projects.
The MindStudio NPM Package is your toolkit for integrating AI-powered workflows seamlessly into any application. This client library offers type-safe interfaces to help you execute MindStudio AI Agents with ease and confidence.
Create a new API key
All workflow executions return a consistent response type.

The CLI provides the following commands:
- sync: Generate type definitions for type-safe usage
- test: Test a workflow from the command line
- list: List available Agents and workflows

You can add a script to package.json for automatic type generation.

MindStudio requires an API key for authentication. You can provide it in several ways:
- Never commit API keys to version control
- Add .env to your .gitignore
- Use environment variables in CI/CD environments
- Run npx mindstudio sync to generate type definitions
- Ensure MINDSTUDIO_KEY is set in your environment or passed to the constructor
- Run npx mindstudio sync to create initial configuration
- Store API keys in environment variables
- Use .env files only for local development
- Never commit API keys to version control
- Use secure environment variables in CI/CD
- Use the type-safe pattern when possible
- Commit .mindstudio.json to version control
- Run sync after pulling changes
- Always check success before using result
- Implement proper error handling
- Use TypeScript for better type safety
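The "always check success before using result" guidance can be sketched as a small helper. Note this is a hedged sketch: the { success, result, error } field names are inferred from that guidance, not taken from the package's documented response type.

```javascript
// Hedged sketch: guard a workflow execution response before using it.
// The { success, result, error } shape is an assumption for illustration.
function unwrapExecution(response) {
  if (!response.success) {
    // Surface a useful message instead of silently using a missing result.
    throw new Error(`Workflow failed: ${response.error ?? "unknown error"}`);
  }
  return response.result;
}

console.log(unwrapExecution({ success: true, result: "Generated text" }));
```

Centralizing the check in one helper keeps every call site from repeating the success test and ensures failures are raised as errors rather than propagating undefined results.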
License: MIT
Execute custom code in your AI workflow
Functions in MindStudio empower you to extend the capabilities of your workflows by running JavaScript or Python code directly within your automation. Functions are created within the Editor and are executed in a workflow via the Run Function Block.
When working with the Function Tab in MindStudio, the interface is designed for writing and testing code, supporting dynamic inputs, configurations, and debugging features to ensure smooth execution.
There are two primary ways to create a new function in MindStudio. Both approaches will create a new function in your Functions Folder, and open a blank function editor where you can specify the environment, write your code, configure inputs, and test the function.
Navigate to the Explorer panel on the left-hand side of the editor.
Hover over the Functions folder.
Click the + button or right-click and select New Function.
A new function tab will open where you can begin writing your custom code.
Add a Run Function Block to your workflow.
In the block configuration, click New... to create a new function.
A new function tab will open, allowing you to define and configure the function.
Functions in MindStudio can be written in either JavaScript or Python, giving you flexibility to choose the language that best suits your needs. The function editor provides a modern development environment with syntax highlighting, auto-completion, and real-time error checking.
These details are displayed in the Function Details panel on the right. Provide a Name and optional Description for your function. Then select the programming environment for your function: JavaScript (Node.js) or Python.
Note: For Python functions, you can import external libraries to extend functionality.
Use the Code Tab to write the logic for your function. Utilize the available methods (see reference table below) to integrate your function seamlessly with configurations.
Use the Configurations Tab to define JSON for customizable settings that users can modify when implementing your function in their workflows. These settings can include input fields, drop-downs, toggles, and other UI elements that make your function more flexible and user-friendly.
The Code Tab is the central workspace for writing the logic of your function. Here, you can write code in either JavaScript or Python, depending on the selected environment. The editor features syntax highlighting, making the code more readable and easier to debug.
- ai.config: Object containing configuration variables defined in MindStudio.
- ai.vars: Object containing runtime variables defined by other functions or blocks.
- ai.getConfig(variableName): Returns the value of a configuration variable. If the configuration variable resolves to a runtime variable, that value is resolved before returning.
- ai.log(value): Updates the progress text for the user. If your function takes a long time to run, this can be helpful in communicating what is happening to the user.
- ai.scrapeUrl(url): Scrapes the contents of a URL and returns an object containing the text extracted from the page, the raw HTML, and some structured metadata (page title, description, resolved URL, thumbnail image URL).
- ai.searchGoogle(query): Searches Google for a query and returns the first page of results. Returns an object containing all the results as a block of text, as well as individually as an array of objects containing the title, description, and URL for each result.
- ai.queryDataSource(dataSourceId, query, numResults): Performs a query against a data source defined in a project. Returns a string result. If numResults is not provided, only one chunk will be returned.
- ai.uploadFile(body): Uploads a file and returns a URL. The file must be a valid Base64 data URL.
- ai.crmLog(value): For apps with logging enabled, logs a value to the app's user logs.
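As a hedged sketch of how these methods fit together in a function body, the snippet below mocks the ai object that MindStudio injects at runtime. The mock and the greetingPrefix configuration key are illustrative; inside MindStudio you would use the global ai object directly.

```javascript
// Mock of the runtime-injected `ai` object, for illustration only.
// Inside a real MindStudio function, `ai` is provided by the platform.
const ai = {
  config: { greetingPrefix: "Hello" },  // illustrative configuration key
  vars: {},
  getConfig(name) { return this.config[name]; },
  log(message) { console.log(message); },
};

// A minimal function body: read a configuration value, report progress,
// and store the result in a runtime variable for later blocks to use.
function run() {
  const prefix = ai.getConfig("greetingPrefix");
  ai.log("Building greeting...");
  ai.vars.greeting = `${prefix}, world!`;
  return ai.vars.greeting;
}

console.log(run()); // → "Hello, world!"
```

Writing results to ai.vars is what makes them available as runtime variables to subsequent blocks in the workflow.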
The Configuration Tab enables developers to define a configuration JSON file for their function. This JSON allows you to set up customizable settings, such as text inputs, drop-downs, or other UI elements, that non-technical users can configure when they add the function to the Run Function Block in workflows.
The Test Data Tab is a dedicated space for verifying the behavior of your function with predefined inputs. You can simulate runtime variables such as ai.vars and ai.config by defining mock data to test different scenarios. This feature allows you to ensure that your function behaves as expected without needing to integrate it into a full workflow.
Displays key information about your function. Here, you can set the function’s name and description, which will be used to identify it in the Run Function Block. You can also select the environment—either JavaScript (Node.js) or Python—for your function.
Note: If you choose Python, you have the added flexibility of importing external libraries to extend functionality. The panel also displays the current configuration and runtime variables available for testing.
Live preview of the configuration interface, giving you immediate feedback on how the JSON structure set in the Configuration Tab will appear to users when they edit the block.
Built-in reference guide. Displays the guide relevant to what you are editing.