How to Choose the Right AI Model

This guide explains key evaluation criteria such as price, latency, quality, context window, and response size—along with how to use the Profiler tool to compare models side by side.

Choosing the right AI model in MindStudio is essential to balancing cost, performance, and quality. This guide walks through the key considerations and demonstrates how to use the Profiler tool to compare models directly.

Key Evaluation Criteria

When selecting an AI model, consider the following factors:

1. Price

  • Each model has a different cost per token for input (prompt) and output (response).

  • Token cost is measured per million tokens (MTOK).

  • Tokens roughly equate to words (1 token ≈ 0.75 words).

Cheaper models are suitable for automations and utility tasks. More expensive models often yield better reasoning and generation quality, making them ideal for final outputs.
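
The per-token pricing translates into per-run cost with simple arithmetic. The sketch below (the dollar figures are placeholders, not actual MindStudio or provider rates) estimates what a single call costs from word counts and per-MTOK prices:

```python
# Rough cost estimate for a single run. The prices are illustrative placeholders:
# check the model's actual pricing in MindStudio before relying on this.

def estimate_tokens(word_count: int) -> int:
    """Approximate token count using the 1 token ~= 0.75 words rule of thumb."""
    return round(word_count / 0.75)

def estimate_cost(prompt_words: int, response_words: int,
                  input_price_per_mtok: float, output_price_per_mtok: float) -> float:
    """Return the estimated dollar cost of one model call."""
    input_tokens = estimate_tokens(prompt_words)
    output_tokens = estimate_tokens(response_words)
    return (input_tokens * input_price_per_mtok +
            output_tokens * output_price_per_mtok) / 1_000_000

# Example: a 1,500-word prompt producing a 750-word response
print(f"${estimate_cost(1500, 750, 0.15, 0.60):.6f} per run")  # $0.000900 per run
```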

2. Latency

  • Latency refers to how long the model takes to generate a response.

  • Lower-latency models are preferable for interactive or real-time use cases.

3. Output Quality

  • Evaluate the coherence, tone, and style of responses.

  • Some models produce more creative outputs, while others are better for concise summaries or factual tasks.

  • Quality is best assessed by comparing outputs in the Profiler.

4. Context Window

  • Determines how much information the model can ingest at once.

  • Ranges from 4,000 tokens to over 1,000,000 tokens depending on the model.

  • Larger windows are useful for document summarization, legal analysis, or full-site scraping.

Examples:

  • GPT-4o Mini: 128K tokens

  • Claude 3.5 Haiku: 200K tokens

  • Gemini 2.0 Flash: 1M tokens
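
Before committing to a model, you can sanity-check whether your material fits its context window. The sketch below reuses the window sizes listed above; the headroom reserved for the prompt and response is an assumed default, and the word-to-token conversion is only approximate:

```python
# Check whether a document (plus prompt and expected response) fits a model's
# context window, using the ~0.75 words-per-token approximation.

CONTEXT_WINDOWS = {          # illustrative values quoted in this guide
    "gpt-4o-mini": 128_000,
    "claude-3.5-haiku": 200_000,
    "gemini-2.0-flash": 1_000_000,
}

def fits_context(document_words: int, model: str,
                 prompt_tokens: int = 1_000, response_tokens: int = 4_000) -> bool:
    document_tokens = round(document_words / 0.75)
    needed = document_tokens + prompt_tokens + response_tokens
    return needed <= CONTEXT_WINDOWS[model]

# A 120,000-word book (~160,000 tokens) overflows a 128K window
# but fits comfortably in a 1M window.
print(fits_context(120_000, "gpt-4o-mini"))       # False
print(fits_context(120_000, "gemini-2.0-flash"))  # True
```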

5. Max Response Size

  • Controls how long the model’s output can be.

  • Some models are capped at 4,000 tokens while others can produce 8,000–16,000 tokens or more.

  • Useful when generating long-form articles, reports, or stories.
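
For long-form generation, a quick check of the target length against the output cap tells you whether one call is enough or the work needs to be split across multiple generations. The cap values below are examples, not authoritative limits for any specific model:

```python
# Decide how many generation passes a long-form piece would need,
# given a model's maximum response size (example cap of 8,000 tokens).
import math

def generation_passes(target_words: int, max_response_tokens: int = 8_000) -> int:
    """How many calls are needed if each call can emit at most max_response_tokens."""
    target_tokens = round(target_words / 0.75)
    return math.ceil(target_tokens / max_response_tokens)

# A 12,000-word report (~16,000 tokens) needs two passes on an 8K-capped model,
# but a single pass on a model that allows 16K or more.
print(generation_passes(12_000, 8_000))   # 2
print(generation_passes(12_000, 16_000))  # 1
```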

Using the Profiler to Compare Models

MindStudio’s Profiler tool lets you test models side by side:

  1. Open the Model Settings tab.

  2. Click the Profiler button in the top-right corner.

  3. Select two or more models for comparison.

  4. Standardize settings like temperature and max tokens.

  5. Input your prompt (e.g., “Write a long-form blog post about space”).

  6. Observe:

    • Start and finish times

    • Output length and style

    • Token usage and cost

Example Comparison:

  • Claude 3.5 Haiku: More expensive, shorter output, faster start.

  • GPT-4o Mini: Slightly cheaper, longer and more detailed output.

  • Gemini 2.0 Flash: Fastest response, low cost, huge context window.
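
Outside the Profiler UI, the comparison reduces to a handful of numbers per model: start and finish times, output length, and cost. The sketch below tabulates such metrics; the figures are invented placeholders standing in for what you would record from the Profiler, not real measurements:

```python
# Tabulate side-by-side comparison metrics for several runs. The figures below
# are invented placeholders; in practice, record them from the Profiler.
from dataclasses import dataclass

@dataclass
class Run:
    model: str
    start_s: float        # seconds until the first token appeared
    finish_s: float       # seconds until the response completed
    output_tokens: int
    cost_usd: float

runs = [
    Run("Claude 3.5 Haiku", 0.8, 6.2, 900,  0.0011),
    Run("GPT-4o Mini",      1.1, 9.4, 1400, 0.0009),
    Run("Gemini 2.0 Flash", 0.5, 4.8, 1200, 0.0004),
]

for r in sorted(runs, key=lambda r: r.cost_usd):
    print(f"{r.model:18} start={r.start_s:.1f}s finish={r.finish_s:.1f}s "
          f"tokens={r.output_tokens} cost=${r.cost_usd:.4f}")
```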

Workflow Integration

You can open any Generate Text block inside your AI agent and run its prompt through the Profiler to preview output differences across models without altering your workflow.

Summary

To select the best model:

  • Use cheaper models for fast, repetitive tasks.

  • Choose more capable models for final outputs, reasoning-heavy work, or creative tasks.

  • Evaluate models across:

    • Cost per token

    • Latency

    • Quality of response

    • Context capacity

    • Output size

  • Use the Profiler tool to directly test and compare models in real time.

Choosing the right model ensures your AI agents are both effective and efficient—tailored precisely to your needs.