
Pick the right mode. Pick the right product.

A 30-minute framework for deciding when to use a single prompt, a workflow, an agent, or a multi-agent system, and which ALCUB3 product matches each one.

Free · Audience: Operators, founders, builders · Prereq: AI 101 recommended · Time: 30–40 min · Units: 5
By the end of this path

You will have:

  • A clear mental model for the four modes: chat, workflow, agent, and multi-agent
  • Three tasks from your own work classified against the framework
  • A concrete product selection for at least one task
  • An understanding of the governance cost that comes with each mode

This is the path that gets you out of the “which AI tool should I use?” trap.

Unit 01 · Framing

A single prompt is not a system

Time: 5 min · Objective: Explain where single-prompt thinking breaks down

The first thing most people try with any AI product is a single prompt. Ask for a draft. Ask for a summary. Ask for a plan. For some tasks, that is genuinely enough. The mistake is assuming that if a model can answer one good prompt, the prompt is the system.

Single-prompt thinking breaks in three predictable situations.

First, the task has multiple dependent steps. “Summarize this document” can be a prompt. “Summarize this document, then draft three follow-up emails, then check them against our style guide, then stage them in drafts” is not a prompt anymore. It is a process. Once steps depend on one another, you are in workflow territory whether you admit it or not.

Second, the task recurs with variation. Weekly reporting, inbox triage, vendor comparisons, customer research. These are not one-off asks. They are repeated patterns. If you rewrite the same setup every time, output quality drifts because your setup drifts. That is a sign you need a reusable structure, not a better one-liner.

Third, the task needs to act in the world. Drafting text is one thing. Reading files, searching, checking a system, routing work, staging an email, or posting an update are different capabilities. A prompt can describe those actions. A prompt by itself cannot reliably perform them.

If you have ever written a three-paragraph prompt because you were trying to cram a whole operating procedure into one message, you have already discovered the boundary. The problem is not your prompting skill. The problem is mode selection.

Action · Unit 1

Identify why each task fails as a single prompt.

For each task below, decide which failure mode makes it unsuitable for one-shot prompting: multi-step dependency, recurring pattern, or needs action.

  1. “Every Monday, review last week’s customer tickets, categorize them by severity, and surface anything that references a new bug.”
  2. “Draft a response to this email, then send it if the recipient responds affirmatively, otherwise flag it for my review.”
  3. “Take these three resumes, compare them against the job description, and produce a ranked shortlist using my hiring criteria.”
Common wrong turns
  • Thinking the answer is always “write a better prompt.” Better prompts help, but they do not make a one-shot interaction capable of multi-step work.
  • Assuming workflows are only for engineers. Most modern workflow systems are operational tools, not engineering projects.
  • Missing the action layer entirely because text generation feels like the whole job. It rarely is.
Unit 02 · Mental model

The four modes

Time: 6 min · Objective: Describe chat, workflow, agent, and multi-agent clearly

There are four modes that matter here, and most confusion comes from people using the words interchangeably.

Chat is the simplest mode: one ask, one response, no real persistence, no planned execution. It is right for quick questions, one-off drafts, fast exploration, and low-friction back-and-forth.

Workflow is a defined sequence of steps. You know the steps in advance. The system runs them in order. Workflows are predictable, auditable, and relatively easy to debug because the structure belongs to you. They are also rigid. If the situation changes mid-run, the workflow does not improvise unless you redesign it.

Agent is a system that figures out the next step itself. You provide the goal, the boundaries, and the tools. The system decides what to do first, checks results, then decides what to do next. That flexibility is why agents are powerful and why governance matters more once you start using them.

Multi-agent is two or more agents working on parts of a larger problem. This becomes useful when one agent does not have enough context or when the work naturally breaks into specialized roles. It also adds handoff failures, coordination overhead, and much more debugging surface.

The useful rule is simple: if you can write a deterministic function, write the function. If you know the steps, use a workflow. If the steps are not knowable in advance, use an agent. If one agent is not enough, then and only then consider multi-agent.

Task | Chat | Workflow | Agent | Multi-agent
Translate this paragraph to Spanish | Best fit | Maybe | No | No
Every Monday, pull sales data, generate a summary, send it to my team | No | Best fit | Maybe | No
Plan a research trip across flights, hotels, restaurants, and calendar constraints | No | Maybe | Best fit | Maybe
Monitor my inbox, draft replies I can review, flag anything urgent | No | Maybe | Best fit | Maybe

You do not need to memorize the table. You do need to get comfortable asking one question: do I know the steps in advance?

Action · Unit 2

Complete the decision matrix.

For each example task, choose the best-fit mode and write a one-sentence rationale.

  1. Translate this paragraph to Spanish.
  2. Run the weekly sales summary and send it to my team.
  3. Plan a multi-stop research trip with constraints.
  4. Monitor my inbox and draft replies for review.
Common wrong turns
  • Choosing the most sophisticated mode by default. Fit matters more than sophistication.
  • Confusing workflow with agent. If you know the steps, it is probably a workflow.
  • Assuming multi-agent is always the answer for complex work. It is only the answer when one agent is genuinely not enough.
Unit 03 · Hands-on

Classifying your own work

Time: 7 min · Objective: Apply the framework to three real tasks from your work

The framework only becomes useful when you run it against your actual work. Not a hypothetical future workflow. Not a toy example. The real tasks that took your time in the last month.

Pick three tasks. Make them recent. Make them real. Make them varied. If you pick three versions of the same task, you will learn less than you think.

Now classify each one using four questions in order.

Could this be handled as chat? If a single interaction would genuinely do the job, it is chat. People often over-classify because the task felt annoying to do by hand. Annoying does not automatically mean agentic.

If not chat, are the steps knowable in advance? If you could draw the sequence on paper right now, that is workflow territory. If the sequence depends on what the system discovers mid-run, it is not a workflow anymore.

If the steps are not knowable, can one agent handle the whole thing? If one role with the right tools could carry the task end to end, a single agent is probably enough. If the task requires distinct specialized handoffs, then multi-agent becomes reasonable.

Could this be a function instead? This is the most important honesty check in the whole path. Sometimes the right answer is “do not use AI for this.” That is not a failure. It is good mode discipline.
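The four questions above can be sketched as a small decision helper. This is an illustrative sketch only; the function name and flags are ours, not part of any ALCUB3 product. The one reordering: the "could this be a function" honesty check runs first in code, because when it applies it overrides every other answer.

```python
def classify(one_interaction: bool, steps_knowable: bool,
             one_agent_enough: bool, could_be_function: bool) -> str:
    """Run the Unit 3 questions in order and return a mode."""
    # Honesty check first: a deterministic function beats any AI mode.
    if could_be_function:
        return "function"
    # One ask, one response, no persistence or actions needed.
    if one_interaction:
        return "chat"
    # You could draw the sequence on paper right now.
    if steps_knowable:
        return "workflow"
    # One role with the right tools can carry the task end to end.
    if one_agent_enough:
        return "agent"
    # Distinct specialized handoffs are genuinely required.
    return "multi-agent"
```

Running your three tasks through a helper like this makes the rationale explicit: each branch you fall through is a one-sentence justification you can write down.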

If all three of your tasks end up classified as agents, pause and re-check. That is the most common overreach. If all three end up as chat, you are probably missing the repeated or multi-step structure in the work.

Action · Unit 3

Classify three tasks from your own work.

  1. Pick three real tasks you handled in the last 30 days.
  2. Classify each one as chat, workflow, single agent, multi-agent, or “should be a function instead.”
  3. Add a one-sentence rationale for each classification.
Common wrong turns
  • Picking hypothetical tasks instead of recent work. The exercise only works on real constraints.
  • Over-classifying toward agent. This usually means you were not rigorous about whether the steps are actually knowable.
  • Under-classifying toward chat. This usually means you are ignoring repetition or action-taking.
Unit 04 · Tradeoffs

The governance cost of each mode

Time: 6 min · Objective: Identify the controls each mode actually needs

Every mode has a governance cost, and the cost scales with capability. This is the part most people skip until something breaks.

Chat has the lowest cost because the scope is small: one question, one answer, usually no persistence and no actions. Governance here is mostly about what is allowed in the prompt and what is allowed in the output.

Workflow adds step state, tool use, and real-world actions. That means you need an audit trail for each step, approval gates for consequential actions, and a way to stop the process when something goes sideways.

Agent adds unpredictability. The system decides what to do next, so you cannot pre-authorize every step. That forces you to narrow permissions, introduce approval on higher-risk actions, and think much harder about memory boundaries because the agent carries state from one run into the next.

Multi-agent adds coordination overhead on top of everything else. Handoffs between agents become failure points. Audit surfaces multiply. Debugging gets harder because you are no longer tracing one decision stream; you are tracing interactions between streams.

The useful operating principle is brutally simple: choose the lowest-capability mode that solves the problem. It is cheaper to govern, easier to debug, and harder to misuse. The boring answer is often the correct one.
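The way controls accumulate with capability can be written down as a lookup. The mapping below is an illustrative reading of this unit, not a product specification; where a given control becomes mandatory is a judgment call for your own risk tolerance.

```python
# Minimum control surface per mode. Controls accumulate as capability grows:
# each rung inherits the needs of the rung below and adds its own.
GOVERNANCE = {
    "chat":     {"prompt policy", "output policy"},
    "workflow": {"audit trail", "approval gate", "kill switch"},
    "agent":    {"audit trail", "approval gate", "kill switch",
                 "permission scope", "memory boundary"},
}
# Multi-agent needs everything an agent needs, plus handoff auditing.
GOVERNANCE["multi-agent"] = GOVERNANCE["agent"] | {"handoff audit"}
```

A table like this is also a cheap review artifact: for each task you classified in Unit 3, check its mode's row and confirm each control actually exists before the system runs.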

Action · Unit 4

Mark the minimum governance controls for each task.

Take the three tasks you classified in Unit 3. For each one, mark which controls are required:

  1. Audit trail
  2. Approval gate
  3. Kill switch
  4. Memory boundary
  5. Permission scope
Common wrong turns
  • Assuming governance is only for enterprise. Even personal workflows benefit from explicit controls.
  • Treating governance as a block to shipping. It is the thing that makes shipping safe.
  • Over-governing low-capability modes. Chat does not need the same control surface as multi-agent orchestration.
Unit 05 · Decision

Pick your mode, pick your product

Time: 10 min · Objective: Turn one classification into a real product decision

This is where the framework becomes a product decision. Go back to the three tasks you classified in Unit 3. Pick one. Ideally it is the one you are most ready to act on this week.

If you picked chat, the landing spot is usually AI Agent. It is the right surface for quick drafts, one-off summaries, personal research, and low-friction daily work with memory and tools.

If you picked workflow, the answer is often AI Workers, especially when the work is recurring, team-shaped, or approval-sensitive. If it is a tiny solo workflow, AI Agent may still be enough, but most operational workflows want the structure Workers provides.

If you picked a single agent, the answer is usually AI Workers in team or business contexts, because the role-shaped execution, approvals, and auditability matter more as stakes rise.

If you picked multi-agent, the answer is either AI Workers for well-defined team systems or Secure AI when the work touches regulated data, enterprise-grade controls, or organization-wide coordination.

If you picked “should be a function instead,” honor the answer. Do not use AI just because you can. Mode discipline includes choosing not to use AI when a deterministic tool is the better fit.

Now add the governance layer from Unit 4. Heavy audit requirements, strict approval gates, or sensitive data will push the answer up the ladder from Agent to Workers to Secure AI. Product choice is not separate from governance. Governance narrows the choice.
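The ladder in this unit can be sketched as one small function. The thresholds here are illustrative, not official product criteria; in particular, the two boolean flags compress "heavy audit requirements, strict approval gates" and "sensitive data" into simple switches.

```python
LADDER = ["AI Agent", "AI Workers", "Secure AI"]

def pick_product(mode: str, sensitive_data: bool = False,
                 heavy_governance: bool = False) -> str:
    """Map a mode to a product, then let governance push it up the ladder."""
    if mode == "function":
        return "no AI"  # mode discipline: a deterministic tool wins
    # Base rung: chat lands on AI Agent; everything heavier starts at Workers.
    rung = {"chat": 0, "workflow": 1, "agent": 1, "multi-agent": 1}[mode]
    if heavy_governance:
        rung = min(rung + 1, 2)  # strict audit/approval pushes up one rung
    if sensitive_data:
        rung = 2                 # regulated data lands on Secure AI
    return LADDER[rung]
```

For example, a solo chat task stays on AI Agent, but the same task with strict approval requirements moves to AI Workers, and any task touching regulated data resolves to Secure AI regardless of mode.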

Action · Unit 5 · Decision

Pick the product and open it.

  1. Pick one of your classified tasks from Unit 3.
  2. Review its governance requirements from Unit 4.
  3. Select the right product: AI Agent, AI Workers, Secure AI, or no AI.
  4. Open the product page that matches the decision.
Common wrong turns
  • Picking the product first and working backward to justify it. Work forward from the task.
  • Over-weighting Secure AI because it sounds more advanced. If you do not need that control surface, do not pretend you do.
  • Choosing Workers for a task AI Agent would handle cleanly. The ladder exists because the tasks are different.

You picked a mode. Now use the product.

If your answer was AI Agent, stay close to the product and start using it. If your answer was Workers or Secure AI, jump straight to the live product surface. The dedicated follow-on path is coming next.
