AI workflow patterns are repeatable arrangements of decisions, actions, and human checkpoints that turn an input into an output. In organizations, the workflow is the unit you can actually inspect, automate, and improve: not the model, not the agent brand, not a vague use case, but the path work takes from trigger to result.
This article is part of our The Org Age of AI series. It is co-written by Will Schenk (TheFocus.AI) and Ksenia Se. Previous episodes: #1: AI Feels Powerful. So Why Is the ROI Still Missing?, #2: The Unsexy Truth of AI Adoption, #3: How to Build an AI-Native Startup from Day One, #4: There Are No AI-Native Enterprises Yet.
If you need an unbiased view on your transition to becoming AI-native, you can schedule a 1-on-1 consultation with Will here. Will Schenk is a co-founder of TheFocus.AI, where he works directly with companies navigating these transitions.
What's in today's episode:
What an AI workflow actually is
The seven primitives of AI workflows
Eight AI workflow patterns that recur in production
Which AI workflows should you automate first?
How AI workflow patterns chain into pipelines
What AI workflows mean for AI adoption
What an AI workflow actually is
This is the fifth article in the series, and we've used the word "workflow" in every one of them without saying what we mean by it. Time to fix that.
The thesis underneath this whole series is that AI adoption conversations keep happening at the wrong unit. People debate models, agents, frameworks, use cases. The thing you can actually point to and change is smaller. It's the workflow.
A workflow is a repeating sequence of decisions and actions that turns an input into an output, with points along the way where a human exercises judgment.
Judgment points! Strip those out and you have a pipeline – a cron job, a script, plumbing. Keep them in and you have a workflow.
When we think about a workflow, it's something you have to tease apart through conversations about how information and processes move through the organization. So many of them are informal and under-specified, and in some ways happen in spite of the company. There are mid-level heroes every day finding ways to make something work.
These informal, under-specified parts can now be tackled with LLMs of varying sophistication and prompt specificity. A lot of the gray area can now be addressed. We have intelligence on tap. Things that once had to be extremely tightly fitted can now be loosely coupled.
The interesting question is always: which judgment points can an agent handle, which ones still need a human, and how does the human know when to step in?
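One way to make that question concrete is to attach a confidence score to each judgment point and escalate low-confidence cases to a human. The sketch below is illustrative only – the function name, threshold, and examples are our own assumptions, not a prescription from any production system:

```python
# Hypothetical sketch: route each judgment point by agent confidence.
# The 0.8 threshold is an arbitrary illustrative choice.

def route(decision: str, confidence: float, threshold: float = 0.8) -> str:
    """Let the agent act when it is confident enough; escalate otherwise."""
    if confidence >= threshold:
        return f"agent handles: {decision}"
    return f"escalate to human: {decision}"

print(route("refund under $50", 0.95))    # agent handles: refund under $50
print(route("refund over $5,000", 0.40))  # escalate to human: refund over $5,000
```

In practice the confidence signal might come from the model itself, from a validator, or from historical error rates – the point is that "how does the human know when to step in?" becomes an explicit, inspectable rule rather than tribal knowledge.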
Most organizations have dozens of workflows running in every department – support, finance, engineering, sales, etc. Most have never been written down, because the humans running them absorbed the complexity years ago. The first step is discovering them: not the automated pipelines, but the decision processes made of people working around flawed interactions.
That is the L0-to-L2 problem from Article #2: making the organization legible to itself. And once you can see the workflows, the next question is which ones to pick.
Across roughly thirty production systems we operate at TheFocus.AI – content pipelines, financial reconciliation tools, engineering automation, deal-scoring platforms, API monitors, newsletter delivery – the same eight compositions keep appearing. Each one is a specific arrangement of primitives with a specific shape of human involvement.
But to understand them, we first need to learn the vocabulary →
The seven primitives: what an agent actually does in a single step
When you strip away the domain language – the invoices, the tickets, the pull requests, the deals – what an agent actually does in any single step reduces to seven actions:
| Primitive | What it does | Simple test | Example |
|---|---|---|---|
| Watch | Waits for a trigger or condition. | Has something happened yet? | A file appears, a threshold is crossed, a schedule fires. |
| Validate | Checks against known criteria. | Is this correct? | File headers match, invoice matches the purchase order, tests pass. |
| Classify | Assigns a category or route. | What kind of thing is this? | Billing vs. technical issue, data problem vs. code problem. |
| Enrich | Adds useful information to existing data. | What can we add to make this more useful? | Tag a transcript, score a deal, calculate spending from invoices. |
| Generate | Produces a new artifact. | What should be created? | Draft an email, write a report, build a slide deck. |
| Execute | Takes an action with consequences. | Should this action happen now? | Send the email, post the tweet, load data, deploy code. |
| Elicit | Asks a human to reduce ambiguity. | What do we still need to know? | Confirm scope, choose an approach, decide whether to include historical data. |
These seven show up in every workflow we have built or seen built. None of them are sufficient on their own. A single "validate" call is not a workflow. But chain a few together with branching logic and a human checkpoint or two, and you have one.
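To make the chaining concrete, here is a minimal sketch of a few primitives composed into a tiny support-ticket workflow. All function names, rules, and sample data are hypothetical – a sketch of the composition idea, not an implementation from our systems:

```python
# Hypothetical sketch: validate -> classify -> generate, with one human
# checkpoint (elicit). The classification rule is deliberately naive.

def validate(ticket: dict) -> bool:
    """Check against known criteria: the ticket has a subject and a body."""
    return bool(ticket.get("subject") and ticket.get("body"))

def classify(ticket: dict) -> str:
    """Assign a category: billing vs. technical."""
    return "billing" if "invoice" in ticket["body"].lower() else "technical"

def generate(ticket: dict, category: str) -> str:
    """Produce a new artifact: a draft reply."""
    return f"Draft reply for {category} issue: {ticket['subject']}"

def elicit(draft: str) -> str:
    """Human checkpoint. In production this would pause for review;
    here we auto-approve to keep the sketch runnable."""
    return draft

def workflow(ticket: dict) -> str:
    if not validate(ticket):
        return "rejected: incomplete ticket"
    category = classify(ticket)
    return elicit(generate(ticket, category))

print(workflow({"subject": "Double charge", "body": "My invoice is wrong"}))
```

Each function is one primitive; the `workflow` function is where the branching logic and the human checkpoint turn a pile of single steps into an inspectable whole.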
Eight recurring patterns: how primitives compose into real work
These eight patterns recur across every production system we run. We originally thought there were five – but a deeper audit of our codebase, plus operational patterns from Bloomberg, Zapier, Cursor, and OpenRouter, surfaced three more.
| Pattern | Shape | Human |
|---|---|---|
| Triage | Classify → route | Usually none |
| Investigation | Validate + enrich → recommend | Decides |
| Draft & review | Generate → review | Edits/approves |
| Approval | Propose → execute | Gates |
| Monitoring | Watch → escalate | Handles exceptions |
| Elicitation | Ask → refine | Supplies context |
| Sync | Transform → load | Usually none |
| Curation | Collect → synthesize → deliver | Receives |
It all comes from real use cases. And it’s gold. Let’s discuss each in detail →
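As a preview, the first pattern – triage (classify → route, usually no human) – can be sketched in a few lines. The categories, keywords, and queue names below are illustrative assumptions, not taken from any real system:

```python
# Hypothetical triage sketch: classify -> route, usually no human in the loop.
# In production the classifier would be an LLM call; here it is a keyword rule.

QUEUES = {"billing": "finance-team", "technical": "support-eng", "other": "general"}

def classify(message: str) -> str:
    """Assign a category based on the message content."""
    text = message.lower()
    if "refund" in text or "invoice" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "technical"
    return "other"

def triage(message: str) -> str:
    """Route the message to the queue for its category."""
    return QUEUES[classify(message)]

print(triage("App crash on login"))       # -> support-eng
print(triage("Refund for my invoice"))    # -> finance-team
```

The shape is the point: one classify step, one route step, and an "other" bucket so nothing falls through – which is also where exceptions surface when a human does need to look.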
← Previous: There Are No AI-Native Enterprises Yet | Next → When the Loop Closes: What Happens When Workflows Run Themselves
FAQ
What is an AI workflow?
An AI workflow is a repeatable sequence of decisions, actions, and human checkpoints that turns an input into an output.
AI workflow vs pipeline: what is the difference?
A pipeline runs predefined steps with little judgment. An AI workflow includes judgment points, ambiguity, and explicit decisions about when humans or agents should intervene.
What are the main AI workflow patterns?
The article identifies eight recurring patterns: triage, investigation and recommendation, draft and review, approval, monitoring, elicitation, sync and transform, and curation and scheduled delivery.
When should companies automate AI workflows?
Companies should start with workflows that are frequent, reversible, verifiable, and have a manageable exception rate.
Where should humans stay in the loop?
Humans should stay at high-consequence approval gates, ambiguous specification points, and exception-handling moments where judgment matters more than throughput.