AI Agents vs RPA: A Simple Guide to Choosing the Right Automation (2025)

Bartosz Mróz
June 15, 2025
17 minutes

In boardrooms across the world, a familiar scene unfolds: executives excitedly discussing AI agents while implementation teams exchange knowing glances. It's the gold rush of 2025, and everyone's panning for AI gold—but not all that glitters is actually valuable for your business.

The pressure to "go agentic" has become so intense that I've occasionally nodded along in meetings and said: "Yes, we're using an AI Agent for that"—when in reality, good old-fashioned RPA was quietly doing the job behind the scenes.

In fact, I believe AI Agents are the best fit for only 10-15% of business processes. The truth is, most processes are better served by less headline-worthy but infinitely more reliable approaches. These won't make your security team develop eye twitches or drain your budget on unnecessary complexity—they'll just work.

This article is the guidance I wish I'd had three years ago. I'll demystify the four types of AI-driven workflows I've developed through years of separating AI hype from business reality.

You'll learn when to use:

  • Rule-based workflows (yes, boring old rules still work great)
  • Embedded AI (for when you need just a sprinkle of AI magic)
  • Knowledge Assistants (your digital research buddy)
  • …and when you might actually need those much-hyped Autonomous Agents

Think of it as "AI right-sizing"—because sometimes the most innovative choice is deliberately choosing the simplest path.

Codeshift AI Workflow Classification Framework™

This classification framework represents my own methodology developed through years of implementing various AI solutions. While these aren’t scientific terms, they provide a practical mental model that has proven invaluable in my work.

The categories exist on a spectrum from fully deterministic (rule-based) to highly autonomous (agents), with two valuable middle grounds that often provide the best balance of capability and control.

The Critical Divide: Scripted vs. Agentic Approaches

The most important concept to grasp before we explore the specific categories is what I call the “scripted vs. agentic divide”. This distinction has saved my clients countless hours and millions in avoided costs from overengineered solutions.

Think of it this way: imagine you’re planning a road trip. With a scripted approach, you’d map out your entire route beforehand — every turn, every stop, every highway exit. Your GPS might help you navigate tricky intersections, but the journey itself is predetermined. With an agentic approach, you’d simply tell your self-driving car “Get me to San Francisco”, and it would figure out the route, adjust for traffic and make decisions about when to stop for gas.

Scripted approaches

Rule-Based Workflows and Embedded AI workflows follow a predefined path set by you, the designer. The sequence of steps is known in advance. While Embedded AI workflows incorporate AI for specific tasks, the AI doesn’t get to choose which step comes next or decide what tools to use — it simply handles its assigned part within the larger predetermined flow.

Agentic approaches

Knowledge Assistants and Autonomous Agents, by contrast, grant the AI system the ability to influence the order and nature of actions. The AI decides what needs to happen when, what information to retrieve, or what tools to employ based on its understanding of the goal and context. This autonomy is powerful but only appropriate in certain conditions where flexibility outweighs predictability.

Why the approach matters

I’ve seen companies spend months building complex Autonomous Agents when a simple rule-based workflow would have solved their problem in less than a week. I’ve also seen teams struggle with rigid rule-based approaches that constantly break when faced with real-world variability, when an agentic approach would have gracefully handled the complexity.

Understanding this fundamental divide will help you avoid the common and costly mistake of implementing an overly complex agentic solution when a more reliable scripted approach would suffice — or vice versa.

Now, let’s explore each of the four categories in detail, starting with the simplest form.

Rule-Based Workflows

A rule-based workflow is the classic automation — the friendly robot that never gets creative. Think of it as an “if-this-then-that” sequence of steps. The workflow runs on predefined rules and logic set by humans, without any learning or flexibility. It’s like an assembly line machine that does the same steps every time as long as the inputs meet the expected format.

Rule-based workflows have been around for a long time (like traditional RPA — Robotic Process Automation). They excel at handling structured, repetitive tasks where the procedure never changes.

When to use

Use a rule-based workflow when your task is well-defined, repetitive and predictable. If you know the exact steps and decision points (e.g. “Whenever a customer fills out a form, send a welcome email and add their info to the database”), a rule-based approach is ideal.

These workflows are reliable and fast for routine processes because they don’t involve any guesswork — they literally follow the rules you set, step by step.

A particularly important use case for rule-based workflows is handling sensitive operations where AI hallucination would create unacceptable risk. For example, in payment processing, customer data handling, or privacy-sensitive workflows, the predictability and reliability of rule-based systems provide essential guarantees that LLM-powered approaches simply cannot match.

Input types

Rule-based systems work best with structured input data. This could be form fields, spreadsheet rows, sensor readings, or any data in a consistent format. They struggle with unstructured inputs like long free-text or images.

In fact, traditional automations require clearly defined inputs — for example, an RPA script can copy data from one field to another, but if faced with a paragraph of text it doesn’t understand, it will fail. It’s like trying to feed a vending machine something other than coins — it just won’t know what to do with it.

Use of LLMs

None. By definition, a purely rule-based workflow does not use Large Language Models or any AI that “thinks”. It’s all deterministic logic (the moment you introduce an AI model into the sequence, it becomes an embedded AI workflow — which we’ll cover next).

Rule-based workflows might use simple condition checks (e.g. “IF amount ≥ 1000, THEN route to Manager”), but they won’t do things like interpret natural language or generate content.
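
To make this concrete, here’s a minimal sketch of that kind of deterministic check in Python. The field names and routing labels are hypothetical, purely for illustration:

```python
def route_invoice(invoice: dict) -> str:
    """Deterministic routing: the same input always produces the same decision."""
    # Mirrors the rule "IF amount >= 1000, THEN route to Manager"
    if invoice["amount"] >= 1000:
        return "manager_review"
    # Everything else follows the default path
    return "auto_approve"

print(route_invoice({"id": "INV-042", "amount": 1250}))  # -> manager_review
```

No model, no prompt, no surprises: exactly what you want for the processes where predictability matters most.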

Autonomy level

Very low. Rule-based workflows have no autonomy in the AI sense. They only do exactly what they’re programmed to do — no more, no less. If something unexpected happens (like a missing data field or a slightly different input format), they won’t “figure it out” — they’ll either follow the default rule (which might be wrong) or stop with an error.

There is no reasoning or learning involved. You can think of it as a train on fixed tracks: it can’t deviate from the laid-down route.

Real-world example

Imagine a customer onboarding process at a tech company’s sales department. A rule-based workflow could automatically take a new client’s submitted information, create an account in the CRM, send out a welcome email template, and notify a salesperson. Each step is straightforward and based on set rules (e.g. “WHEN new client signs up, THEN create account and send email”).

Another example: an e-commerce order processing workflow might automatically flag orders over a certain amount for review or route them to a priority queue. These actions don’t require any creative thinking — just cut-and-dried rules.

Analogy

A rule-based workflow is like a vending machine — you press the buttons (inputs/triggers) and it delivers a specific snack every time (outputs/actions) following a fixed internal mechanism. If you press the wrong combination or use the wrong coin, it just won’t work. It’s reliable for the exact scenarios it’s designed for, but it’s not going to improvise or handle surprises.

Embedded AI

An embedded AI workflow is like giving your rule-based workflow a brain upgrade in specific spots. It’s a hybrid approach that mixes fixed process steps with AI-driven components. In these workflows, AI is “embedded” as one or more steps within a larger, structured sequence.

You still have a predefined pipeline or playbook of what the workflow should do, but at certain points you rely on an AI (like an LLM or another model) to handle tasks that are too complex for rigid rules. In other words, the overall flow is orchestrated by set logic, but within that, the AI provides flexibility or intelligence.

It’s like an assembly line where most stations are machines doing fixed actions, but you’ve placed a smart robot or a human specialist at one station to handle something nuanced (say, quality checking or creative work) that the regular machines can’t do.

When to use

Use embedded AI workflows when you have a mostly structured process but need some AI “brainpower” for specific parts.

Common scenarios include:

  • Understanding or generating text within a flow: e.g. automatically drafting a reply to a customer using an LLM, then proceeding with sending it.
  • Classifying or extracting info from unstructured data: e.g. an incoming support email is routed based on issue type by using an AI to read the email content.

In these cases, the surrounding structure (triggers, flow and actions) is predetermined by you, the designer. The AI is invoked to handle complexity or variability at a step, but it doesn’t drive the process entirely on its own.

Input types

Embedded AI workflows can handle both structured and unstructured inputs, depending on where the AI is used. The trigger or overall input might be structured (e.g. an event like “new email received” or a record in a database), but when it comes to the AI component, that step shines at dealing with unstructured data like free-form text, images or audio.

For example, the workflow might feed the free-form text of a document into an LLM for summarisation, then take the structured summary output and route it via a rule to the next step. The AI acts as a translator between unstructured info and structured actions.

Use of LLMs

Yes. This category by definition involves AI models, often Large Language Models or other ML models embedded in the flow. You might use a model like GPT within a step to understand language or generate text, which in practice amounts to a simple chat completion call.

Think of it as calling an expert within your workflow whenever needed: the LLM can interpret a sentence, compose a message, classify an image and then the workflow continues.

Importantly, the LLM is guided by the context you give and the place in the sequence — it’s not deciding which step to take next, only doing its part within a step.

For instance, an insurance claims process might automatically approve straightforward claims via rules, but if a claim description is long-winded, an LLM could be used to parse the description and extract key details as structured JSON that can be programmatically mapped to existing systems. This transforms unstructured language into structured data while keeping the AI’s autonomy low — it interprets content but doesn’t decide what happens next in the process.
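
As a rough sketch of that pattern, here is what such an extraction step might look like. The `call_llm` helper is a hypothetical stand-in for whatever chat-completion client you use; everything around it stays deterministic:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your chat-completion client."""
    raise NotImplementedError("wire this up to your LLM provider")

def process_claim(description: str) -> dict:
    # Embedded AI step: the model interprets content, it never picks the next step
    prompt = (
        "Extract claim_type, incident_date and estimated_amount from this claim "
        "description. Reply with JSON only.\n\n" + description
    )
    details = json.loads(call_llm(prompt))  # real use would validate and retry on malformed JSON
    # Back to deterministic rules: the workflow decides what happens next
    route = "adjuster_review" if details.get("estimated_amount", 0) > 5000 else "auto_process"
    return {"route": route, **details}
```

The threshold and field names are made up; the point is that the LLM’s output is forced into a structured shape the rest of the workflow can treat like any other data.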

Autonomy level

Low. The workflow as a whole is not autonomous — it still follows a set path that you built. However, the AI components have micro-level autonomy in how they interpret or generate content. The AI might, for example, choose how to phrase a summary or which category a support ticket falls into, but it won’t decide to introduce entirely new steps.

In these workflows, the surrounding structure follows what I call the Trigger-Flow-Action™ framework: a trigger initiates the process, the flow orchestrates the steps (including AI-powered ones) and predefined, conditional actions are taken based on the results.

Real-world examples

  • Email triage: Suppose your team gets dozens of emails daily. An embedded AI workflow could read each incoming email (using an AI step to understand the content), then categorise it as lead, support question or personal. Based on that, the workflow (with set rules) forwards the email to the right person or triggers the next action. Here, AI does the content understanding, but the routing rules are fixed (see the sketch after this list).
  • Content creation pipeline: Think of a content team generating product descriptions. A workflow could have steps for research, drafting, editing, and publishing. An LLM can be embedded at the drafting step to create a first draft from bullet points and maybe another LLM at an editing step to proofread or optimise for SEO. The process (research -> draft -> edit -> post) is preset, but AI is doing the heavy lifting in the creative parts.
  • Data enrichment: A sales team might use a workflow that takes a company name from a sales lead and calls an AI service to summarise contents of the website to enrich their database, then continues with storing that info. The overall task (creating a record in the CRM) is known, but AI is used to determine some of the missing pieces.
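
Here is a rough sketch of the email triage pattern. The `classify_email` helper is a hypothetical LLM-backed classifier; the key point is that the AI only picks a label, while the routing table stays under your control:

```python
ALLOWED_CATEGORIES = ["lead", "support", "personal"]

ROUTES = {
    "lead": "sales-team",       # hypothetical destinations
    "support": "helpdesk",
    "personal": "office-admin",
}

def classify_email(body: str) -> str:
    """Hypothetical embedded AI step: ask an LLM to pick exactly one of ALLOWED_CATEGORIES."""
    raise NotImplementedError("call your LLM here and constrain the answer to the allowed labels")

def triage(body: str) -> str:
    category = classify_email(body)
    # Fixed rules decide the routing; unknown labels fall back to a safe default
    return ROUTES.get(category, ROUTES["support"])
```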

Analogy

An embedded AI workflow is like a GPS-guided assembly line. You have a fixed route (the assembly line), but at one station, instead of a simple machine, you have a smart GPS or smart operator that can handle variations.

Imagine a package sorting system: most packages go through conveyors (rule-based), but at some checkpoint, an AI-powered camera scans and decides if an item is fragile or needs special handling. If it’s fragile, maybe a different path is taken. The system is largely fixed, but the AI “eye” gives it the ability to interpret something complex (like reading a handwritten label or recognising a product type) and adjust accordingly within the allowed options.

It won’t suddenly ship the package to a new destination on its own — it will choose one of the predefined bins based on what it sees. That’s how embedded AI adds intelligent flexibility to an otherwise scripted process.

Knowledge Assistants

Now it’s time to leave scripted workflows behind and move on to agentic ones. First, let’s clarify how Knowledge Assistants differ from Autonomous Agents, as the two are often confused. A Knowledge Assistant helps you find or understand information, while an Autonomous Agent performs tasks for you. The Knowledge Assistant is focused on answering questions and providing insights but doesn’t take independent actions beyond the conversation. The Autonomous Agent, by contrast, executes tasks in the background to achieve goals.

A Knowledge Assistant is an AI system designed to retrieve information and answer questions through natural language interaction. In my framework, Knowledge Assistants are strictly limited to “reads” — they can access and process information but never “write” or perform actions that modify systems or data. They typically use retrieval-augmented generation (RAG) and may incorporate multiple information sources, but they respond only within the conversation context.

Think of them as your super-smart research buddy who can find and explain things but won’t go off and change stuff in your systems.

When to use

Use a Knowledge Assistant when the primary need is to provide information or guidance to users. This is ideal for:

  • Answering questions: e.g. an internal HR assistant that employees can ask about company policies
  • Research and reference: e.g. a marketing assistant that can summarise website traffic statistics
  • Content drafting help: e.g. an assistant that helps compose emails or generates content drafts for review (as long as the drafts are returned in the conversational interface and not created in 3rd party systems)
  • Simple customer support: bots that answer product questions, but won’t process refunds or make changes to accounts
  • Lead nurturing: assistants that provide information to potential customers while building interest in your products, but won’t schedule a call with a sales representative

Input types

The input is typically unstructured natural language from the user. The assistant parses this input and may query various knowledge sources to formulate a response, but from the user’s perspective, they’re simply having a conversation.

The beauty here is that users don’t need to learn a special syntax or fill out structured forms — they can just ask questions naturally, the way they’d ask a human colleague.

Use of LLMs

Knowledge Assistants heavily rely on LLMs to understand queries and generate responses. They may answer based on general knowledge or retrieved information from specific sources.

The LLM acts as both the interpreter of the question and the composer of the answer, often working with a retrieval system to get the most relevant information.
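
A stripped-down retrieval-augmented sketch of that pattern might look like the following. The `search_documents` and `call_llm` helpers are hypothetical stand-ins for your search index and model client; note that the assistant only reads, it never writes:

```python
def search_documents(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical read-only retrieval over your knowledge base."""
    raise NotImplementedError("plug in your search index or vector store")

def call_llm(prompt: str) -> str:
    """Hypothetical chat-completion helper."""
    raise NotImplementedError("plug in your model client")

def answer_question(question: str) -> str:
    # Retrieve: the assistant chooses what to look up, but it only reads
    context = "\n\n".join(search_documents(question))
    # Generate: compose an answer grounded in the retrieved passages
    prompt = (
        "Answer the question using only the context below. "
        "If the context doesn't cover it, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```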

Autonomy level

Medium. While Knowledge Assistants don’t take actions beyond the conversation, they do have autonomy in how they retrieve and process information. They decide which queries to construct, which sources to prioritise when multiple are available, and whether additional clarification is needed before providing a final response.

This internal decision-making grants them a medium level of autonomy, even though their external impact is limited to information provision. They’re like a research assistant who decides which books to check and how to summarise them, but doesn’t make business decisions based on what they found.

Real-world examples

  • Internal Knowledge Base Assistant: Employees ask questions about company policies and the assistant retrieves answers from HR documents.
  • Customer Information Bot: Answers product questions but refers the customer to a human for making purchases or account changes.
  • Research Assistant: Helps analysts find and summarise information across multiple sources without taking actions based on that research.

Analogy

A Knowledge Assistant is like a research librarian. They can help you find information, explain concepts and even organise knowledge for you, but they won’t go to the bookstore to buy new books or reorganise the library shelves without explicit instruction.

They’re incredibly helpful when you need to quickly find and understand information, but they stay in their lane — they inform you, and then you decide what actions to take based on that information.

Autonomous Agents

Definition

An Autonomous Agent is the most independent member of our AI workflow family. It’s an AI system that can take goals and independently plan and execute actions to achieve them; any step-by-step instructions from humans are passed within the system message (prompt) as guidance rather than hard-coded into the workflow.

It can be fully autonomous or include human-in-the-loop validation for critical decisions. The key difference from other approaches is that an agent follows instructions as guidelines but ultimately decides which tools to use and in what sequence, adapting its approach based on intermediate results.
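
To give a feel for how different this is from a scripted flow, here is a deliberately stripped-down sketch of an agent loop. The tool registry, the `decide_next_action` helper and the step cap are all assumptions; a production agent would add guardrails, logging and human-in-the-loop checks:

```python
# Hypothetical tools the agent is allowed to use
TOOLS = {
    "search_web": lambda query: f"(search results for {query!r})",
    "draft_email": lambda text: f"(draft saved: {text[:40]}...)",
}

def decide_next_action(goal: str, history: list[dict]) -> dict:
    """Hypothetical: ask the LLM which tool to call next (or 'finish'), given the goal and results so far."""
    raise NotImplementedError("plug in your model client")

def run_agent(goal: str, max_steps: int = 10) -> list[dict]:
    history: list[dict] = []
    for _ in range(max_steps):            # hard cap so the agent cannot loop forever
        decision = decide_next_action(goal, history)
        if decision["action"] == "finish":
            break
        result = TOOLS[decision["action"]](decision["input"])  # execute the chosen tool
        history.append({"action": decision["action"], "result": result})
    return history
```

Notice that nothing in the loop fixes the order of actions: the model decides, which is exactly why the guardrails matter.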

When to use

Consider an Autonomous Agent when you need to delegate a complex task or project to AI, especially if the path to the solution isn’t straightforward or is too dynamic for a fixed workflow. This might include:

  • Multi-step problem solving: e.g. “Research our competitors and draft a report with opportunities”.
  • Automated decision-making: e.g. monitoring systems and implementing fixes based on changing conditions.
  • Tasks involving external tool use: e.g. finding leads, compiling data from multiple sources and drafting communications.
  • Dynamic environments: e.g. managing response to changing market conditions or customer behaviour patterns.

Use an Autonomous Agent when you would otherwise need a human to monitor and decide each next step in a process because the conditions can change unpredictably.

Input types

The input is often a goal or instruction, plus whatever environment data the agent can access. Agents often work with mixed data types and convert between structured and unstructured information as needed.

The beauty of agents is that you can give them high-level goals rather than detailed steps: "Find me the best flight options under $500" instead of spelling out each search step.

Use of LLMs

Most current Autonomous Agents use LLMs as the “brain” for reasoning and planning. The LLM takes the objective and current context to decide on actions, often updating its plan as it receives new information from executed actions.

Think of the LLM as the agent’s cognitive engine — it’s what allows the agent to understand goals, formulate plans, adjust to new information and decide what to do next.

Autonomy level

High. Autonomy is the defining characteristic of AI agents. They can operate without continuous human intervention, making decisions as circumstances change. This includes both fully autonomous systems and human-in-the-loop approaches where the agent recommends actions but waits for approval on critical steps.

Building Autonomous Agents requires significant security considerations since they use tools that “write” or modify systems. The ability to take actions makes proper safeguards essential, far beyond what’s needed for read-only assistants.

Real-world examples

  • Automated Research Agent: Analyses competitors, compiles findings and drafts reports with minimal guidance.
  • IT Support Agent: Monitors infrastructure, diagnoses issues and implements fixes, only escalating when necessary.
  • E-commerce Management Agent: Adjusts product pricing, inventory levels and marketing spend based on real-time performance data.

Analogy

An Autonomous Agent is like a skilled freelancer you hire for a project. You provide the objective and some guidelines and they determine how to accomplish it, making judgment calls along the way and only checking in when truly necessary.

You don’t need to micromanage them — they figure out the steps needed to achieve the goal, using whatever tools and approaches make sense. This makes them powerful but also means you need to trust their judgment (and build in appropriate guardrails).

Choosing the Right Approach: A Decision Guide

Knowing the differences is one thing; deciding which type of AI workflow fits your scenario is the real goal. Here’s a practical guide to help you choose.

Questions to identify the right approach

Ask yourself these questions about your use case:

  1. Is the task well-defined with a consistent procedure? If you can draw a clear flowchart of the steps and decisions needed and it rarely changes, a Rule-based workflow is likely sufficient.
  2. Are the inputs and criteria highly structured? If yes, rule-based is great. If you’re dealing with free-form text, images, or anything that doesn’t fit neatly into boxes, you’ll likely need Embedded AI for interpretation.
  3. Do you primarily need to retrieve knowledge or provide answers to people? If the goal is to answer user queries, guide them or generate content upon request, a Knowledge Assistant is appropriate.
  4. Does the solution require taking actions in the real world or across systems? If the actions can be arranged in a known sequence (even if some steps use AI), an Embedded AI workflow might do the job. If the actions needed might change or you need the system to choose what to do next, you might need an Autonomous Agent.
  5. How critical is precision vs. flexibility? Rule-based systems are precise, whereas LLM-powered assistants or agents might sometimes produce incorrect outputs. If a mistake is unacceptable, stay rule-based or put strict checks in place.
  6. Does the process involve sensitive operations? For payment processing, privacy-related tasks or anything where hallucination would create significant risk, favour rule-based approaches.
  7. Do you have the ability to monitor and refine the solution over time? Autonomous Agents require ongoing tuning and oversight. If you want a “set it and forget it” solution, lean towards simpler approaches.

Common mistakes to avoid

  1. Jumping to an Autonomous Agent too soon: Agents are shiny and exciting, but they’re harder to control and predict. If your problem could be solved with a more straightforward workflow, doing it with an agent might be overkill. It’s like hiring a creative chef when all you need is someone to follow a recipe!
  2. Overcomplicating a rule-based solution with AI: Sometimes people add AI “just because we have it”, even when rules would do. For basic arithmetic or well-defined logic, traditional code or rules are more reliable.
  3. Ignoring data structure and quality: If you feed garbage to an AI, you get garbage out. Each approach has its requirements: rule-based needs structured inputs — AI-based needs good context or training.
  4. Lack of human oversight: With autonomy comes responsibility. If you deploy an Autonomous Agent that can send emails or make purchases, put in checks or at least monitor its activity initially.

Conclusion

Before jumping into complex agent architectures, ask yourself: 

  • Could a simpler rule-based workflow or embedded AI approach solve this problem more reliably? 
  • Does this truly require the flexibility of an agent, or am I just attracted to the cutting-edge appeal?

By understanding the full spectrum of options in this framework, you’ll make wiser investments in AI automation — ones that deliver real value rather than unnecessary complexity. In a world obsessed with AI agents, sometimes the most innovative choice is deliberately choosing a simpler path.

Want to discuss which approach is right for your specific challenge? Let’s talk.

Bartosz Mróz
Founder @ Codeshift

Book My Free Discovery Call

Let's have a straightforward conversation about your current AI setup and what's actually possible to automate in your specific situation. No sales pressure, no generic demos—just honest answers about what will and won't work for your team.
