Intelligent Automation: What Happens When RPA Gets a Brain

I spent two years watching an RPA bot process invoices at a logistics company. It clicked through SAP screens, copied numbers into spreadsheets, and filed documents into SharePoint folders. It worked great until someone updated the SAP interface. Then it broke, and it kept breaking every few months, and eventually the team spent more time babysitting the bot than they'd spent doing the work manually.
That's the dirty secret of RPA. The pitch is "automate repetitive work." The reality is you've built a brittle screen-scraper that assumes the world never changes.
What RPA actually is (and isn't)
RPA -- robotic process automation -- records and replays human actions on a computer. Click here, type this, copy that, paste it there. Think of it as a macro on steroids. The bot doesn't understand what it's doing. It follows a script. If the button moves three pixels to the left, the script fails.
This works fine for stable, rule-based processes. Payroll calculations where the inputs are always in the same format. Data entry between two systems that haven't changed their UI since 2019. Report generation from a fixed template.
It falls apart the moment anything requires judgment. An invoice comes in as a scanned PDF instead of a structured file. A customer name is spelled differently than what's in the database. An exception doesn't match any of the five scenarios the developer anticipated. The bot stops, throws an error, and someone has to fix it manually.
Bolting AI onto the front
This is where "intelligent automation" comes in. The idea is straightforward: keep RPA as the execution layer (it's still good at clicking through screens), but add AI components that handle the messy parts.
The typical stack looks something like this:
OCR and document understanding pull structured data from unstructured documents. Not the OCR from 2015 that gave you garbled text -- modern document AI models from Google, AWS, or Azure that understand layouts, tables, and handwriting. They can read an invoice and extract the vendor name, line items, amounts, and due date regardless of the format.
NLP for classification and routing. Emails come in, and instead of routing based on keywords (which works until someone writes "I'd like to cancel" without using the word "cancellation"), a language model classifies intent and routes accordingly.
Decision engines that handle the gray areas. When the invoice amount doesn't match the PO by 3%, is that a rounding error or a discrepancy? A rules engine handles the obvious cases. A trained model handles the ones that fall between rules.
RPA as the last mile. Once the AI components have extracted, classified, and decided, the bot does what bots are good at: entering the result into whatever legacy system doesn't have an API.
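The decision-engine step above -- rules for the obvious cases, escalation for the gray areas -- can be sketched in a few lines. This is a minimal illustration, not a real product API; the `Invoice` shape, the function names, and the 10% review band are made-up assumptions (only the 3% tolerance comes from the example above):

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    total: float       # amount on the invoice
    po_amount: float   # amount on the matching purchase order

def decide(invoice: Invoice, tolerance: float = 0.03) -> str:
    """Rules engine for the obvious cases; everything else is escalated."""
    if invoice.po_amount == 0:
        return "review"  # no PO to match against -- always a human call
    diff = abs(invoice.total - invoice.po_amount) / invoice.po_amount
    if diff <= tolerance:
        return "approve"  # within the 3% tolerance: treat as rounding
    if diff <= 0.10:
        return "review"   # gray area: route to a trained model or a human
    return "reject"       # clear discrepancy

print(decide(Invoice("Acme", 1020.0, 1000.0)))  # -> approve (2% difference)
print(decide(Invoice("Acme", 1070.0, 1000.0)))  # -> review (7%)
```

In a real pipeline, the "review" branch is where a trained model would sit, scoring the cases that fall between the rules before anything reaches a person.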
Where this actually works
Invoice processing is the poster child, and for good reason. A mid-size company gets invoices in dozens of formats -- PDF, email, paper scan, EDI. An intelligent automation pipeline can process 80-90% of them without human intervention. The remaining 10-20% get flagged for review, which is still a massive reduction in manual work.
Insurance claims follow a similar pattern. The claim comes in as a mix of forms, photos, and free-text descriptions. Document AI extracts the relevant fields. A classification model determines the claim type. A decision model checks it against policy rules and flags anomalies. The RPA bot enters the approved claim into the system of record.
Customer onboarding is another one. ID verification, KYC checks, account creation across multiple systems -- lots of steps that are mostly automated but need AI for the document verification and risk assessment pieces.
The common thread: high volume, semi-structured inputs, multiple systems involved, and a tolerance for some error rate as long as exceptions get caught.
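That "tolerance for some error rate as long as exceptions get caught" usually comes down to a confidence threshold: items the model is sure about go straight through, the rest land in a human review queue. A toy sketch, where the item names, confidence scores, and 0.85 threshold are all made up for illustration:

```python
def route(batch, threshold=0.85):
    """Split a batch into auto-processed items and flagged-for-review items."""
    auto, review = [], []
    for item, confidence in batch:
        (auto if confidence >= threshold else review).append(item)
    return auto, review

batch = [("inv-1", 0.97), ("inv-2", 0.91), ("inv-3", 0.62), ("inv-4", 0.88)]
auto, review = route(batch)
print(auto)    # -> ['inv-1', 'inv-2', 'inv-4']
print(review)  # -> ['inv-3']
```

Tuning that threshold is the whole game: set it too low and bad data flows into downstream systems; set it too high and the human queue swallows the savings.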
Why half of RPA projects fail
McKinsey, Forrester, and basically every consultancy have reported that 30-50% of RPA implementations don't deliver the expected ROI. I've seen this firsthand, and the reasons are pretty consistent.
Automating a bad process. If the manual process has fifteen unnecessary steps, automating all fifteen doesn't help. You've made the waste faster. The boring answer is that you need to fix the process before you automate it, but nobody wants to hear that because process redesign is harder than buying software.
Underestimating maintenance. Every UI change, every system update, every new exception type requires bot maintenance. Companies budget for the build and forget about the ongoing care. A year in, they're running a small development shop just to keep the bots alive.
Picking the wrong processes. RPA vendors love to count "bots deployed" as a success metric. So companies automate everything they can, including processes that run twice a month and take ten minutes. The automation costs more than the labor it replaced.
No exception handling strategy. The bot handles the happy path. Everything else goes to a human via email, which means someone is now doing the hard version of the original job while also managing a queue of bot failures.
Intelligent automation changes the math on some of these. AI-powered bots are more resilient to input variation, which reduces maintenance. They handle exceptions better, which reduces the human fallback load. But they don't fix the fundamental problem of automating a process that shouldn't exist in its current form.
Process mining: let the data tell you what to automate
One of the more useful developments is process mining -- using system logs to reconstruct how work actually flows through an organization. Tools like Celonis, UiPath Process Mining, and Microsoft's process advisor analyze event logs from ERP systems, CRM tools, and ticketing systems to build a map of what actually happens versus what the process documentation says happens.
The gap between those two things is usually enormous. You discover that 40% of purchase orders go through an unofficial approval step that nobody documented. Or that the average invoice touches seven people when the official process says three. Or that a particular exception type accounts for 60% of the processing time.
This is where AI earns its keep. Instead of a consultant interviewing people and drawing flowcharts (which capture what people think they do, not what they actually do), you get data-driven analysis of real process execution. Then you can make an informed decision about what to automate and what to redesign.
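The core of this analysis is simpler than the tooling suggests: group events by case, order them by timestamp, and count how often each path actually occurs. A toy version with made-up purchase-order data (the `(case_id, activity, timestamp)` shape loosely mirrors the event logs these tools ingest):

```python
from collections import Counter, defaultdict

# Toy event log: (case_id, activity, timestamp). Data is invented.
event_log = [
    ("PO-1", "create", 1), ("PO-1", "approve", 2), ("PO-1", "pay", 3),
    ("PO-2", "create", 1), ("PO-2", "approve", 2), ("PO-2", "rework", 3),
    ("PO-2", "approve", 4), ("PO-2", "pay", 5),
    ("PO-3", "create", 1), ("PO-3", "approve", 2), ("PO-3", "pay", 3),
]

# Group events by case, then order each case's events by timestamp.
traces = defaultdict(list)
for case_id, activity, ts in event_log:
    traces[case_id].append((ts, activity))

# Count how many cases followed each distinct path (a "variant").
variants = Counter(
    tuple(act for _, act in sorted(events)) for events in traces.values()
)

for variant, count in variants.most_common():
    print(count, " -> ".join(variant))
```

Here the documented happy path (create -> approve -> pay) covers two of three cases; the rework loop in PO-2 is exactly the kind of undocumented step the interview-and-flowchart approach misses.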
Build vs. buy
The enterprise RPA market is dominated by UiPath, Automation Anywhere, and Microsoft Power Automate. All three have bolted AI capabilities onto their platforms -- document understanding, conversational interfaces, AI-assisted bot building.
On the other end, some teams are building custom automation with LLMs. Instead of an RPA bot that clicks through a UI, they write an agent that calls APIs directly, uses a language model for decision-making, and handles exceptions through conversation rather than error codes. This approach skips the screen-scraping layer entirely, which eliminates the biggest source of fragility.
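The shape of that agent pattern looks roughly like this. Everything here is a hypothetical stand-in: the two API functions fake what would be real HTTP calls, and `choose_action` is a stub where a language model prompted with the goal and current state would actually sit:

```python
def fetch_invoice(invoice_id: str) -> dict:
    # Stand-in for a real API call (e.g. GET /invoices/{id}); data is fake.
    return {"id": invoice_id, "total": 1020.0, "po_amount": 1000.0}

def post_to_erp(invoice: dict) -> str:
    # Stand-in for a real API call (e.g. POST to the ERP system).
    return f"posted {invoice['id']}"

def choose_action(state: dict) -> str:
    # Stub for the LLM decision step: in a real agent, this is a model call.
    diff = abs(state["total"] - state["po_amount"]) / state["po_amount"]
    return "post" if diff <= 0.03 else "escalate"

def run_agent(invoice_id: str) -> str:
    """Fetch data via API, let the model decide, act -- no UI involved."""
    invoice = fetch_invoice(invoice_id)
    if choose_action(invoice) == "post":
        return post_to_erp(invoice)
    return f"escalated {invoice_id} for human review"

print(run_agent("INV-42"))  # -> posted INV-42
```

Because every step is an API call rather than a simulated click, a UI redesign in the source system changes nothing here -- which is the fragility argument in a nutshell.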
The tradeoff is predictable. The platforms give you visual bot builders, governance dashboards, audit trails, and enterprise support. Custom LLM-based solutions give you flexibility and fewer moving parts but require real engineering effort and don't come with compliance certifications out of the box.
For most large enterprises, the answer is both. Platform RPA for the stable, high-volume, compliance-sensitive processes. Custom AI automation for the newer, less structured workflows where the platform's limitations get in the way.
What happens to the people
I'm not going to pretend this isn't a real concern. Automation does eliminate jobs. The data entry clerks, the document processors, the people who manually route emails -- those roles shrink when automation works as intended.
The standard corporate line is "we retrain people for higher-value work." Sometimes that's true. I've seen invoice processing teams transition into exception management and vendor relationship roles, which are genuinely more interesting jobs. But I've also seen companies lay off the team six months after deployment and call it efficiency.
The honest answer is that it depends on the company and the leadership. The technology doesn't decide what happens to people. The people who deploy it do.
What I will say is that the jobs most at risk are the ones that were already pretty miserable. Copying data between systems eight hours a day isn't a career anyone aspires to. If automation can absorb that work and the organization reinvests in its people, everyone's better off. That's a big "if," and I don't think technologists should pretend it's automatic.
Where this is going
The line between RPA and AI agents is blurring fast. Current RPA bots follow scripts. Current AI agents follow goals. The next generation of automation will probably look less like "bot clicks through screens" and more like "agent figures out how to accomplish a business outcome using whatever tools are available."
That shift makes process-specific bots less relevant and general-purpose agents more relevant. But it also raises the trust and control questions I've written about before -- how much autonomy do you give an automated system that's making business decisions?
For now, the practical advice is boring: fix your processes first, pick the right things to automate, budget for ongoing maintenance, and treat the people affected with respect. The AI capabilities are real and getting better fast. The organizational challenges haven't changed much in decades.


