I’ve had to rewrite this post a few times because things are moving so fast in this space, and my frustration with bad product marketing around Agents keeps making me want to throw my laptop out the window. This post is the first of two parts: a harsh, realistic take on what’s been happening. The next post will take an optimistic look at what’s possible, thanks to a ton of open-source activity keeping this field a fair game for most people.
The “Agent” Problem
This is going to start with a pretty basic hot take: “Agent” is a dumb name. I don’t think anyone asked for a name like this, and it doesn’t communicate what they are in any meaningful way to the user. It’s about as descriptive as “AutoPilot” was when Tesla first launched it (not to be confused with “Full Self-Driving Mode,” which isn’t fully self-driving either). We’re essentially shoving this concept down people’s throats before it’s even fully ready.
I’ll spare you the rant about how a once-useful-in-moderation culture of “shipping it fast” in the SF Bay Area has led to a ton of terrible externalities for paying customers. Let’s focus on what an Agent is supposed to be.
What the hell is Agentic AI?
Agentic AI is supposed to describe an autonomous system that can make decisions and perform tasks without human intervention. Seems simple enough. So, what do we have out in the market these days?
If I had written this post eight months ago, I’d have said...not that much in terms of agents or their usefulness. Barely anyone knew what MCP (the Model Context Protocol) was, let alone what it could be integrated with to create an agent.
If I had written this post at the beginning of 2025, I’d have some helpful things to write about, but honestly, it would have been pretty early days. Maybe I would have referenced one of 50 Satya Nadella interviews to show where a possible future was.
Well... it’s early April 2025, and almost every LinkedIn post, half my algorithmic Instagram feed, and every other Chad-bro podcast would have me believe that not only is the Agentic AI future here, but apparently I’m quickly lagging behind if I don’t use Agents immediately! Apparently it’s now possible to run your entire business using agents managing other agents to do all the “boring stuff of running the company.” Seriously, this link above is the epitome of someone cheering for late-stage capitalism in the worst way.
I suppose it’s too bad you can’t have Customer Agents who can just give you money in just as deterministic a manner as Worker Agents. It’s only a matter of time, I suppose?
The Current Reality
Here are a few things actually possible right now:
Local Chain-of-Operations: Zapier and IFTTT started this trend a while ago for work and personal use via cloud services. Now we’re definitely seeing potential for non-engineers to build sophisticated automated workflows. AI transcription services like Otter or Abridge are everywhere, but with security and privacy concerns over where a service is hosted, it’s a hell of a lot safer to record locally on a computer when everyone is in a room collaborating. The automation isn’t fully there yet, but as soon as you save a recording, you can now use Aiko to kick off a workflow on your computer that transcribes the audio and turns the transcription into a distilled Notion or Confluence doc, one that ignores the less-official parts of the conversation and doesn’t track the linguistic profile of employees. This was historically a responsibility of Scrum Masters and Project Managers, but now it’s a complex responsibility that can be largely offloaded.
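To make that pipeline concrete, here’s a minimal sketch of just the distillation step. Everything in it is invented for illustration: the transcription would actually come from a local tool like Aiko, and the small-talk regex below is a toy stand-in for whatever model decides what counts as “less official.” Note how it also drops speaker names, so nothing downstream can profile who said what.

```python
import re

# Toy heuristic standing in for a real "is this small talk?" model.
SMALL_TALK = re.compile(r"\b(weekend|lunch|weather|haha)\b", re.IGNORECASE)

def distill_transcript(transcript: str, title: str) -> str:
    """Keep decision/action lines, drop casual chatter, and strip
    speaker names so the published doc can't profile employees."""
    kept = []
    for line in transcript.splitlines():
        if ":" not in line:
            continue
        _speaker, text = line.split(":", 1)  # drop who said it
        text = text.strip()
        if not text or SMALL_TALK.search(text):
            continue
        kept.append(f"- {text}")
    return f"# {title}\n\n" + "\n".join(kept)

notes = distill_transcript(
    "Ana: How was the weekend?\n"
    "Ben: We agreed to ship the beta on Friday.\n"
    "Ana: Action item: Ben drafts the release notes.",
    "Sprint Sync",
)
print(notes)
```

The interesting design choice isn’t the filtering, it’s what the function refuses to keep: the speaker column never reaches the output.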
Knowledge Management Tools Offering the “Easy Button”: So many knowledge management providers (like Notion or Confluence) have their own “AI” offering that (with your permission) is more than happy to connect with your work messaging apps and other sources of information to give you more helpful context than any ordinary search bar could in the days of yore. In this situation, you as a customer have no idea that there may be “Agents” (or, more likely, pub/sub configurations) in the background handling all this indexing, RAG-ing, and re-tuning for you. You just get the benefit of a much more substantive answer to your question than before.
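If you’re curious what the “indexing and RAG-ing” half actually does behind that search bar, the retrieval part is conceptually simple. This sketch assumes nothing about any vendor’s actual stack: real offerings use embeddings and vector stores, whereas this toy just ranks documents by term overlap with the question.

```python
from collections import Counter

def tokenize(text: str) -> Counter:
    # Crude bag-of-words; real systems embed text into vectors instead.
    return Counter(text.lower().split())

class TinyIndex:
    def __init__(self):
        self.docs = {}  # doc_id -> token counts

    def add(self, doc_id: str, text: str) -> None:
        self.docs[doc_id] = tokenize(text)

    def search(self, query: str, k: int = 2):
        q = tokenize(query)
        # Score = number of shared word occurrences (Counter intersection).
        scored = [
            (sum((q & toks).values()), doc_id)
            for doc_id, toks in self.docs.items()
        ]
        scored.sort(reverse=True)
        return [doc_id for score, doc_id in scored[:k] if score > 0]

idx = TinyIndex()
idx.add("onboarding", "new hire onboarding checklist and laptop setup")
idx.add("oncall", "weekend oncall rotation and escalation policy")
best = idx.search("what is the oncall escalation policy")
print(best)  # → ['oncall']
```

The retrieved docs would then be stuffed into the model’s context to generate the “substantive answer,” which is all RAG really is.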
Automated Review/Comment Moderation: For SMBs, you can have an automated configuration that monitors social media, Maps, or Yelp listing comments and does the first level of interactions with customers before bringing only the most important ones your way. EmbedSocial or Reviews.io are examples of such offerings, but honestly, as someone with a small business, we don’t actually use these to fully automate anything. We stop the automation at drafts-to-review before a human actually presses Send or Publish. It isn’t really about trust or hallucinations (okay, maybe a little bit); it’s more about the importance of handling relationships with people. Automation here does kill the last vestiges of authenticity available on the internet, and our customers are relatively quick to pick up on it. We’re constantly editing the final review comment before we post to make sure the right intention is communicated.
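That “stop at drafts” pattern can be sketched as a queue where automation writes the first pass and a human always holds the publish button. The draft_reply stub below stands in for whatever service generates the draft; none of this reflects any particular vendor’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    review: str
    text: str
    approved: bool = False

def draft_reply(review: str) -> Draft:
    # Stand-in for an LLM or vendor call that produces the first pass.
    return Draft(review, "Thanks for the feedback! We'd love to make this right.")

class ModerationQueue:
    """Automation fills `pending`; only a human moves drafts to `published`."""

    def __init__(self):
        self.pending: list[Draft] = []
        self.published: list[Draft] = []

    def ingest(self, review: str) -> None:
        self.pending.append(draft_reply(review))

    def approve(self, i: int, edited_text: Optional[str] = None) -> None:
        draft = self.pending.pop(i)
        if edited_text is not None:
            draft.text = edited_text  # the human keeps the final word
        draft.approved = True
        self.published.append(draft)

q = ModerationQueue()
q.ingest("Food was cold and service was slow.")
q.approve(0, "Sorry we missed the mark on Friday. Come back and dinner's on us.")
```

There is deliberately no code path that publishes without `approve` being called, which is the entire point of the design.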
It’s all just fu#$king Workflow Automation with a more hip name.
There are a few more concrete examples, but this is all still quite early days.
The Danger of Removing Agency: A Recipe for Disaster
It’s all just fu#$king Workflow Automation with a more hip name. The difference this time is that way more people are going to be able to configure way more of their workflows. They’ll be able to treat more of their complex work life like configuring a go-to-bed routine in their Apple Shortcuts app. (Also, let’s be honest, most iPhone users haven’t even touched the Shortcuts app. If you have, nice!)
I want to be clear: what this is leading to is a good thing—but not for the reasons that 10x productivity podcasters are telling you about or the Equities Analyst at Goldman Sachs is thinking about. This isn’t about reducing the number of expensive workers or only focusing on the “fun” parts of the business. We’re going to see fewer people filing tedious reports or quietly spending late nights doing manual deep research for a big initiative so they can get their evenings back. People will likely become more educated about more topics a lot faster than ever before. I hope it brings the virtue of living more life doing literally anything other than “working long hours.”
That being said, decisions related to bookkeeping, back office work, task allocation, marketing campaigns, or software architecture simply aren’t going to be handed over to AI Agents, regardless of the fetish many have right now. The whole problem I have with the label “agent” is the concept of agency itself. Liability lawsuits aside, no one is going to allow the most important decisions to be made by an AI agent, and likely, only some of the less important decisions will be handed over.
I wish I were being alarmist. Really, I hate this feeling of having a tin-foil hat on my head, but even handing over the less important decisions can be a recipe for disaster if we’re not careful. As technology becomes more integrated into our workflows, it’s tempting to cede control to automated systems, thinking they’ll be less biased and more impartial. However, research suggests that removing human agency, even in seemingly minor decisions, can have negative consequences:
Erosion of Skills: Relying too heavily on automated systems can lead to a decline in critical thinking and problem-solving skills. When we outsource our decision-making to machines, we risk becoming overly reliant on those systems and losing our ability to think for ourselves.
Loss of Context and Nuance: These “agents” often lack the context and nuance to make truly informed decisions. They may be able to process data and identify patterns, but they can’t understand the subtle human factors that are often crucial to making the right choice.
Unintended Consequences: Even well-designed automated systems can have unintended consequences. By removing human oversight, we increase the risk of unforeseen problems and negative outcomes. Air Canada’s support chatbot promised a customer a bereavement discount the airline never intended to offer, and a tribunal held the airline to it. Now try being the Customer Support rep having to deal with that issue on the phone after the fact.
So what will Agents be used for? Grunt work. In fact, if there were a job at risk from Agentic Automation, it’s probably that Equities Analyst gig at Goldman I mentioned earlier, whose sole job is repetitive spreadsheet math and financial analysis. On the operator side, an AI Agent could handle asynchronous communication and coordinate different company functions. But do you really think any leadership team is going to cancel their executive offsite because of it? No! They’re going to go on that retreat...and they should! Even though it means being away from their families for a week, it’s likely going to be fun, and any chance people have of building a more cooperative relationship with others should be explored. No AI Agent is, by default, going to be privy to the good conversations at that offsite, and I want to discuss that irony for a little bit.
This whole thing is going to have to be a two-way street. Each of those offsite conversations about AI strategy is driven by a premature decision to use AI to cut costs first; decisions about using these new technologies to generate upside appear to be secondary. I’ve been in enough board-level discussions over the last year, at a handful of companies and non-profit institutions, to know this is a very real discussion. We can’t ask AI to handle the undesirable work, then turn around and keep fewer of our own staff to manage more of the AI deployments. We’re already seeing more code shipped with security issues the more engineers lean on AI assistants.
Yes, I can buy the argument that all of these “agents” will be a step function better in a few short months than they are today, not just incrementally better. But the world is literally filled with products, services, and companies being brought to their knees based on edge case scenarios or what were once considered low-probability events. The more we let AI “agents” opaquely automate tasks, the more likely we are to face unexpected and potentially catastrophic consequences throughout the organization. If at that point we have less human capacity to go and solve the issue, almost every company that over-optimized on labor cost will actually end up paying significantly more to correct the issue, let alone continue staying in business for years to come.
Counterarguments: The Allure of Automation
Of course, not everyone agrees that Agentic AI is just workflow automation. As I aired this rant with friends and colleagues over the last few weeks, I heard some compelling arguments in favor of its potential:
Increased Efficiency and Productivity: Agentic AI could automate many time-consuming and repetitive tasks, freeing up human workers to focus on more creative and strategic activities. (This assumes you’re retaining a good chunk of that human capacity, though.)
Improved Decision-Making: By analyzing vast amounts of data, Agentic AI could help us make better, more informed decisions with fewer blind spots. (This one feels right? I imagine there will be fewer excuses for not knowing enough when making a strategic decision. Research is way easier to conduct now.)
New Possibilities: Agentic AI could enable us to do things that were previously impossible, opening up new opportunities for innovation and growth.
However, even if these arguments have merit, it’s important to approach “Agentic AI” with a healthy dose of skepticism and a clear understanding of its limitations.
So...that’s why I think “Agent” is a dumb name for what is otherwise just another flavor of “workflow automation.” And yes, I get it. It’s hard to raise money on “workflow automation.” It’s hard to promote your LinkedIn profile by having “workflow automation” in your title instead of “Generative AI @ {techCompany}.” That Equities Analyst asking questions on earnings calls wants to hear a somewhat believable case that your company is investing in AI capabilities to streamline cost centers, and saying “workflow automation” isn’t gonna be enough for him anymore. He needs to hear about that delicious flavor of the week: Agentic AI. Wishful thinking, but I really hope we stop playing this game.
In the next post, I’ll explore the potential for open-source activity to create a better future, one where this actually helps individuals and small businesses rather than just reinforcing the power of larger institutions. These tools are finally reducing barriers to entry for smaller players to compete with everyone from the slightly larger players to the really large ones. I’m envisioning independent restaurants being able to offer a much better service to their customers by having the owner/operators manage their digital life a lot better, something that previously required a larger restaurant group or really expensive SaaS subscriptions. Similarly, I think a boutique consulting firm is going to be able to take on more ambitious projects that would previously have been possible only for larger systems integrators like Accenture or EY. That’s a world I want to live in, where larger players lose more of their pricing power and means to execute, giving way to smaller players to compete (obviously, not by their choice 😉).