Agent Frameworks Are So Much More Than For Loops

AI
Agents
Software Architecture
A balanced perspective on the recent debate about agent frameworks vs. simple while loops
Author

Aman Arora

Published

September 8, 2025

Hello! I’m a full-time Lead AI Engineer. This blog reflects my personal opinions, not my company’s. In the past year, I’ve been responsible for multiple production agents - some successful, some not so much - and every time I’ve hit problems at scale.

Amidst all the clickbait and noise, there’s a debate worth having - do you actually need agent frameworks, or are they just overengineered abstractions?

But before diving into the debate, I want to talk about a concept that’s reshaping how you might think about development - “vibe coding”.

The term was coined by Andrej Karpathy in a tweet.

So, why bring up vibe coding in a discussion about agent frameworks? Because, in my opinion, where you sit on the coding spectrum fundamentally shapes how you view this debate on agent frameworks. I believe one approach is not greater or better than the other. AI is a means to an end, not an end in itself.

Tip🎯 Find Your Position on the Coding Spectrum

Before we dive deeper, take a moment to consider where you sit on the coding philosophy spectrum - from hand-writing and reviewing every line to vibe coding entire features - and which approach resonates with your experience and mindset.

I’ve had the opportunity to work with brilliant minds on both extremes of this spectrum. And what follows is my informed opinion on the fundamental question - “do you need agent frameworks?”.

1 Agent Frameworks: To Use or Not to Use

Whether you need a framework or not really depends on your needs and background.

You’re navigating changing times, much like the industrial revolution - but so much more impactful and with unprecedented economic potential. This has attracted a lot of attention from people in various industries - not always coming from a traditional coding background.

And in these changing times, when the industry has not even settled on a clear definition of an agent, it is hard to make a definitive call on whether frameworks are necessary.

As with the coding spectrum above, there is a similarly blurry spectrum of agent adoption and use cases. Some of you have been using AI agents in your daily lives and routines; others simply want “something agentic” in your company because there is a push from leadership to adopt AI.

Where you sit on that spectrum, and what is driving your “agentic needs”, largely determines whether you should go ahead with a framework or not.

Frameworks have design decisions baked into them - which you may not agree with. At my company, we want complete control over the code we produce, and customize it based on our needs.

On the other hand, if you’re not an AI-first company (be honest here) and are just starting to explore an agentic use case, it may be worth starting with a framework until you build the expertise in-house. Starting from scratch, you might spend more time getting the formats right than testing your product idea. You could argue that it’s fine to start with simple API calls and a while loop, but I believe that path carries a higher chance of failure and frustration, whereas within a framework you’re more protected.
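By “getting the formats right” I mean boilerplate like tool schemas. Here is a rough sketch of the JSON shape that OpenAI-style function calling expects for a single hypothetical `get_weather` tool - exactly the kind of thing a framework writes for you:

```python
import json

# Hand-written tool schema in the OpenAI function-calling shape.
# The `get_weather` tool is invented for this example.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

Multiply this by every tool, every provider’s slightly different schema, and every model upgrade, and the appeal of a framework for newcomers becomes clearer.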

Having said that, let’s look at some different perspectives currently floating around in the industry.

Now, Matt is not wrong when he says agents are raw LLM APIs in a while loop. In essence, yes, an agent is simply a number of API calls chained together, where you rely on the LLM to make the right decisions - based on the system prompt and tool descriptions - to choose the right tool and call it with the correct arguments. The agentic part is the LLM’s decision-making, which separates it from a prescribed path. Based on an observation (a tool’s output), the LLM can decide to alter its path to achieve the goal defined by the user. This reasoning-and-acting pattern is formalized in the ReAct framework (Yao et al. 2023).
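That while-loop framing can be sketched in a few lines. This is a toy illustration, not any real provider’s API: `fake_llm` stands in for a chat-completions call with tool use, and the single `get_weather` tool is invented for the example.

```python
# Minimal agent loop: an LLM picks tools until it decides it is done.

def get_weather(city: str) -> str:
    """A toy tool the 'model' can call."""
    return f"It is sunny in {city}."

TOOLS = {"get_weather": get_weather}

def fake_llm(messages):
    """Stub model: request the tool once, then answer from its output."""
    last = messages[-1]
    if last["role"] == "tool":
        return {"role": "assistant", "content": f"Answer: {last['content']}"}
    return {"role": "assistant",
            "tool_call": {"name": "get_weather", "args": {"city": "Sydney"}}}

def run_agent(user_msg: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):                 # the "while loop"
        reply = fake_llm(messages)             # reason
        messages.append(reply)
        call = reply.get("tool_call")
        if call is None:                       # no tool requested: we're done
            return reply["content"]
        observation = TOOLS[call["name"]](**call["args"])          # act
        messages.append({"role": "tool", "content": observation})  # observe
    raise RuntimeError("agent exceeded max_steps")

print(run_agent("What's the weather in Sydney?"))
# → Answer: It is sunny in Sydney.
```

Swap `fake_llm` for a real API call and `TOOLS` for real functions, and you have the skeleton Matt is describing - the interesting (and hard) parts are everything around it.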

What happened after? Here is a completely different perspective, from someone who is building an AI framework.

This got some traction on Twitter, but it didn’t get my attention until Jeremy Howard responded.

Note: A Personal Detour

As someone who started my data science journey with fastai, I deeply value and respect Jeremy’s work and opinions. So it was natural for me to reflect on his view regarding not overcomplicating simple agent loops.

His view on “rather than using complex frameworks, use simple small pieces that make the details accessible and understandable” deeply resonates with me.

Production code should be simple and to the point, steering away from frameworks as much as possible. It should be transparent and easy to reason about.

2 Finding the Middle Ground

After all this debate and reflection, I believe thinking of AI agents as either simple loops or complex frameworks represents two extremes of a spectrum that you navigate based on context.

I am most aligned with swyx’s view here.

3 So Where Does This Leave Us?

As practitioners navigating this rapidly evolving landscape, you need to be pragmatic. My approach? Start with the simplest solution that could possibly work. If that’s a while loop, great. If you need a framework to move fast and test ideas, that’s fine too. The key is being intentional about your choices and understanding the trade-offs.

Let me share how I navigate this debate in my daily work.

The truth is, for enterprise production systems, you want complete control. No frameworks. Everything built from scratch using raw API calls - OpenAI’s, Anthropic’s, or whichever model provider you need. This gives you fine-grained control over error handling, retry logic, streaming, and all the intricate details that matter when your agents are serving real users. No black boxes, no mysterious abstractions - just clean, transparent code that does exactly what you need.
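Retry logic is a good example of the code you end up owning. Here is a minimal sketch of retry with exponential backoff and jitter, using only the standard library; `TransientError` is a placeholder for provider-specific rate-limit and timeout exceptions.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a provider's rate-limit / timeout errors."""

def with_retries(fn, max_attempts=5, base_delay=0.5):
    """Call fn(), retrying transient failures with exponential backoff + jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Usage: a call that fails twice, then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientError("rate limited")
    return "ok"

print(with_retries(flaky_call, base_delay=0.01))  # → ok
```

In production you would layer streaming, cancellation, token accounting, and observability on top of this - which is precisely the code you want to be able to read and modify.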

But for personal agents and experiments? That’s a different story. I reach for lean, minimal frameworks like smolagents (Hugging Face 2025) or openai-agents-python (OpenAI 2025). These lightweight tools give me just enough structure to prototype quickly without the bloat of heavy frameworks. They’re perfect for experiments, personal automation, and testing new ideas before implementing them properly in production.

4 Is vibe coding productive?

Hell, yeah!

Depending on the task I am working on, I confidently shift gears. I am on multiple sides of the “Coding Spectrum” - sometimes running as many as three Claude Code sessions in parallel, each working on a different pull request targeting our development branch. This workflow was inspired by Anthropic’s documentation on running parallel Claude Code sessions using git worktrees (Anthropic 2025). Features that used to take days now take hours!

BUT - and this is crucial - you need to actively steer Claude Code in the right direction to get results. I can’t just say “Add streaming support to my Agent to stream tool calls and messages to user” and then forget about it, have some breakfast and come back. That simply doesn’t work!

Often what works is this:

  1. Start Claude Code in plan mode.
  2. Prompt: “Help me plan adding a new feature that allows me to stream tool calls and responses to the user in the frontend as they are executed. Look at agent.py - the Agent.run method currently returns the complete list of messages to the user once the agent has finished its task. Look at smolagents as a reference for how other frameworks handle streaming.”
  3. Claude comes back with a plan.
  4. I usually make multiple edits to the plan, then tell Claude to implement it.
  5. Claude adds inline comments everywhere.
  6. Press Escape to pause: “I asked you not to add verbose inline comments. Please remove them from your code. You need not communicate with me via comments.”
  7. Review Claude’s code and make manual edits.
  8. Finally, merge to development.
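The streaming feature described in that prompt can be sketched as a Python generator: a hypothetical `Agent.run` that yields events as they occur instead of returning the full message list at the end. All names here (`StreamEvent`, the event kinds, the tool strings) are illustrative, not from any real framework.

```python
from dataclasses import dataclass

@dataclass
class StreamEvent:
    kind: str      # "message", "tool_call", or "tool_result"
    payload: str

class Agent:
    def run(self, task: str):
        """Yield events as the agent works, instead of one final list."""
        yield StreamEvent("message", f"Planning: {task}")
        yield StreamEvent("tool_call", "search(query='streaming APIs')")
        yield StreamEvent("tool_result", "3 documents found")
        yield StreamEvent("message", "Done: summarized 3 documents")

# The frontend can now render each event as it arrives:
for event in Agent().run("summarize docs"):
    print(f"[{event.kind}] {event.payload}")
```

The design change is small but structural: callers consume an iterator of events rather than waiting on a return value, which is why it is worth planning (and reviewing) before letting Claude implement it.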

As you can see, the process is still very manual. What it buys me, though, is that while Claude is busy implementing, I can go fix another bug or read up on API docs to further expand my knowledge. As of today, terminal agents are very good at following instructions - and that’s where the boundary is. As a vibe coder (which I too am when it comes to frontend), I am overly reliant on the LLM to produce production-quality code, which it very rarely does.

5 Conclusion

After a year of building production agents and watching this recent Twitter debate unfold, here’s what I’ve learned: the framework vs. while loop argument misses the point entirely. It’s not about the tools - it’s about understanding your context and making pragmatic choices.

If you’re a vibe coder just starting out, embrace the frameworks. They’ll protect you from footguns you don’t even know exist yet. If you’re a seasoned engineer with specific requirements, build exactly what you need - no more, no less. And if you’re somewhere in between? Well, that’s where most of us live, constantly balancing abstraction with control.

The real skill isn’t choosing frameworks or while loops - it’s knowing when to use which approach. Sometimes you need fine-grained control with raw API calls. Sometimes you need a lightweight framework to move fast. Often, you’ll end up using both based on the use case.

As this field evolves at breakneck speed, remember: AI is a means to an end, not an end in itself. Whether you’re team framework or team while loop, focus on what actually matters - solving real problems for real users in a domain where you’re the expert.

Tip🤖 Meta Note

This blog post was peer-reviewed by Claude Code - because who better to review a post about AI agents than an AI agent itself? And yes, the thumbnail image comparing frameworks was generated by Nano Banana. We’re truly living in exciting times. :)


References

Anthropic. 2025. “Run Parallel Claude Code Sessions with Git Worktrees.” Anthropic. https://docs.anthropic.com/en/docs/claude-code/common-workflows#run-parallel-claude-code-sessions-with-git-worktrees.
Hugging Face. 2025. “Smolagents: Simple and Modular Agent Framework.” https://github.com/huggingface/smolagents.
OpenAI. 2025. “OpenAI Agents Python.” https://github.com/openai/openai-agents-python.
Yao, Shunyu, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. “ReAct: Synergizing Reasoning and Acting in Language Models.” https://arxiv.org/abs/2210.03629.