Human in the Loop, Human IS the Loop: Redesigning AI for Strategic Advantage
Why Forward-Looking Leaders Reimagine Collaboration, Not Just Automation
"Human in the loop" has become the go-to phrase in AI implementation discussions. It sounds reassuring, responsible, even strategic. But here's the problem: we're using this term without defining what the loop actually is.
As leaders across industries navigate AI transformation, I've observed organizations make costly mistakes by treating "human in the loop" as a binary choice—either humans are involved, or they're not. The reality is far more nuanced and strategically critical.
The Loop Isn't Just One Thing
When we implement AI systems, we're not dealing with a single loop. We're orchestrating a complex workflow where human judgment becomes the variable that determines success or failure. Let me break this down using a framework that transforms how you think about AI-human delegation and collaboration.
The Six-Stage Decision Loop:
Identify Problems (e.g. 50% human judgment, 50% AI automation)
Explore Options (e.g. 30% human judgment, 70% AI automation)
Evaluate Solutions (e.g. 55% human judgment, 45% AI automation)
Make Decisions (e.g. 70% human judgment, 30% AI automation)
Implement Actions (e.g. 65% human judgment, 35% AI automation)
Monitor Results (e.g. 20% human judgment, 80% AI automation)
Notice something? The human isn't just "in" the loop—the human IS the loop. The weight of human involvement varies dramatically depending on the task and your specific situation. The numbers here are illustrative, but pay special attention to Problem Identification and Decision Making—these stages demand the heaviest human involvement because they require uniquely human capabilities like contextual understanding, strategic intuition, and moral reasoning. Equally important, these are the accountability moments where humans take responsibility for defining what matters and authorizing action.
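To make the point concrete, the stage weights above can be written down as explicit design data rather than left implicit in a workflow. Here is a minimal Python sketch; the weights and the 50% sign-off threshold are illustrative assumptions, and the point is the structure, not the numbers.

```python
# Encoding the six-stage loop as explicit design data, so the human/AI
# split is a deliberate choice rather than an accident of implementation.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    human_weight: float  # share of the stage owned by human judgment (0.0-1.0)

# Illustrative weights from the framework above -- yours will differ.
DECISION_LOOP = [
    Stage("Identify Problems", 0.50),
    Stage("Explore Options", 0.30),
    Stage("Evaluate Solutions", 0.55),
    Stage("Make Decisions", 0.70),
    Stage("Implement Actions", 0.65),
    Stage("Monitor Results", 0.20),
]

def requires_human_signoff(stage: Stage, threshold: float = 0.5) -> bool:
    """Stages weighted toward human judgment become accountability moments."""
    return stage.human_weight >= threshold

accountability_stages = [s.name for s in DECISION_LOOP if requires_human_signoff(s)]
print(accountability_stages)
# ['Identify Problems', 'Evaluate Solutions', 'Make Decisions', 'Implement Actions']
```

Writing the weights down this way forces the conversation the article argues for: someone has to defend each number, stage by stage, instead of letting the split emerge by default.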
When systems fail, it's not the algorithm that faces the consequences—it's the human who chose to trust it.
Why This Matters for Your AI Strategy
Picture this scenario: An organization proudly implements AI with "humans in the loop" for quality control. On the surface, it sounds responsible and well-designed. But when you look closer at the actual workflow, you discover a common problem—they're over-automating strategic decisions (where human judgment should dominate) and under-automating routine pattern recognition (where AI excels).
The result? Their AI is making critical business decisions while humans are manually processing obvious patterns. They have the percentages backwards.
⚠️ Real-World Warning: The Klarna Case
In 2024, fintech giant Klarna claimed its AI chatbot could "do the work of 700 full-time agents." By 2025, the company was quietly hiring humans back after net losses more than doubled to $99 million in Q1 2025 and credit losses rose 17% to $136 million. CEO Sebastian Siemiatkowski admitted that focusing on cost savings led to "lower quality" customer service that hurt satisfaction and trust.
Klarna's story shows what happens when organizations treat customer service as simple replacement rather than understanding where human judgment creates value.
This isn't uncommon. Most organizations implement AI without deliberately designing where human judgment should carry the most weight. They treat "human in the loop" as a checkbox rather than a strategic design decision.
Think about your own organization or work processes. Are you automating the parts that require nuanced judgment while leaving humans to handle tasks that AI could do better? Or worse, are you creating bottlenecks by requiring human approval for decisions where AI clearly outperforms human speed and accuracy?
The Process Redesign Imperative: Beyond Simple Automation
Here's where most organizations get AI implementation fundamentally wrong: they use AI to automate existing processes instead of redesigning processes for human-AI collaboration.
This connects directly to something I explored in my previous piece on "Ordinary AI, Extraordinary Humans": the question isn't just where to put humans in the loop, but what makes humans extraordinary in your specific context, and how to design processes that amplify those extraordinary capabilities (see my other post on Embracing Human-Centric Values in the Age of AI).
Think about your own work for a moment. Where do you add the most value? Is it in processing routine information, or is it in those moments where you see patterns others miss, make connections across disparate data points within a specific context, or sense something, such as emotions, reactions, or behaviors, that doesn't show up in the objective metrics?
Most AI implementations miss this entirely. They focus on replacing human tasks rather than redesigning workflows to make both humans and AI more effective.
Consider any process in your organization where you're thinking about adding AI. The default approach is usually: identify repetitive human task → replace with AI → measure efficiency gains. But what if instead you asked: How can we redesign this entire process so that AI handles what it does best while amplifying what makes our humans extraordinary?
The difference between these approaches isn't just philosophical—it's the difference between automation that makes you more efficient and transformation that makes you more valuable.
Why the human-in-the-loop approach drives adoption success: Companies that embed meaningful human oversight and collaboration into their AI systems see dramatically higher adoption rates and value realization. When employees understand their role in the AI process—rather than feeling replaced by it—they become active participants in making the technology successful.
This strategic emphasis on human collaboration isn't just theoretical. BCG research shows that successful AI companies follow the "10-20-70 rule": 10% of their AI effort goes to designing algorithms, 20% to building the underlying technologies, and 70% to supporting people and adapting business processes. The data backs this up: only 26% of companies have developed the capabilities needed to move beyond proofs of concept and generate tangible value, and the 74% that struggle to achieve and scale value from AI do so primarily because of adoption and change-management issues, not technical limitations. The companies that succeed are those that design AI systems where humans feel empowered as collaborators, not threatened as potential replacements.
The Agentic AI Reality: Why This Framework Matters More Than Ever
The process redesign imperative becomes even more critical with agentic AI systems that can take autonomous actions across multiple workflows—from booking meetings to processing claims to managing customer relationships.
Unlike traditional AI that provides recommendations, agentic AI takes actions. The stakes for getting your loops wrong multiply exponentially. When AI systems can take hundreds of actions while you're in a single meeting, the cost of poor loop design compounds rapidly.
For agentic AI, the framework becomes your guardrail system:
Problem Identification becomes your "first firewall" against agents solving the wrong problems. AI excels at pattern recognition in large datasets, but humans must define what constitutes a "problem" worth solving. Organizations need human oversight here to ensure AI agents aren't optimizing for the wrong objectives.
Option Exploration becomes your "creativity checkpoint" ensuring agents don't optimize in isolation. This is where human creativity and domain expertise become invaluable. AI can suggest possibilities, but humans must evaluate feasibility, cultural fit, and strategic alignment. Without this stage, agents may find efficient solutions that completely miss the strategic context.
Evaluate Solutions becomes your "quality gate" where agents must demonstrate reasoning and humans validate trade-offs. This is the sweet spot for collaboration: AI provides analytical horsepower while humans bring wisdom about implementation realities and stakeholder concerns. With agents, it doubles as your "reality check" stage, where agents need human input to understand that locally optimal performance may conflict with other business priorities.
Decision Making requires "approval thresholds" for high-stakes autonomous actions. Counterintuitively, routine decisions at this stage can often be delegated heavily to AI, provided you've done the earlier stages well. Clear criteria and good data make automation powerful here, while simple approval rules prevent costly automated mistakes.
Implementation needs "escalation paths" when agents encounter edge cases. Humans must lead because implementation involves change management, communication, and adaptation—fundamentally human challenges. This is where you design guardrails: When should the agent pause and ask for human guidance?
Monitoring becomes your "early warning system" for catching drift before it becomes expensive. You need human judgment to interpret what the metrics really mean and whether the results align with strategic intent. Regular human review isn't just good practice—it's essential for preventing costly autonomous drift.
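The guardrails above can be sketched as a simple routing policy for each action an agent proposes. This is an illustrative Python example, not any real agent framework's API: the thresholds, field names, and routes are assumptions you would replace with your own risk tolerances and blast-radius estimates.

```python
# Routing an agent's proposed action through guardrails: escalation paths
# for low-confidence edge cases, approval thresholds for high-stakes
# decisions, and auto-execution only well inside the agent's autonomy.
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTO_EXECUTE = "auto_execute"          # routine, high-confidence action
    REQUIRE_APPROVAL = "require_approval"  # decision-making approval threshold
    ESCALATE = "escalate"                  # implementation-stage escalation path

@dataclass
class ProposedAction:
    description: str
    confidence: float   # agent's self-reported confidence, 0.0-1.0
    impact_usd: float   # rough blast-radius estimate for this action

def route_action(action: ProposedAction,
                 approval_threshold_usd: float = 10_000,
                 min_confidence: float = 0.8) -> Route:
    if action.confidence < min_confidence:
        return Route.ESCALATE          # edge case: pause and ask for guidance
    if action.impact_usd >= approval_threshold_usd:
        return Route.REQUIRE_APPROVAL  # high stakes: a human signs off
    return Route.AUTO_EXECUTE

print(route_action(ProposedAction("refund routine order", 0.95, 40.0)).value)
# auto_execute
```

Note the ordering: confidence is checked before stakes, so an uncertain agent escalates even on small actions. That single design choice is what keeps hundreds of machine-speed decisions from silently drifting past human review.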
The "Extraordinary Decision": What Matters Most to You?
This brings us to the critical question every organization—and individual—must answer: What is extraordinary about your human capabilities, and where do you want to maintain human control for strategic reasons?
Take a moment to think about this in your own context. What do you do that consistently surprises people? Where do colleagues turn to you when standard approaches aren't working? What would your organization lose if you fully automated a process, even if the AI performed perfectly?
For some, extraordinary might mean pattern recognition across complex data sets. For others, it could be creative problem-solving when standard solutions fail. For many, it's the ability to read between the lines in customer communications or sense shifts in market sentiment before they show up in metrics.
This applies whether you're running a Fortune 500 company or a solo consulting practice. A freelance financial advisor's extraordinary capability might be understanding clients' unspoken concerns about retirement. A small business owner's edge could be knowing the local market well enough to spot opportunities that larger competitors miss.
But here's what many miss: you also need to identify where AI errors are most costly to you specifically. For a large organization, an AI that misclassifies customer communications could expose the company to regulatory violations. For a small business owner using AI to manage client relationships, a misunderstood message could lose a key account.
Your "extraordinary" becomes your north star for process redesign, and your "error tolerance" determines your safety boundaries.
Here's how to find both:
What do you (or your best performers) do that seems almost magical to others?
Where does your intuition consistently outperform data-driven approaches?
What capabilities would you never want to lose, even if AI could theoretically perform them better?
Where would an AI error cost you more than human processing time?
What edge cases appear rarely but carry disproportionate risk when mishandled?
Whether you're designing AI strategy for a large organization or deciding how to use AI tools in your own work, these answers should drive your implementation decisions.
The Redesign Framework: Divide and Conquer Strategically
When redesigning processes for human-AI collaboration, use this approach to systematically think through your own AI implementation:
Step 1: Map Your Current Process
Don't just document the official workflow—understand what actually happens, including all the informal judgment calls and workarounds you make.
Step 2: Identify Your "Extraordinary Moments"
Where do you currently add value that goes beyond simple task completion? These become your design anchors.
Step 3: Apply the Human-in-the-Loop Criteria
Use the framework to determine the required level of human involvement at each stage—but remember, these requirements should support your extraordinary capabilities, not diminish them.
Step 4: Design for Amplification, Not Replacement
Ask: "How can AI make me (or my team) even more extraordinary?" instead of "How can AI replace what I do?"
Step 5: Build Error Handling and Edge Case Protocols
Design specific workflows for when AI confidence drops, unusual circumstances arise, or potential errors are detected. This isn't optional—it's essential for high-stakes processes.
Step 6: Test and Iterate
Start with pilot processes and adjust the human-AI balance based on real outcomes, not theoretical efficiencies. Pay special attention to error rates and edge cases during pilots.
Consider how this might apply to your own work. If you're in financial services, you might redesign fraud detection so AI handles pattern recognition in transaction data, while you focus on the complex behavioral analysis that prevents sophisticated schemes. Critically, you'd build escalation protocols for when AI confidence scores drop below thresholds or when patterns fall outside normal parameters.
For individuals and small businesses, the same principles apply. A financial advisor using AI to analyze client portfolios might set up the system to flag unusual market conditions or client life changes, but keep the critical conversations about risk tolerance and goal adjustments firmly in human hands. A small business might use AI to process routine tasks but ensure that any decisions affecting key clients trigger human review.
The key is matching your automation strategy to your unique value proposition and risk tolerance, regardless of your organization's size.
The Governance Question Every Leader Must Answer
This framework forces critical questions: Where in your AI implementation are you building human expertise, where are you creating dangerous dependencies, and who is accountable for each?
These questions become existential with agentic AI, where over-automation can lead to organizations losing institutional knowledge and becoming completely dependent on systems they don't fully understand.
This risk isn't limited to large enterprises. Small business owners who fully automate customer communications might lose the ability to sense when client relationships are deteriorating. Solo practitioners who rely entirely on AI for research might gradually lose the expertise that originally differentiated their services.
I see three common failure patterns with agentic AI:
The Automation Trap: Organizations automate everything they can without building human expertise in the areas where judgment matters most. Over-automation without context can lead to costly mistakes.
The Control Obsession: Leaders who insist on human involvement in areas where AI clearly outperforms humans, creating bottlenecks. Requiring human approval for routine tasks defeats the purpose of automation.
The Agent Proliferation Problem: Organizations deploy multiple agents without considering how they interact. Multiple AI systems can conflict with each other when their objectives aren't aligned.
The strategic leaders who thrive with AI use this framework to make deliberate choices about where to invest in human development and where to trust AI systems.
Building Your Human-Centric AI Strategy (Agent-Ready Edition)
The process redesign mindset fundamentally changes how you approach these strategic questions. Whether you're a CEO planning enterprise AI strategy or a professional deciding which AI tools to adopt, ask yourself:
Where does your expertise matter most in your workflow? These are your 70%+ human judgment zones. For agentic AI, these become your "human override" checkpoints. More importantly: these are the areas where you should redesign processes to make your expertise even more impactful.
Where is pattern recognition at scale your competitive advantage? These are your 70%+ AI automation opportunities. Perfect candidates for agent deployment. Key insight: don't just automate existing pattern recognition—redesign the process so AI handles routine patterns while you focus on anomalies and edge cases.
Where do strategic context and judgment create the most value for you? These are your balanced collaboration zones. Design AI that enhances your decision-making here, not replaces it. Critical question: how can AI make you more strategic, not just more efficient?
What happens if an AI agent makes the wrong decision repeatedly before you notice? This is your "blast radius" assessment—critical for agentic systems that can compound errors at machine speed.
What would you lose if you fully automated this process, even if the AI performed perfectly? This is your "extraordinary value" check. Sometimes maintaining human involvement isn't about performance—it's about preserving capabilities that define your unique value.
Your answers will be unique to your role, industry, and strategic position. But the framework gives you a systematic way to think about AI governance that goes beyond simple automation and toward true process innovation.
Pro tip for agentic AI: Start with "training wheels." Deploy agents with tight constraints and gradually expand their autonomy as you validate their decision-making patterns. But more importantly, use this pilot phase to discover how the human-AI collaboration changes your work itself. Organizations might require their AI agents to get approval for high-stakes decisions—but also redesign their processes so the agent's pattern recognition helps humans identify trends they might have missed.
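The "training wheels" idea can be sketched as tiered autonomy that widens only after humans have validated enough of the agent's decisions. The tiers, validation counts, and spend caps below are hypothetical placeholders; the mechanism of earned autonomy is what the sketch illustrates.

```python
# Training wheels for an agent: autonomy starts tightly capped and widens
# only as the agent accumulates human-validated decisions.

AUTONOMY_TIERS = [
    # (validated_decisions_required, max_spend_without_approval_usd)
    (0, 0),        # pilot: every action needs human approval
    (50, 500),     # small routine actions may auto-execute
    (250, 5_000),  # expanded autonomy after a sustained track record
]

def autonomy_cap(validated_decisions: int) -> int:
    """Return the largest spend the agent may commit without approval."""
    cap = 0
    for required, limit in AUTONOMY_TIERS:  # tiers are ordered ascending
        if validated_decisions >= required:
            cap = limit
    return cap

print(autonomy_cap(120))
# 500
```

Because the cap is a function of validated decisions rather than elapsed time, autonomy expands only as fast as humans actually review the agent's work—which is exactly the pilot-phase discipline described above.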
For small businesses and individuals: The same principle applies at smaller scale. A consultant might start by having AI draft initial proposals but maintain human control over final recommendations and pricing. A small business might use AI to identify potential issues but keep the sensitive client conversations in human hands.
The Leadership Imperative in the Age of Agentic AI
As AI capabilities accelerate toward full autonomy, the organizations that thrive will be those who intentionally and thoughtfully design the interaction between human judgment and machine intelligence. This isn't just about technology—it's about building organizational capabilities that amplify human expertise rather than replacing it.
The recent surge in agentic AI makes this framework not just useful, but essential. At machine speed, small flaws in loop design compound into large costs before anyone notices.
I'll leave you with this thought:
The companies that succeed with agentic AI won't be those that deploy the most agents—they'll be those that deploy agents with the most thoughtful human oversight.
The question isn't whether to put humans in the loop. The question is: How do you design loops that make both humans and AI agents more effective—and how do you ensure those agents enhance rather than erode human expertise?
That's the conversation we should be having in boardrooms and strategy sessions. Because in the end, human-centric AI leadership isn't about keeping humans involved—it's about redesigning processes to make human judgment more powerful and AI automation more effective, together.
The most successful AI implementations don't just put humans in better loops—they create entirely new loops that couldn't exist without both human extraordinary capabilities and AI scale.
What processes in your organization are ripe for redesign rather than simple automation? And what makes your humans extraordinary enough to build those processes around? I'd love to hear your thoughts and experiences in the comments.
Nan Li is a human-centric AI leadership consultant helping organizations implement AI strategy and governance that amplify human expertise. Connect with her insights on building strategic AI capabilities at Nanalytics AI or follow her on LinkedIn.
References:
BCG's "10-20-70 rule" (10% algorithms, 20% technology, 70% people and processes): Moxie Insights; BCG Global
26% of companies scale AI beyond proofs of concept, while 74% struggle to achieve and scale value: "AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value," BCG