What is human in the loop?

Human in the loop (HITL) emphasizes active human involvement in developing, training, and operating AI systems.

It acknowledges that, while powerful, AI can make mistakes, lack context, or perpetuate hidden biases. The human in the loop approach reframes an automation problem as a human-computer interaction (HCI) design problem.

In machine learning, HITL involves humans strategically guiding the computer to make better decisions during the model-building process. This targeted approach improves upon random sampling by having humans identify and provide the most valuable data to refine the model, increasing accuracy and efficiency.

This interaction between humans and machines creates a continuous feedback loop where:

  • Humans guide AI: They provide labels, training data, real-time corrections, or domain-specific knowledge, helping AI models learn and improve.
  • AI assists humans: AI systems process vast datasets, detect patterns, and generate insights, potentially augmenting human decision-making capabilities.

"For most generative AI insights, a human must interpret them to have an impact. The notion of a human in the loop is critical."

— Alex Singla, McKinsey senior partner and QuantumBlack leader

Human in the loop training stages

Human in the loop AI training involves three essential stages:

Data annotation: Human data annotators label the raw data, pairing each input with its corresponding expected output. If you want an AI to extract line items from receipts, you would provide thousands of receipts in which annotators have carefully highlighted each line item.

This labeled data then becomes the foundation for training the AI model.

Training: Human ML teams then feed the labeled data to the algorithm, which learns the patterns and relationships in the dataset. The goal is for the algorithm to make accurate decisions when presented with new data.

Testing and evaluation: Humans review the model’s predictions on new data and correct any inaccuracies the machine produces.
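The three stages above can be sketched end to end in a few lines. Everything here is an illustrative assumption — a toy keyword-based "model" standing in for a real learning algorithm — not a specific library or the exact pipeline the article describes.

```python
# A minimal sketch of the three HITL training stages.
# All names and data here are illustrative assumptions.

# Stage 1: Data annotation -- humans attach the expected label to each input.
annotated = [
    ("Coffee  3.50", "line_item"),
    ("TOTAL  12.00", "total"),
    ("Bagel  2.75", "line_item"),
    ("Thank you!", "other"),
]

# Stage 2: Training -- a toy "model" that memorizes keyword -> label mappings.
def train(examples):
    model = {}
    for text, label in examples:
        key = text.split()[0].lower()
        model[key] = label
    return model

# Stage 3: Testing and evaluation -- collect the cases a human must correct.
def evaluate(model, test_set):
    corrections = []
    for text, expected in test_set:
        predicted = model.get(text.split()[0].lower(), "unknown")
        if predicted != expected:
            corrections.append((text, predicted, expected))  # human fixes this
    return corrections

model = train(annotated)
print(evaluate(model, [("Coffee  4.00", "line_item"), ("Tip  1.00", "line_item")]))
# -> [('Tip  1.00', 'unknown', 'line_item')]
```

The corrections list produced in stage 3 is exactly the material that flows back into stage 1 on the next iteration of the loop.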

Benefits of human in the loop

Practical benefits of human in the loop design include:

Improved accuracy: Humans excel at tasks like understanding nuance, interpreting ambiguous data, and applying common sense reasoning. These skills can refine AI models that might produce incorrect or incomplete results.

Bias mitigation: AI systems can inherit biases from the data they’re trained on or the algorithms they employ. Human in the loop processes help identify and address these biases, promoting fairness and ethical AI.

Enhanced explainability: While AI systems can produce impressive results, their decision-making processes are often seen as a “black box.” HITL enables humans to understand the reasoning behind AI predictions, which is essential for building trust.

Domain expertise: Humans possess valuable knowledge and insights about their respective domains. HITL ensures this expertise helps shape and guide AI systems, leading to more relevant and actionable solutions.

Adaptability: Real-world situations are dynamic. Human input allows AI systems to handle new scenarios or adjust to changing environments that may differ from their training data.

A comparison of the three workflows:

                      Manual    Automated   HITL automation
    Cost reduction    None      High        Medium
    Average accuracy  90-98%    80-97%      96-98%
    Turnaround time   Slow      Fast        Fast
    Technology        People    AI          AI and people
    Risk of errors    High      Medium      Low

How does HITL work?

There are various ways HITL is implemented, depending on the specific AI system and the task at hand:

  • Active learning: An AI model identifies examples where its predictions have low confidence. These are presented to a human for annotation or labeling, and the feedback helps the model learn continuously, ultimately enhancing its accuracy.
  • Human oversight: AI systems generate outputs or recommendations, but a human expert reviews and verifies them before any real-world action is taken. This is common in sensitive domains like healthcare or legal settings.
  • Human in the loop control: Humans directly intervene to adjust the AI system’s behavior or modify its output in real-time. This could involve refining the input data or providing corrective guidance during a task execution.
  • Collective intelligence: HITL can facilitate a collaborative process where humans and AI models work together interactively. Humans may provide training data, refine results, and help AI understand the larger context, leading to a true symbiosis.
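The active learning pattern in the first bullet can be sketched as a simple confidence filter. The file names, confidence scores, and the 0.7 threshold are all illustrative assumptions, not values from any particular system.

```python
# Sketch: route low-confidence predictions to a human annotator (active learning).
# The items, confidence scores, and threshold are illustrative assumptions.

def select_for_human_review(predictions, threshold=0.7):
    """Return items whose prediction confidence falls below the threshold."""
    return [item for item, confidence in predictions if confidence < threshold]

predictions = [
    ("image_001.png", 0.98),  # model is confident -- stays automated
    ("image_002.png", 0.55),  # uncertain -- send to a human
    ("image_003.png", 0.91),
    ("image_004.png", 0.40),  # uncertain -- send to a human
]

queue = select_for_human_review(predictions)
print(queue)  # -> ['image_002.png', 'image_004.png']
```

Only the uncertain minority reaches the human, which is what makes this approach more efficient than labeling a random sample.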

Real-world examples of HITL

Image recognition: AI models trained to detect specific objects in images (like tumors in medical scans) may flag uncertain cases for further review by a human radiologist. This helps avoid misdiagnosis and ensures accuracy.

Natural language processing: Sentiment analysis algorithms classifying customer reviews might present borderline cases to humans for validation. This helps ensure that the model captures subtle nuances of language and avoids misinterpretations.

Self-driving cars: While self-driving technology has advanced, HITL is critical for safe deployment. Humans might monitor the car’s decisions in real-time to intervene or gradually transfer more control to the AI system as its capabilities improve with human feedback.

Content moderation: Social media platforms rely on AI to flag potentially harmful content, but human moderators often make the final judgment calls. This balanced approach helps protect users by addressing the limitations of automated systems.

Fraud detection: HITL intelligent systems can combine AI anomaly detection with human expertise. Suspicious transactions flagged by AI are reviewed by analysts who can leverage their investigative skills and contextual knowledge to verify or dismiss the alerts.
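The fraud detection example can be sketched as an anomaly scorer feeding a human review queue. The scoring rules, transaction fields, and cutoff below are toy assumptions for illustration, not a real detection model.

```python
# Sketch: fraud detection with a human analyst in the loop.
# The transactions, scoring rules, and cutoff are illustrative assumptions.

def anomaly_score(txn):
    """Toy scorer: large amounts and unusual hours look more suspicious."""
    score = 0.0
    if txn["amount"] > 1000:
        score += 0.5
    if txn["hour"] < 6:
        score += 0.4
    return score

transactions = [
    {"id": "t1", "amount": 25,   "hour": 14},
    {"id": "t2", "amount": 5000, "hour": 3},   # flagged for analyst review
    {"id": "t3", "amount": 1200, "hour": 11},  # flagged for analyst review
]

# The AI flags suspicious transactions; a human analyst makes the final call.
review_queue = [t["id"] for t in transactions if anomaly_score(t) >= 0.5]
print(review_queue)  # -> ['t2', 't3']
```

The queue is where the analyst's investigative skills take over: each flagged transaction is verified or dismissed by a person, not the model.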

Designing for HITL success

Intuitive user interface: HITL relies on seamless interaction between humans and machines. User interfaces (UI) must be clear, efficient, and provide meaningful information to allow humans to make informed decisions.

Feedback mechanisms: Establish clear channels for humans to provide feedback to the AI system. This may involve simple labeling, explanations for their choices, and even direct modifications to the model’s output.

Trust and explainability: Explainable AI (XAI) techniques help humans understand how the AI system arrived at a decision. This transparency builds trust and facilitates productive collaboration.

Training for humans: Humans involved in HITL processes need to understand the strengths and limitations of the AI system they’re working with. This includes training on how to provide optimal feedback and avoid inadvertently introducing their own biases.

Human at the beginning or the end of a loop?

HITL can be strategically integrated at different points in a workflow. The best placement depends on your specific goals and current automation level.

The success of any HITL system hinges on the quality of human input. Training and clear guidelines for your human team are essential.

HITL at the beginning of a loop

HITL at the beginning of a loop is ideal when you lack pre-existing AI models and need to create a solid data foundation for future automation. Humans are crucial in curating and labeling raw data, ensuring quality training sets for your AI.

Best suited for:

  • Building custom datasets tailored to your specific use case.
  • Creating and training your in-house AI models.
  • Starting from minimal automation and aiming for significant increases.
  • Having available data annotators and AI expertise.

Example: You have thousands of unstructured invoices and want to train an AI model to extract essential data. HITL at the beginning would involve humans labeling key data points (invoice number, date, total, etc.) within the invoices. This labeled data forms the basis for training your AI model.
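The labeling step in this example might produce records like the one below. The schema — field names and character offsets into the raw text — is an assumed format for illustration, not a standard.

```python
# Sketch of what a human-labeled invoice record might look like.
# The schema (field names, character offsets) is an illustrative assumption.

labeled_invoice = {
    "raw_text": "Invoice #4711  Date: 2024-03-01  Total: $250.00",
    "annotations": [
        {"label": "invoice_number", "start": 9,  "end": 13},  # "4711"
        {"label": "date",           "start": 21, "end": 31},  # "2024-03-01"
        {"label": "total",          "start": 40, "end": 47},  # "$250.00"
    ],
}

# Each annotation can be checked against the raw text it points into:
for ann in labeled_invoice["annotations"]:
    span = labeled_invoice["raw_text"][ann["start"]:ann["end"]]
    print(ann["label"], "->", span)
```

Thousands of such records, each verified by a human annotator, become the training set for the extraction model.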

HITL at the end of a loop

This approach maximizes existing automation and uses humans to control quality and refine results. It’s ideal for fine-tuning or tasks where maximum accuracy is critical.

Even if you initially choose HITL at the end, as your AI models mature, you may gradually require less human intervention.

Best suited for:

  • Achieving near-perfect accuracy in critical tasks.
  • Reducing costly errors and minimizing human intervention over time.
  • Improving efficiency and speed while maintaining high quality.
  • Leveraging readily available off-the-shelf automation solutions.

Example: Your workflow already uses an AI model to extract invoice data with 80% accuracy. Placing HITL at the end means humans review, verify, and correct any errors by the AI, aiming for >99% accuracy.
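This end-of-loop review can be sketched as a confidence gate: high-confidence extractions pass through, low-confidence ones go to a reviewer. The records, the 0.9 threshold, and the hard-coded correction are illustrative assumptions.

```python
# Sketch: HITL at the end of a loop -- humans verify low-confidence AI output.
# The extraction results, threshold, and correction are illustrative assumptions.

ai_output = [
    {"field": "total", "value": "120.00",     "confidence": 0.95},
    {"field": "date",  "value": "2O24-01-05", "confidence": 0.62},  # OCR error: 'O' for '0'
]

def human_review(record):
    """Stand-in for a human reviewer; here one correction is hard-coded."""
    if record["value"] == "2O24-01-05":
        record["value"] = "2024-01-05"
    return record

# Confident records pass straight through; uncertain ones get a human check.
final = [r if r["confidence"] >= 0.9 else human_review(r) for r in ai_output]
print([r["value"] for r in final])  # -> ['120.00', '2024-01-05']
```

Because humans only see the uncertain minority, throughput stays close to fully automated speed while accuracy approaches human-verified levels.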

Hybrid approach

Combining HITL at the beginning (data creation) with HITL at the end (quality control) can achieve maximum benefit.

Humans initially curate raw data and create well-labeled datasets. This ensures that the AI model learns from high-quality, relevant examples.

The labeled data trains the AI model, allowing it to automate tasks and generate initial outputs or predictions.

Then, humans review the AI’s output, primarily focusing on uncertain cases or potential errors. They provide corrections, additional context, or refine the AI’s decisions.

Feedback from humans is used to retrain the AI model continuously. This creates a powerful loop where AI and humans improve at their respective tasks over time.
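The hybrid loop described above — human-labeled data seeds the model, and human corrections of its output feed the next training round — can be sketched with a toy model. The lookup-table "model", labels, and inputs are illustrative assumptions.

```python
# Sketch of the hybrid loop: human labels seed the model, and human
# corrections of its output are folded back in for retraining.
# The toy model and all data are illustrative assumptions.

def train(examples):
    """Toy 'model': remembers the label seen for each input."""
    return dict(examples)

def predict(model, text):
    return model.get(text, "unknown")

# Round 1: humans create the initial labeled dataset (HITL at the beginning).
dataset = [("refund request", "billing"), ("password reset", "account")]
model = train(dataset)

# The model meets an input it cannot handle...
print(predict(model, "invoice overcharge"))  # -> unknown

# ...a human reviews the output (HITL at the end) and supplies the correct
# label, which is added to the dataset for the next training round.
dataset.append(("invoice overcharge", "billing"))
model = train(dataset)
print(predict(model, "invoice overcharge"))  # -> billing
```

Each pass through the loop shrinks the set of inputs that still need human correction, which is the continuous improvement the hybrid approach aims for.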

External vs. self-managed HITL

When incorporating HITL into your operations, a critical decision is whether to leverage an external provider or build an internal team. Each approach offers distinct advantages and trade-offs:

Self-managed HITL

    Pros                                 Cons
    Data stays within the organization   Requires a team of experts
    You can build your own data sets     Training and overhead costs
    Increase in human capital

External HITL

    Pros                          Cons
    24/7 availability             No complete control over data
    No need to train your staff   Data security dependencies
    Often cheaper

Challenges of human in the loop

Scalability: Incorporating human input can create bottlenecks, especially when dealing with large volumes of data or real-time decision-making scenarios. HITL design should carefully balance the need for human input with efficiency.

Latency: Adding human involvement can introduce delays. Choosing HITL strategies that minimize response times is crucial, particularly in safety-critical or time-sensitive applications.

Potential disagreement: Human annotators may disagree on subjective judgments. Involving multiple annotators reduces the influence of any individual perspective.
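One common way to resolve annotator disagreement is a majority vote over each item's labels. The review data below is an illustrative assumption; real pipelines often add inter-annotator agreement metrics on top of this.

```python
# Sketch: aggregate labels from multiple annotators by majority vote to
# reduce the weight of any one subjective judgment. Data is illustrative.
from collections import Counter

def majority_label(labels):
    """Return the most common label; ties resolve to the first-seen winner."""
    return Counter(labels).most_common(1)[0][0]

annotations = {
    "review_1": ["positive", "positive", "neutral"],
    "review_2": ["negative", "neutral", "negative"],
}

resolved = {item: majority_label(votes) for item, votes in annotations.items()}
print(resolved)  # -> {'review_1': 'positive', 'review_2': 'negative'}
```

Items where the vote is split evenly are good candidates to escalate to a senior annotator rather than resolve automatically.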

Cost: Human involvement adds cost and complexity to AI projects. Organizations must weigh potential gains in accuracy or ethical robustness against the resources required for HITL implementation.

What is the difference between a human in the loop and a human on the loop?

Here’s a breakdown of the key differences between human in the loop (HITL) and human on the loop (HOTL):

    Feature               Human in the loop                                Human on the loop
    Human role            Active participant in AI development/operation   Oversight and intervention as needed
    Focus                 Improving AI accuracy and reducing bias          Ensuring safety and maintaining control
    Level of integration  Tightly coupled human-AI collaboration           AI operates more independently

Human out of the loop

Human out of the loop (HOOTL) describes a system where artificial intelligence (AI) operates with minimal or no human intervention.

Humans may have designed the system and set initial parameters but are not actively involved in ongoing decision-making processes. AI makes decisions and takes actions largely independently, based on its programming and the data on which it has been trained.

More advanced HOOTL systems could continuously learn and adapt their behavior without human input.

Benefits of HOOTL

  • Speed and efficiency: Processes can be executed much faster than human decision-making.
  • Scalability: HOOTL systems can handle volumes of data and numbers of tasks that would overwhelm humans.
  • Potentially reduced bias: Theoretically, a well-designed AI could make decisions free of human biases.

Risks and concerns of HOOTL

  • Opacity: Complex AI systems (especially deep learning) can become “black boxes,” making it hard to understand their reasoning and potentially leading to unexpected outcomes.
  • Errors and unintended consequences: AI can make mistakes without human oversight, which could have serious implications in sensitive domains.
  • Accountability: In case things go wrong, determining who or what is responsible becomes a significant challenge.
  • Ethical dilemmas: HOOTL systems might make decisions that clash with human values, especially in morally complex situations.

Examples of HOOTL (or near-HOOTL) systems

  • High-frequency trading: Algorithms make lightning-fast trading decisions based on market data, with humans mostly monitoring and setting overall strategies.
  • Algorithmic content recommendations: AI systems on social media or e-commerce platforms analyze user behavior to suggest content or products, with minimal human intervention in individual cases.
  • Certain autonomous systems: While not fully HOOTL, self-driving cars or drones, in some conditions, may operate with limited human control.

The future of HITL

As AI systems grow increasingly complex and impact more aspects of human life, the importance of HITL will only increase. Here’s what the future holds:

Augmented intelligence: The focus will shift from replacing human judgment to augmenting it. HITL will become a framework for humans and AI to work together, each contributing their unique strengths for optimal outcomes.

Adaptive feedback loops: AI systems will better understand the type of human feedback most helpful and when to solicit it. This could lead to more dynamic and targeted HITL processes.

Human-centered design: Ethical AI and responsible innovation will make HITL a core component of the AI development process. This ensures that humans are always considered vital stakeholders.

New tools and platforms: We’ll see the emergence of specialized tools and platforms designed to facilitate seamless HITL interactions, providing tailored interfaces, feedback mechanisms, and explainability features.

Cross-domain collaboration: HITL success requires expertise in AI, machine learning, human factors, and user interface design. Interdisciplinary collaboration will be essential for creating compelling and human-aligned AI systems.

Beyond the supervised learning algorithms: Ethics and responsibility

HITL isn’t just about better technology; it’s about the ethical use of AI. By intentionally integrating humans into AI processes, we ensure that critical decisions don’t rely solely on algorithms that may have blind spots or unintended consequences. While AI learns to understand the world in its data-driven way, humans must continuously remind these powerful systems of our values, our unique capacity for empathy, and the importance of fairness.

HITL is a testament to the belief that the most significant advancements come not from humans versus machines but from harnessing the distinct strengths of each in a continuous feedback loop. It empowers us to proactively shape the future of AI, ensuring that the powerful systems we create are trustworthy, aligned with human values, and ultimately serve to enhance our lives rather than control them.

