Who is the Human-in-the-Loop?

Every AI discussion invariably reaches a point where someone says, “There has to be a human in the loop!” The need for human oversight seems self-evident, so the conversation is often left at that. But who, exactly, are these humans in the loop? What skills do they need? Are there enough of them? Where can we find more? How do we ensure a sustainable supply? It’s time to consider these questions and create a plan so we are not left behind.

 The “human-in-the-loop” imperative presents another opportunity for HR to elevate its role in leading organizations’ adoption of AI. I have extensively discussed this opportunity in both writing and speaking. Organizations do not adopt AI; individuals do. HR facilitates organizational change through its influence on leadership, culture, and learning. The chief people officer is in a better position to spearhead AI adoption initiatives than the chief technology officer.

Ironically, during the previous revolution—the Industrial Revolution—we wanted the opposite: humans out of the loop so that we could have full automation without humans introducing errors and delays. Control theory and systems engineering in the Industrial Revolution paradigm viewed humans in the loop as a stopgap measure until technology caught up.

Technology, particularly AI, has finally caught up. Early automation technologies, such as robotic process automation (RPA), served as a bridge from the industrial paradigm. Full automation was the ideal when reducing workforce costs and minimizing errors were paramount. However, advances in machine learning, natural language processing, and their offspring, generative AI, have reversed the polarity of the human-in-the-loop.

As AI systems become more ubiquitous and complex, it’s essential to unpack human-in-the-loop AI and describe these specialized workers. Human-in-the-loop (HITL) AI is already a term of art: it refers to integrating human expertise into AI systems to improve fairness, accuracy, reliability, safety, ethical compliance, and adaptability. In contrast to autonomous AI, HITL AI requires human oversight and input at different stages of its lifecycle.

 Who are these humans in the loop?

People are involved at every stage of the AI lifecycle: data preparation and annotation, model training, model validation and testing, deployment, and steady-state operation. The roles for the first four stages are relatively well-defined and recognized: data engineers, data scientists, analysts, statisticians, econometricians, ML specialists, and others who develop analytical and generative AI models. However, the roles and activities associated with the steady-state operation of AI systems—such as ongoing monitoring and refinement—are still largely unexplored.

 Arguably, any worker who uses AI in their daily work or manages an AI system is a human-in-the-loop. AI systems are built on data, but these workers add human expertise and context to ensure accuracy, ethical adherence, and adaptability. Humans in the loop check for quality, confirm accuracy, monitor for bias, provide feedback, and make final decisions based on AI recommendations. It seems straightforward, but the mix of skills required to be a good human-in-the-loop is a tall order.
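The oversight loop described above, where a person makes the final call on uncertain AI output, can be sketched as a simple routing rule. This is a generic illustration, not a system described in this article; the `Prediction` type, the threshold value, and the function names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """A hypothetical AI output: a label plus the model's confidence in it."""
    label: str
    confidence: float

def route(pred: Prediction, threshold: float = 0.9) -> str:
    """Route confident predictions straight through; escalate the rest
    to a human reviewer, who makes the final decision."""
    return "auto-approve" if pred.confidence >= threshold else "human-review"

# Example: a confident prediction passes; an uncertain one is escalated.
print(route(Prediction("invoice", 0.97)))   # confident -> auto-approve
print(route(Prediction("contract", 0.62)))  # uncertain -> human-review
```

In practice, the threshold is tuned against the cost of errors: the lower it is set, the more AI output flows past human eyes unchecked.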

 What skills do they need to have?

A good human-in-the-loop needs a combination of technical, soft, and meta-skills. Deep domain expertise is essential: because AI operates at the level of raw data, only a domain expert can judge whether its outputs make sense in context. Humans-in-the-loop must combine that expertise with familiarity with the specific AI system they are using or monitoring. The new and rare technical skill everyone needs is a working understanding of AI itself—how it works, along with its capabilities and limitations.

Since AI feeds on data, humans-in-the-loop must be data literate and have basic statistical knowledge. They also need proficiency with the software tools and interfaces used to manage AI systems. Cybersecurity awareness and familiarity with the dynamics of human-machine collaboration and the intricacies of intelligent automation are also helpful. The ability to write effective prompts for generative AI and instructions for agentic AI is table stakes. It’s remarkable that, well beyond the Big Data and Information ages, data literacy and analytical skills remain scarce in most organizations.

Soft skills are widely recognized as a durable human advantage over AI in discussions of labor replacement, and they prove just as essential for effective human-in-the-loop interactions. They encompass critical thinking, analytical ability, attention to detail, decision-making under pressure, communication skills for documenting issues and providing feedback, and the capacity to maintain focus during repetitive tasks. Emotional intelligence and empathy also contribute significantly to success in these roles.

In addition to technical and soft skills, meta-skills that enable individuals to learn and apply new knowledge rapidly are increasingly vital in the age of AI. We observe that those who swiftly adopt AI technologies and acquire AI credentials position themselves for higher pay and accelerated career advancement. Essential meta-skills for effective humans in the loop include adaptability as AI systems evolve, comfort with ambiguity, learning agility, systems thinking, ethical reasoning, and knowledge management. Furthermore, verification skills that allow humans-in-the-loop to critically assess AI-generated responses for errors, biases, and inconsistencies are also essential.

 Ensuring ethical outcomes is a primary reason for having humans-in-the-loop; therefore, they hold significant ethical responsibilities. They must guard against bias and discrimination in AI systems, ensure compliance with privacy and data protection requirements, make judgment calls regarding edge cases and ethical dilemmas, advocate for human values and societal interests, balance efficiency with fairness and safety, report potential risks or harmful outcomes, and maintain professional boundaries and emotional well-being, particularly in content moderation roles.

 Are there enough of them, and where can we find more?

The first question—are there enough humans in the loop?—is difficult to answer. Although analytical AI has been around for decades, generative AI is only a couple of years old. We are still in the early stages of research, development, experimentation, and adoption. There aren’t enough people using AI in their daily work, which means we lack sufficient humans-in-the-loop. However, we have not yet entered the era of widespread, sophisticated, “steady-state” operational AI, so perhaps the shortage isn’t dire.

The second question—where can we find more?—is easier to answer: through learning and training initiatives, fueled by organizations’ renewed appetite for workforce planning and the realization that there is a chasm, not a gap, between the demand for AI-related skills and the available supply. Organizations want to avoid the expense and reputational damage of layoffs and to tap their internal labor markets. Better matching of candidates to opportunities, and skills inference via specialized machine learning models, have finally put strategic workforce planning within reach of most organizations.

Basic technical skills for humans-in-the-loop will initially need to be delivered through formal learning, via classes and courses, to achieve scale and to validate learning transfer. The current state of AI learning content is murky. Much of it has been rushed to production and assumes the learner is already familiar with the terminology. For instance, many courses mention model training as if everyone already understands what a model is and what it means to train one.
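For readers in exactly that position, here is a minimal, hypothetical illustration of what “training a model” means: repeatedly adjusting a parameter so the model’s predictions fit the data better. The data points and learning rate below are made up for the example.

```python
# A "model" here is just the rule y ≈ w * x, with one trainable parameter w.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs, roughly y = 2x

w = 0.0    # untrained parameter
lr = 0.01  # learning rate: how big each adjustment is

# "Training": nudge w to shrink the squared error on each example.
for _ in range(1000):
    for x, y in data:
        error = w * x - y
        w -= lr * error * x  # gradient descent step

# After training, w has learned the underlying relationship (close to 2).
print(round(w, 2))
```

Every model, from this one-parameter toy to a large language model, is trained by some variant of this loop; the difference is the number of parameters and the sophistication of the adjustment rule.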

 Industry, Academia, and Government

Existing domain-specific certification and degree programs must be updated, and new programs that integrate technical AI knowledge with domain expertise and ethical training must be developed. HITL AI skills, certifications, and degrees will become increasingly important. This will require partnerships between industry and academia, as well as contributions from independent content creators and learning platforms. Training programs must align with real-world AI applications and challenges. Continuing education and re-certification will also be necessary to keep HITL professionals informed about the latest AI advancements and ethical considerations.

Supporting infrastructure to sustain HITL training includes: industry-wide guidelines and standards for HITL professionals, to ensure consistent and responsible AI oversight; metrics to assess HITL effectiveness; clear career paths that recognize the importance of the role and offer opportunities for advancement; collaboration among AI experts, domain specialists, and ethicists to create a comprehensive approach to HITL implementation; public awareness of the significance of HITL in AI systems, to build trust and support for this vital role; a regulatory framework that requires qualified human oversight in critical AI applications; and ongoing R&D to improve HITL methodologies, tools, and best practices, so that the human role evolves alongside AI advancements.

Economic growth is driven by productivity growth, and AI is increasingly seen as the prime source of productivity growth. The more AI in operation, the more humans-in-the-loop we need. Making them effective will require a massive investment in training to bridge the chasm in technical, soft, and meta HITL AI skills. Only well-thought-through, well-designed learning vehicles will be worthy of that investment. People Analytic Success will be at the forefront of developing the learning content, platforms, and supporting infrastructure needed to close the HITL skills chasm quickly.

 Conclusion

The human-in-the-loop is not merely a temporary solution until AI becomes more advanced or a safeguard against AI errors—it’s a crucial component of responsible AI deployment for more intelligent, ethical, and effective AI systems.

 As AI systems become more widespread, the role of humans in guiding and monitoring these technologies becomes increasingly vital. The demand for skilled humans in the loop will only grow. Achieving success in this field requires a coordinated effort from educational institutions, businesses, policymakers, and technology developers.

 Organizations that implement AI must recognize that investing in human oversight is as crucial as investing in technology. By understanding the HITL role’s requirements and providing appropriate support, we can ensure that those involved are effective guardians of safe, ethical, and beneficial AI deployment.

 The future of AI is not about replacing humans but creating synergistic partnerships between human judgment and machine capabilities. The human-in-the-loop is a crucial component of this partnership, ensuring that AI systems align with human values and societal needs while delivering on their transformative potential.

 References

Mohindra, A. B. (2024). HR and the adoption of AI. In Hahn, K. (Ed.). AI and the future of work. Intelligent Enterprise Leaders Alliance. https://www.intelligententerpriseleaders.com/downloads/market-study-ai-the-future-of-work

Mohindra, A. B. (2024). A gentle introduction to AI and its applications in people analytics and HR [Conference Keynote]. Maven Insights’ People Analytics Forum, Riyadh, Saudi Arabia.

Amit Mohindra