Humans in the Loop: Algorithmic Power, Invisible Labour, and the Politics of Digital Vision
This blog is written as part of an academic assignment given by Dr. Dilip Barad Sir, engaging critically with the film Humans in the Loop, directed by Aranya Sahay. While artificial intelligence is often celebrated as autonomous and objective, the film exposes the deeply human infrastructures that sustain it. Through its exploration of algorithmic bias, invisible digital labour, and cinematic form, Humans in the Loop challenges the myth of technological neutrality. It reveals how AI systems are shaped by global power structures, epistemic hierarchies, and economic inequalities. By foregrounding the workers whose cognitive labour trains machines, the film invites viewers to rethink the relationship between technology, knowledge, and justice. This blog analyses how the film transforms spectatorship into critique, making visible the human lives embedded within digital systems.

Directed by | Aranya Sahay
Written by | Aranya Sahay
Produced by | Mathivanan Rajendran, Shilpa Kumar, Sarabhi Ravichandran
Starring | Sonal Madhushankar
Cinematography | Harshit Saini, Monica Tiwari
Edited by | Swaroop Reghu, Aranya Sahay
Production companies | Storiculture, Museum of Imagined Futures, SAUV Films
Distributed by | Netflix
Release dates | 2024 (MAMI); 5 September 2025
Running time | 72 minutes
Country | India
Languages | Hindi, Kurukh
I. Task 1: Algorithmic Bias and the Politics of Knowledge Production
Mainstream conversations about artificial intelligence often describe algorithmic bias as a technical malfunction—an unfortunate flaw that can be corrected through improved coding or cleaner datasets. However, Humans in the Loop reframes this issue by suggesting that bias is not an accidental glitch but an inevitable outcome of the social, economic, and political systems that produce AI. Algorithms do not emerge from a vacuum; they are shaped by human labor, corporate interests, and global inequalities. Therefore, algorithmic bias is less a computational problem and more a structural condition embedded in systems of power.
Algorithms as Cultural Artifacts
From a cultural studies perspective, AI systems are not neutral tools; they are cultural artifacts that encode the values of their creators. Every dataset is curated, filtered, and structured according to particular assumptions about what matters and what does not. When data workers classify images, transcribe speech, or moderate content, they are not merely performing mechanical tasks. They are participating in the construction of meaning.
For instance, when workers are asked to identify “dangerous” behavior or “appropriate” dress, they rely on socially conditioned norms. These norms are shaped by race, gender, class, and geography. An algorithm trained on such categorizations does not develop an objective understanding of the world—it internalizes the dominant cultural logic embedded within the dataset. Thus, AI becomes a reflection of prevailing power structures rather than an impartial decision-maker.
Invisible Labour and the Illusion of Automation
The phrase “artificial intelligence” suggests autonomy, yet Humans in the Loop exposes the hidden human labor that sustains AI systems. Data labelers, often located in economically vulnerable regions, perform repetitive tasks under strict deadlines and low wages. Their work is essential, yet they remain invisible in public narratives about technological innovation.
This invisibility contributes to epistemic hierarchy—a system in which certain forms of knowledge are valued over others. The expertise of engineers and developers is celebrated as creative and intellectual, while the interpretive labor of data workers is treated as unskilled. However, labelling data requires contextual judgment, cultural literacy, and ethical decision-making. The erasure of this intellectual contribution reinforces global inequalities between the so-called “knowledge economy” of the Global North and the outsourced labor markets of the Global South.
The Standardization of Perception
AI systems function by reducing complexity into quantifiable categories. Human experience, however, is ambiguous and context-dependent. When workers are instructed to choose between limited labels, they must compress nuanced realities into rigid classifications. This standardization simplifies the world in ways that privilege dominant perspectives.
For example, language models trained primarily on Western English sources may misinterpret idioms, dialects, or cultural references from other regions. Facial recognition systems trained on homogeneous datasets may misidentify individuals from underrepresented groups. These outcomes are not random accidents—they reveal whose experiences were prioritized during training.
Thus, algorithmic bias can be understood as a by-product of epistemic exclusion. When certain communities are underrepresented in datasets or excluded from decision-making processes, their realities become distorted or erased within AI systems.
Power and Platform Capitalism
The film also gestures toward the broader economic framework in which AI operates: platform capitalism. Large technology corporations control both the infrastructure and the narrative of innovation. By presenting AI as efficient and objective, they obscure the labor and ideology embedded within it.
The authority to define categories, design interfaces, and set evaluation metrics remains concentrated in corporate centers of power. Data workers have little agency in shaping these systems. Their role is to comply, not to question. As a result, AI reproduces the worldview of those who control its architecture.
This dynamic illustrates a hierarchy of knowledge production: those who design the system determine what counts as legitimate data, while those who supply the labor remain subordinate. The flow of value moves upward—from the cognitive labor of marginalized workers to the profit margins of multinational corporations.
Reimagining Responsibility
If bias is structural rather than accidental, then technical fixes alone are insufficient. Addressing algorithmic injustice requires a rethinking of responsibility. Transparency in dataset construction, fair labor practices, and inclusion of diverse epistemologies are necessary steps toward more equitable AI systems.
Moreover, recognizing data workers as knowledge producers rather than disposable laborers challenges the myth of fully automated intelligence. AI is always “human in the loop.” The question is not whether humans are involved, but whose humanity is acknowledged and whose is erased.
II. Task 2: Digital Labor, Extraction, and the Politics of Visibility
In contemporary digital capitalism, the smoothness of technological experience depends on the concealment of human effort. Platforms market AI as frictionless, autonomous, and intelligent, yet Humans in the Loop dismantles this illusion by exposing the laboring bodies behind algorithmic systems. Rather than presenting AI as a miraculous innovation, the film reframes it as a site of extraction—where cognitive, emotional, and cultural labor is mined from vulnerable populations.
The Aesthetics of Confinement
The film’s cinematography deliberately constructs a sense of enclosure. Workers are frequently framed within narrow compositions, surrounded by screens, cables, and artificial light. The repetition of rectangular frames—monitors, windows, digital grids—creates a visual metaphor for containment. The worker is not only performing classification; they are themselves classified within the global hierarchy of digital production.
This visual strategy reflects Marx’s concept of alienation. The labourer is separated from the product, from the broader meaning of their work, and ultimately from their own agency. The film makes this alienation visible. By lingering on gestures—scrolling, highlighting, clicking—it transforms what might appear to be effortless digital action into embodied strain.
Extraction of Cognitive Labour
Unlike traditional factory work, digital labor operates at the level of perception and judgment. Workers must evaluate images, interpret language, and anticipate cultural nuance. This is not mechanical repetition but cognitive extraction. The algorithm feeds on human discernment, converting it into datasets.
However, the ownership of this intellectual contribution is never attributed to the worker. Instead, it is absorbed into the brand identity of technology corporations. The film critiques this dynamic by foregrounding the worker’s thought process. Moments of hesitation—when a label feels morally ambiguous—reveal that categorization is never neutral.
By highlighting these tensions, the film challenges the narrative that digital work is “low-skilled.” It reveals that what is dismissed as routine tagging is in fact a complex act of meaning-making.
Breaking the Myth of Automation
The ideology of automation depends on invisibility. Consumers interact with polished interfaces without confronting the human labor embedded within them. The film disrupts this ideology by refusing to aestheticize technology as magical. Instead, it presents the algorithm as dependent, fragile, and incomplete without human intervention.
Through slow pacing and extended observational shots, the film denies viewers the satisfaction of technological spectacle. There are no triumphant montages of innovation—only the quiet persistence of workers whose names rarely appear in corporate narratives. In doing so, the film reassigns value. It suggests that the true engine of AI is not the machine, but the human.
III. Task 3: Film Form and the Ontology of the Digital
Beyond its political critique, Humans in the Loop uses formal cinematic strategies to question the very nature of digital knowledge. The tension between human embodiment and algorithmic abstraction becomes visible through lighting, framing, editing, and sound.
Organic Versus Programmed Vision
The film contrasts two visual regimes. Scenes depicting workers in physical spaces are textured and layered. Background noise, uneven lighting, and subtle bodily movements emphasize material reality. These sequences suggest that human understanding emerges from lived context.
In contrast, scenes focusing on digital interfaces are sterile and flattened. High-contrast graphics and rigid geometries create a sense of reduction. The bounding box—a recurring visual motif—symbolizes how AI translates fluid human existence into measurable units. A face becomes coordinates. A gesture becomes data.
This contrast articulates a philosophical divide: human knowledge is relational and situated, whereas algorithmic knowledge is extractive and categorical.
Editing as Political Argument
The film’s editing constructs an implicit argument about causality. Ordinary acts of labelling are juxtaposed with images of advanced technologies: autonomous vehicles, predictive policing systems, automated weapons. This linkage suggests that mundane micro-decisions ripple outward into global consequences.
The viewer is encouraged to see continuity between the click of a mouse and the operation of large-scale systems. This editing strategy disrupts the illusion that AI outputs are spontaneous. Instead, they are revealed as accumulations of countless human judgments.
Sound and the Erasure of Voice
Sound design plays a critical role in conveying hierarchy. Mechanical noises—keyboard taps, mouse clicks, server hums—often overpower human speech. The worker’s voice is subdued, sometimes barely audible beneath the technological atmosphere.
This sonic imbalance mirrors epistemic inequality. The system amplifies data while muting subjectivity. Workers speak, but the algorithm records only their categorical input. By making this dynamic audible, the film emphasizes how identity is reduced to functionality within digital infrastructures.
IV. Conclusion: Toward Ethical Reconfiguration
Humans in the Loop ultimately asks viewers to reconsider their relationship with digital systems. It does not frame workers as passive victims; instead, it reveals them as central yet unacknowledged contributors to technological progress. The ethical problem is not merely bias within the code, but the structural arrangement that renders certain labor invisible and disposable.
The film argues that algorithmic injustice cannot be addressed without confronting economic inequality. As long as AI development relies on precarious global labor markets, technological “innovation” will remain entangled with exploitation.
By shifting the viewer’s gaze from the interface to the worker, the film performs an act of rehumanization. It restores depth where the algorithm sees only surfaces. In doing so, it transforms spectatorship into awareness.
The central insight is clear: artificial intelligence is never purely artificial. It is a layered construction of human choices, cultural assumptions, and economic structures. To imagine ethical AI, we must begin by acknowledging the people who make it possible.