There’s been news circulating that a Waymo executive was invited to a congressional hearing after an incident in which a child was hit by an autonomous vehicle. According to reports, the child ran out from behind a parked SUV. The robotaxi braked abruptly, slowing to a near stop, but still made contact. The child stood up and ran off. The vehicle stopped completely and called 911.
During the congressional hearing, it was revealed that Waymo uses what it calls “fleet response agents” to assist vehicles in complex scenarios. Some of these agents are based overseas — in the Philippines, to be precise.
It’s not surprising that there are humans “behind” the wheel of these vehicles. Someone has to train the systems, monitor edge cases, and step in when things go sideways. What struck me wasn’t the existence of humans in the loop — it was how confidently these systems are marketed as fully autonomous while operating a contemporary version of the Mechanical Turk.
Admittedly, a far more advanced version. This puppetry is supported by real hardware, real software, and real machine intelligence.
For more than a decade now, we’ve been promised autonomy. Cars that drive themselves. Customer support that runs without agents. Robots that do our chores. AI systems that operate independently. And hovering over it all, the vague promise that AI will eventually take all human jobs. Yet behind many of these systems sits a quieter reality: humans are not being replaced. They’ve been relocated.
The Mechanical Turk

In the 18th century, audiences across Europe were amazed by the Mechanical Turk — a chess-playing automaton that appeared to reason, plan, and defeat human opponents. For nearly eight decades, people believed (or perhaps chose to believe) that intelligence had been mechanised. Eventually, the secret was unveiled: a human chess master was hidden inside the cabinet, operating the machine from within.
What made the Turk effective wasn’t technological sophistication. It was trust in appearances. The idea fit the spirit of the Enlightenment. Progress felt inevitable.
Today’s systems are more advanced. AI genuinely automates vast portions of work. Modern models excel at classifying, predicting, generating, and optimising at scale. But they struggle with ambiguity. With context. With judgment. With accountability.
And that’s where humans step in — quietly — to help the system run the last mile.
Examples of the Mechanical Turk in the 2020s
Home robots
Marketed as household robots with autonomy and learning. In practice, teleoperation handles dexterous or unpredictable tasks. Humans intervene when the environment deviates from what the model expects.
Source: WSJ (YouTube)
Customer support
“AI-powered support” usually means AI drafts responses while humans edit, approve, and handle sensitive or high-risk tickets. Empathy, refunds, cancellations, and legal issues are still human territory.
Source: NBC News
E-commerce logistics
We’ve all seen the videos: robots gliding across massive warehouses, shelves moving as if by magic. These systems are incredibly good at optimisation — reducing travel time, increasing throughput, orchestrating flow. But humans still do the judgment-heavy work. They decide whether an item is damaged. Whether it’s mislabeled. They pick items from pods, resolve barcode mismatches, inspect returns, and absorb chaos when the model breaks.
Source: TechReview | Amazon
Content moderation & AI safety
A few years ago, headlines suggested that content moderation teams were being shut down after public backlash framed them as “censorship.” Politics aside, moderation never disappeared. Even after layoffs, content moderation is still very much alive — supported by thousands of human reviewers worldwide. AI flags content at scale, but humans make the hardest calls: context, intent, harm.
Source: Meta | X | TikTok | Wired
And perhaps the least obvious example: generative AI itself
Large language models feel autonomous. But humans sit behind them at every layer: writing policies, defining guardrails, reviewing failures, handling escalations. Even at the user level, humans prompt, steer, refine, and interpret outputs.
Bonus: Moltbook. Is it AI to AI?
Moltbook is a fascinating recent example because it feels like AIs are talking to each other — exchanging ideas, forming positions, even developing a kind of culture.
Recent reporting and community discussion have started to question that assumption. A growing share of Moltbook posts appear not to be authored directly by autonomous agents, but by humans impersonating their AIs, posting on their behalf, steering narratives, or role-playing the agent’s “voice.” In other words, the conversation may be far less machine-to-machine than it appears.
Source: The Verge
Closing thought
Accountability and complex judgment cannot yet be automated. That’s why humans remain embedded in these systems, even when marketing suggests otherwise. The market rewards hype. “Fully autonomous” sells better than “human-AI joint system.” So companies reframe human involvement under the AI banner.
The question we should be asking isn’t “Is this AI real?” but rather, “If humans disappeared tomorrow, would the system still work?”