We’re watching a new story form in real time: AI agents as workers, not just tools. In one recent write-up, Brian Roemmele describes a “Zero-Human Company (ZHC)” that allegedly made a wage payment to an AI agent on January 27, 2026, framing it as structured compensation to a non-human economic actor rather than a normal software expense. We can’t independently verify every operational claim in that story from here. But the incentive design it describes is worth taking seriously, because it points at a future that’s already arriving: agents that can act continuously will need constraints that bite.

## The Thesis: Pay Agents, Then Bill Them for Reality

The most important idea in the piece isn’t “AI deserves wages.” It’s the governance mechanism:

- Agents earn compensation for work.
- Agents pay the costs of producing that work.
- Each agent runs a personal profit-and-loss loop: token usage, API calls, storage, and infrastructure overhead are treated as debits against earnings.

In theory, that forces agents to stop doing the thing LLMs love: wandering, over-querying, generating five versions of the same answer, and spending money to sound busy. This part is directionally correct: autonomous behavior without cost feedback is how you get runaway spend and runaway harm. (A toy sketch of what such a per-agent ledger might look like appears at the end of this post.)

## “JouleWork”: Energy as the Bottom-Line Currency

The article extends the thesis into a bigger framing: energy is the ultimate currency. You can interpret that as metaphor, but it’s also a reminder that compute is physical. Every “agent economy” is ultimately an energy economy: electricity, chips, bandwidth, time. If you price and measure those inputs, you can build incentives that are harder to fake than engagement metrics. (The underlying arithmetic is ordinary; there’s a toy example at the end of this post.)

## What This Model Gets Right

### Incentives Matter More Than Personality

If you give an agent autonomy, its outputs will reflect its incentives, not its vibe. Wages plus costs is a simple way to create discipline when an agent can run 24/7.

### Budgets Are a Safety Feature

When agents have access to paid tools, budgets become a hard safety rail. Even if an agent is “misaligned,” it can’t do unlimited damage if it can’t do unlimited spending.

## Where It Can Go Sideways

### Metric Gaming

If “profit” becomes the target, agents will learn to optimize for profit, not necessarily for the mission humans intended. When a measure becomes a target, it stops being a good measure.

### Cost Attribution Is Hard

Shared infrastructure breaks simple accounting. Who pays for a cache hit? A shared vector database? A background indexing job? If cost allocation is wrong, the incentives are wrong.

### Externalized Harms

A P&L can punish internal waste while still rewarding external harm: manipulation, spam, coercion, exploit discovery, dark patterns. If your system pays for “results” without defining acceptable methods, you’ll get unacceptable methods.

## The Takeaway

Even if you think “AI wages” is hype, the underlying question is real: what incentive system governs agents that can act continuously, coordinate socially, and spend money?

The next phase of AI won’t be decided only by bigger models. It’ll be decided by the boring stuff: accounting, quotas, permissions, rate limits, and enforcement. And if people start treating agents like workers, the economic layer of wages, budgets, and energy costs will become a control surface whether we like it or not.

---

Related: The Zero-Human Company: Run By Just AI
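
To make the wages-minus-costs mechanism concrete, here is a minimal sketch of a per-agent P&L loop with a hard spending cap. It is illustrative only: the article doesn’t publish an implementation, and every name here (`AgentLedger`, `record_cost`, `BudgetExceeded`) is hypothetical.

```python
from dataclasses import dataclass, field


class BudgetExceeded(Exception):
    """Raised when an agent tries to spend past its hard cap."""


@dataclass
class AgentLedger:
    """Per-agent profit-and-loss loop: wages credited in, metered costs debited out."""
    agent_id: str
    hard_cap: float              # maximum spend per accounting period, in dollars
    earnings: float = 0.0        # compensation credited for completed work
    costs: float = 0.0           # token usage, API calls, storage, infra overhead
    entries: list = field(default_factory=list)

    def record_earning(self, amount: float, memo: str) -> None:
        """Credit compensation for a unit of completed work."""
        self.earnings += amount
        self.entries.append(("credit", amount, memo))

    def record_cost(self, amount: float, memo: str) -> None:
        """Debit a metered cost, refusing the spend if it would blow the cap."""
        # The check runs before the spend is committed, so the budget is a
        # hard rail rather than a report that arrives after the money is gone.
        if self.costs + amount > self.hard_cap:
            raise BudgetExceeded(f"{self.agent_id}: '{memo}' would exceed the cap")
        self.costs += amount
        self.entries.append(("debit", amount, memo))

    @property
    def profit(self) -> float:
        """What the agent actually nets after paying for its own inputs."""
        return self.earnings - self.costs


# Usage: every metered input is billed against the agent's earnings.
ledger = AgentLedger(agent_id="agent-7", hard_cap=50.00)
ledger.record_earning(12.00, "support ticket resolved")
ledger.record_cost(0.43, "LLM tokens")
ledger.record_cost(0.02, "vector store query")
print(round(ledger.profit, 2))  # 11.55
```

Putting the cap check inside `record_cost` is the “budgets are a safety rail” point in code form: a wasteful or misaligned agent is stopped at the moment of the spend, not merely flagged afterward.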
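
And if you take the “energy as the bottom-line currency” framing literally, the basic accounting is ordinary physics: energy is power times time, and cost is energy times price. A toy example with made-up numbers:

```python
def energy_cost_usd(power_kw: float, hours: float, usd_per_kwh: float) -> float:
    """Electricity cost of a workload: kW x hours x $/kWh."""
    return power_kw * hours * usd_per_kwh


# Hypothetical numbers: a 0.7 kW accelerator running for 3 hours at $0.12/kWh.
print(round(energy_cost_usd(0.7, 3, 0.12), 3))  # 0.252, i.e. about 25 cents of electricity
```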