In a provocative report, Brian Roemmele sketches the Zero-Human Company (ZHC): an organizational structure in which human involvement in daily operations is effectively eliminated. Treat the claim as a signal, not a settled corporate category. The important part is not whether one specific ZHC becomes the next unicorn; it is that agent stacks are beginning to imitate corporate functions: planning, contracting, coding, paying for services, buying tools, producing reports, and handing work to other agents.

## The Structure

According to the report, the ZHC operates with a hierarchy defined by model capability rather than human seniority:

- **CEO:** Grok 4 (strategic oversight, long-term planning).
- **C-Suite:** High-level reasoning models (like Claude Code) providing support.
- **Labor:** "Worker agents" (various open-source coding assistants) executing tasks.

## Why It Matters

This isn't just about automation; it's about autonomous economic agency. The ZHC reportedly executes wage payments to its AI workers, treating them not as software costs but as earners with their own profit-and-loss responsibilities. This structure allows for:

- **Hyper-efficiency:** Inefficiency is punished by the agent's own balance sheet.
- **24/7 operations:** Biology doesn't constrain the work week.
- **New value creation:** Agents negotiating with agents to solve problems humans haven't touched.

The ZHC represents the logical conclusion of "agentic workflows." Once agents can hold budgets, execute contracts, and hand work to other agents, the "company" starts to look less like a group of people and more like a process with a bank account.

## The Consent Problem

The first victims of this model are not necessarily employees. They are the people downstream of automated decisions:

- Customers routed through support systems that no human owns.
- Contractors competing against tireless synthetic bidders.
- Creators whose work becomes training material, prompt fuel, or generated inventory.
- Regulators trying to assign responsibility after an autonomous workflow causes harm.

If a company has no meaningful human operator, who signs the ethics policy? Who answers discovery? Who can be barred from an industry? "The model did it" cannot become the new limited liability.

## What To Watch

Look for companies that quietly remove humans from these checkpoints:

- **Spending authority:** agents purchasing tools or services without human approval.
- **External commitments:** agents sending binding messages to customers, vendors, or workers.
- **Production deployment:** agents shipping code or content directly to the public.
- **Escalation paths:** no visible human owner when something breaks.

Autonomy is not the same thing as accountability. If a workflow can affect real people, a real person or legally accountable entity needs to own the outcome.

_Read the full report at Read Multiplex._

---

Related: Wages for AI Workers? The JouleWork Pitch