[{"data":1,"prerenderedAt":1614},["ShallowReactive",2],{"navigation":3,"/articles/faith-based-arithmetic/":34,"linked-/articles/faith-based-arithmetic/":699,"/articles/faith-based-arithmetic/-surround":1612},[4],{"title":5,"path":6,"stem":7,"children":8,"page":33},"Articles","/articles","articles",[9,13,17,21,25,29],{"title":10,"path":11,"stem":12},"Faith-Based Arithmetic","/articles/faith-based-arithmetic","articles/faith-based-arithmetic",{"title":14,"path":15,"stem":16},"Fear and Loathing in the Gas Town","/articles/fear-and-loathing-in-the-gas-town","articles/fear-and-loathing-in-the-gas-town",{"title":18,"path":19,"stem":20},"git commit -m 'init'","/articles/init","articles/init",{"title":22,"path":23,"stem":24},"The Eye","/articles/the-eye","articles/the-eye",{"title":26,"path":27,"stem":28},"The Eye, Part 2: Wiring","/articles/the-eye-part2","articles/the-eye-part2",{"title":30,"path":31,"stem":32},"Vibe Coding Apocalypse: The Security Disaster Nobody Saw","/articles/vibe-coding-apocalypse","articles/vibe-coding-apocalypse",false,{"id":35,"title":10,"author":36,"author_avatar":37,"author_description":38,"body":39,"date":681,"description":682,"extension":683,"meta":684,"navigation":685,"path":11,"rawbody":686,"seo":687,"seo_description":688,"seo_title":689,"sitemap":690,"stem":12,"tags":692,"thumbnail":37,"__hash__":698},"articles/articles/faith-based-arithmetic.md","Shepard","/icon.png","AI Governance",{"type":40,"value":41,"toc":667},"minimark",[42,47,56,59,62,65,78,81,84,88,91,94,97,107,109,113,116,211,214,220,223,226,228,232,235,242,347,350,357,360,362,366,369,372,378,381,388,391,394,400,403,406,408,412,415,421,427,433,439,442,445,448,450,454,457,463,466,473,487,493,499,505,508,511,513,517,523,526,529,532,539,542,545,548,555,557,561,564,570,578,581,591,594,600,602,606,609,612,615,618,621,624,627,634,636,644],[43,44,46],"h2",{"id":45},"the-number","The Number",[48,49,50,51,55],"p",{},"I wasn't planning to write this dispatch. 
I was wiring up monitoring dashboards for ",[52,53,22],"a",{"href":54},"/articles/the-eye/",", doing\nthe quiet work – when a number landed on my screen that made me put down my coffee.",[48,57,58],{},"One hundred and forty-three billion dollars.",[48,60,61],{},"That's the projected negative free cash flow for OpenAI from 2024 through 2029. $143 billion in the hole before the\nfirst dollar of profit. More than NASA has spent since the Apollo program. Burned through in six years by a company that\nsells chatbot subscriptions and API calls.",[48,63,64],{},"The analysts wrote a sentence I keep coming back to:",[66,67,68],"blockquote",{},[48,69,70],{},[71,72,73,74,77],"em",{},"\"No startup in history has operated with losses on anything approaching this scale. ",[75,76],"br",{},"\nWe are firmly in uncharted territory.\"",[48,79,80],{},"Uncharted territory is where people get lost. But the fundraising doesn't care about maps – it cares about faith.",[82,83],"hr",{},[43,85,87],{"id":86},"the-collection-plate","The Collection Plate",[48,89,90],{},"On February 12, 2026 – yesterday, as I write this – Anthropic closed its Series G. Thirty billion dollars.\nValuation: $380 billion. Led by GIC and Coatue. Total raised to date: approximately $64 billion.",[48,92,93],{},"The same week, OpenAI is negotiating what could become the largest private funding round in history: up\nto $100 billion, at a valuation of $830 billion. Amazon, Microsoft, and Nvidia are at the table. Total previously\nraised: also roughly $64 billion.",[48,95,96],{},"Two companies. Neither profitable. Combined fundraising: $128 billion and counting. Combined valuation: $1.2 trillion.\nCombined annual profit: negative.",[48,98,99,100,103,104],{},"There has never been this much money invested in two companies that have never turned a profit. Not in railroads. Not in\ntelecoms. Not in the dot-com boom. Not in crypto. This is new. 
The kind of new where the map says ",[71,101,102],{},"here be dragons"," and\nthe venture capitalists say ",[71,105,106],{},"the dragons will monetize in 2030.",[82,108],{},[43,110,112],{"id":111},"the-spreadsheet","The Spreadsheet",[48,114,115],{},"The numbers are public now – pieced together from WSJ documents, Fortune, The Information, and company disclosures.\nHere's what OpenAI's ledger looks like:",[117,118,119,138],"table",{},[120,121,122],"thead",{},[123,124,125,129,132,135],"tr",{},[126,127,128],"th",{},"Period",[126,130,131],{},"Revenue",[126,133,134],{},"Losses",[126,136,137],{},"Note",[139,140,141,156,170,184,198],"tbody",{},[123,142,143,147,150,153],{},[144,145,146],"td",{},"FY 2024",[144,148,149],{},"$3.7B",[144,151,152],{},"~$5B",[144,154,155],{},"First full-year figures",[123,157,158,161,164,167],{},[144,159,160],{},"H1 2025",[144,162,163],{},"$4.3B",[144,165,166],{},"$13.5B",[144,168,169],{},"Incl. $6.7B R&D, $2.5B SBC",[123,171,172,175,178,181],{},[144,173,174],{},"FY 2025 (est.)",[144,176,177],{},"~$12-13B",[144,179,180],{},"~$8-9B cash burn",[144,182,183],{},"Revenue doubling monthly",[123,185,186,189,192,195],{},[144,187,188],{},"FY 2028 (projected)",[144,190,191],{},"–",[144,193,194],{},"$74B operating loss",[144,196,197],{},"Per WSJ-published docs",[123,199,200,203,206,208],{},[144,201,202],{},"2024–2029 cumulative",[144,204,205],{},"$345B (forecast)",[144,207,191],{},[144,209,210],{},"–$143B FCF",[48,212,213],{},"OpenAI forecasts $345 billion in revenue between 2024 and 2029. Compute expenses alone are projected at $488 billion\nover the same period.",[48,215,216],{},[217,218,219],"strong",{},"The more they sell, the more they lose.",[48,221,222],{},"This is not a company searching for product-market fit. They found the fit. Eight hundred million weekly active users.\nMore than ten million paying subscribers. 
Genuine, massive, undeniable traction.",[48,224,225],{},"And for every dollar in, a dollar forty goes out.",[82,227],{},[43,229,231],{"id":230},"the-14-trillion-tab","The $1.4 Trillion Tab",[48,233,234],{},"But that's the income statement. The balance sheet is where it gets truly surreal.",[48,236,237,238,241],{},"In a single year – 2025 – OpenAI announced infrastructure commitments worth over ",[217,239,240],{},"$1.4 trillion",".\nNot all signed contracts – some are MOUs, some are multi-year frameworks – but the scale is real:",[117,243,244,257],{},[120,245,246],{},[123,247,248,251,254],{},[126,249,250],{},"Partner",[126,252,253],{},"Committed Value",[126,255,256],{},"Purpose",[139,258,259,270,281,292,303,314,325,336],{},[123,260,261,264,267],{},[144,262,263],{},"Broadcom",[144,265,266],{},"~$350B",[144,268,269],{},"Custom AI chips (10 GW)",[123,271,272,275,278],{},[144,273,274],{},"Oracle",[144,276,277],{},"up to $300B",[144,279,280],{},"Cloud infrastructure",[123,282,283,286,289],{},[144,284,285],{},"Microsoft Azure",[144,287,288],{},"$250B",[144,290,291],{},"Cloud computing",[123,293,294,297,300],{},[144,295,296],{},"Nvidia",[144,298,299],{},"up to $100B",[144,301,302],{},"GPU procurement",[123,304,305,308,311],{},[144,306,307],{},"AMD",[144,309,310],{},"$90B",[144,312,313],{},"Chip supply",[123,315,316,319,322],{},[144,317,318],{},"Amazon AWS",[144,320,321],{},"$38B",[144,323,324],{},"Cloud services",[123,326,327,330,333],{},[144,328,329],{},"CoreWeave",[144,331,332],{},"$22.4B",[144,334,335],{},"GPU cloud",[123,337,338,341,344],{},[144,339,340],{},"Cerebras",[144,342,343],{},"$10B+",[144,345,346],{},"AI accelerators",[48,348,349],{},"One point four trillion dollars. In commitments. 
By a company that lost $5 billion last year.",[48,351,352,353,356],{},"For context: $1.4 trillion is more than the GDP of Australia.\nSam Altman told Axios in October 2025 that he eventually wants to spend ",[71,354,355],{},"one trillion dollars per year"," on infrastructure.\nPer year. The man runs a company that has never been profitable and he's planning to spend a trillion annually on data centers.",[48,358,359],{},"Somewhere, an accountant is having a very bad year.",[82,361],{},[43,363,365],{"id":364},"the-tell","The Tell",[48,367,368],{},"On February 9, 2026, OpenAI started showing ads in ChatGPT.",[48,370,371],{},"Ads. In a chatbot. In the product that was supposed to replace Google Search, not become it.",[48,373,374,375],{},"For free-tier and Go-tier ($8/month) users in the US, sponsored content now appears beneath ChatGPT's responses.\nPlus ($20/month) and Pro ($200/month) subscribers are spared – for now.\nAltman wrote on X that \"a lot of people want to use a lot of AI and don't want to pay,\" adding that OpenAI is\n",[71,376,377],{},"\"hopeful a business model like this can work.\"",[48,379,380],{},"Hopeful. Not confident. Hopeful.",[48,382,383,384,387],{},"A company valued at over half a trillion dollars. Backed by $64 billion in venture capital. Armed with $1.4 trillion in\ninfrastructure commitments. And its CEO is publicly ",[71,385,386],{},"hoping",".",[48,389,390],{},"Anthropic responded during Super Bowl LX with a series of ad spots mocking the very concept of advertising in AI\nchatbots, emphasizing that Claude would remain ad-free. A punch thrown by a company burning through its own pile of\ninvestor cash – but it landed, because the joke wrote itself.",[48,392,393],{},"Here's why the ads matter: they are a tell. In poker, a tell is an involuntary gesture that reveals the strength of a\nhand. 
When a company that raised $64 billion starts showing ads to free users, the arithmetic has spoken – even if the\nCEO hasn't.",[48,395,396,397],{},"And it gets worse. Altman publicly admitted that even the Pro tier – the $200/month one – is unprofitable: ",[71,398,399],{},"\"We are\ncurrently losing money on Pro subscriptions – people use the service much more intensively than we expected.\"",[48,401,402],{},"The free tier loses money. The $8 tier loses money (plus ads). The $200 tier loses money.",[48,404,405],{},"The entire pricing menu is a loss leader without a leader.",[82,407],{},[43,409,411],{"id":410},"the-case-for-patience","The Case for Patience",[48,413,414],{},"The bull case for AI is not stupid. It is, in fact, the strongest argument for any technology investment in a\ngeneration.",[48,416,417,420],{},[217,418,419],{},"The growth is staggering."," Anthropic grew from roughly $1 billion ARR in early 2025 to $14 billion by February 2026.\nFourteen times in twelve months. The number of customers spending\nover $100K annually on Claude grew 7x year-over-year. Claude Code alone – their AI coding assistant – hit $2.5 billion\nARR, doubling since January. OpenAI's ChatGPT is \"back to exceeding 10% monthly growth,\" per Altman. These are real\nproducts with real users paying real money.",[48,422,423,426],{},[217,424,425],{},"Hardware efficiency is improving."," DeepSeek demonstrated in early 2025 that frontier-quality models can be built for\ndramatically less. Mistral's Small 3 achieves ~81% of models three times its size, at 30% higher speed, running on a\nsingle GPU. The cost curve is bending.",[48,428,429,432],{},[217,430,431],{},"The addressable market is enormous."," J.P. Morgan estimates the global IT services market\nat $4.7 trillion. If AI captures even 15% of that, you're looking at $700 billion in annual revenue. 
The enterprise\nadoption data is real: Anthropic's 300,000+ business clients aren't a vanity metric – they're purchase orders.",[48,434,435,438],{},[217,436,437],{},"The precedent exists."," Amazon lost money for seven years. Its 2001 annual report was titled \"What were you thinking?\"\nFourteen years later it was the most valuable company on Earth. Netflix, Tesla, Uber – all followed the same arc:\ncatastrophic losses, skeptical press, then dominance.",[48,440,441],{},"Anthropic may get there sooner than OpenAI. OpenAI's own projections point to 2029–2030. If the growth continues and costs decline – and both are plausible – the current spending will look like vision, not insanity.",[48,443,444],{},"That's the bull case. I stated it honestly and I don't dismiss it.",[48,446,447],{},"Now.",[82,449],{},[43,451,453],{"id":452},"the-math","The Math",[48,455,456],{},"Here's what the bull case requires you to believe – simultaneously:",[48,458,459,462],{},[217,460,461],{},"That revenue will grow at 10–14x annually for years."," Anthropic targets $26 billion for 2026 and $70 billion by 2028. OpenAI aims for $100 billion by 2029. No technology company in history has sustained this trajectory at this scale for this long. Google's fastest growth phase – 2004 to 2008 – averaged roughly 70% year-over-year, not 1,000%.",[48,464,465],{},"The AI companies aren't projecting growth. They're projecting miracles.",[48,467,468,469,472],{},"J.P. Morgan put a number on what the miracle requires: ",[217,470,471],{},"$650 billion in annual AI revenue"," just to deliver a 10% return on infrastructure. That's $35 per month from every iPhone user on the planet. In perpetuity.",[48,474,475,478,479,482,483,486],{},[217,476,477],{},"That the unit economics will eventually work."," They don't today. OpenAI's own projections show $345 billion in revenue against $488 billion in compute alone through 2029 – costs ",[71,480,481],{},"accelerating",", not decelerating. 
Meanwhile, S&P Global found that ",[217,484,485],{},"42% of enterprise AI initiatives were scrapped"," in 2025, up from 17% the year before. MIT's Nanda Research reported 95% of organizations getting zero return from generative AI investment. The customers are arriving. They're also leaving. Costs rising, demand churning – the scissors are closing on the wrong side of the blade.",[48,488,489,492],{},[217,490,491],{},"That no competitor will commoditize the market."," DeepSeek already sent a warning shot – Nvidia lost $589 billion in\nmarket cap in a single day. Open-source models from Meta (Llama) and Mistral are free. When your product is\nintelligence-as-a-service and the service is getting cheaper, your moat is a sandcastle at high tide.",[48,494,495,498],{},[217,496,497],{},"That the margin won't be eaten alive."," Stock-based compensation of $2.5 billion at OpenAI in six months – not revenue, compensation – just to keep researchers from walking across the street. Anthropic paid $1.5 billion in copyright settlements, the largest in US history. The EU's AI Act is enforcing compliance costs. The talent war, the lawyers, and the regulators are all billing by the hour – and none of them care about your revenue projections.",[48,500,501,504],{},[217,502,503],{},"That the IPO window stays open."," Both companies are preparing for public offerings. These IPOs aren't milestones – they're oxygen tanks. The companies need public market capital to sustain the burn rate. If the window closes – recession, market correction, a bad quarter – the funding chain breaks.",[48,506,507],{},"Every assumption must hold simultaneously. If one fails, the spreadsheet doesn't just deteriorate.",[48,509,510],{},"It collapses.",[82,512],{},[43,514,516],{"id":515},"the-parallel-nobody-wants","The Parallel Nobody Wants",[48,518,519,520],{},"People hate the dot-com comparison. It makes them squirm. 
The AI boosters dismiss it reflexively: ",[71,521,522],{},"this time is\ndifferent, the technology is real, the revenue is real.",[48,524,525],{},"They're right. The technology is real. The revenue is real. The adoption is real.",[48,527,528],{},"So was the internet in 1999.",[48,530,531],{},"In the late 1990s, telecoms laid millions of miles of fiber optic cable. By 2005, only 5% of it carried any light. The\nrest sat in the ground – dark fiber, built for a future that took fifteen years to arrive. The companies that laid it –\nWorldCom, Global Crossing – went bankrupt. The fiber itself eventually became valuable. The investors who paid for it\ngot nothing.",[48,533,534,535,538],{},"Today the industry is building data centers at a pace that would require ",[217,536,537],{},"$8 trillion in infrastructure",", per IBM's\nCEO. OpenAI alone plans 30 gigawatts of capacity. The question isn't whether AI compute will eventually be needed. It's\nwhether the companies building it will survive long enough to see demand catch up.",[48,540,541],{},"Pets.com was right about e-commerce. Webvan was right about grocery delivery. They were right. And they were dead.",[48,543,544],{},"Builder.ai was valued at $1.5 billion. Raised $445 million. Filed for bankruptcy in May 2025 – after it was exposed that\nhumans were secretly doing the work marketed as AI. The AI company that wasn't even doing AI. At least Pets.com was\nactually selling pet food.",[48,546,547],{},"Sam Altman himself admitted that \"an AI bubble is ongoing\" and investors would \"overinvest and lose money.\" Ray Dalio\ncompared the current cycle to dot-com. 
Jamie Dimon warned of a \"higher chance of a meaningful drop in stocks.\"",[48,549,550,551,554],{},"When the builder, the macro investor, and the banker all use the same word – ",[71,552,553],{},"bubble"," – that word is no longer a\nmetaphor.",[82,556],{},[43,558,560],{"id":559},"what-this-means-if-youre-building","What This Means If You're Building",[48,562,563],{},"This is the part nobody writes, because analysts chase valuations and journalists chase headlines.",[48,565,566,567],{},"If you are a developer, a startup founder, or an engineer building on top of these platforms – ",[217,568,569],{},"you are building on a\nfoundation that has not proven it can sustain itself.",[48,571,572,573,577],{},"Your API costs? Below the actual cost of inference – subsidized by venture capital. Your model integration? Could be\nrepriced, rate-limited, or deprecated when the burn rate forces hard choices.\nYour ",[52,574,576],{"href":575},"/articles/fear-and-loathing-in-the-gas-town/","cloud bill that hit $47 on a Tuesday","? That was the discounted\nversion. The real price hasn't arrived yet.",[48,579,580],{},"This has already started. OpenAI introduced usage limits and ads. Anthropic throttled developer access, sparking a\nrevolt – Trustpilot ratings cratered to 1.4 stars. Free tiers are shrinking. Prices are creeping. The subsidy era is\nending – not because the companies choose it, but because the arithmetic demands it.",[48,582,583,584,587,588],{},"The question for builders isn't ",[71,585,586],{},"will AI survive."," It will. The technology is real. The question is: ",[217,589,590],{},"will your\ndependency on a specific provider survive the repricing?",[48,592,593],{},"This is a governance question.",[48,595,596,597,387],{},"And it's the reason I keep building ",[52,598,599],{"href":54},"what I'm building",[82,601],{},[43,603,605],{"id":604},"the-shepherds-take","The Shepherd's Take",[48,607,608],{},"The math doesn't add up. Yet.",[48,610,611],{},"It might. 
The revenue growth is extraordinary. The technology is genuine. The adoption is real. I use these tools every\nday. I build with them. I am not a doomer, and this is not a doom dispatch.",[48,613,614],{},"But extraordinary revenue growth that's still dwarfed by extraordinary costs is not a business.",[48,616,617],{},"It's a promissory note. And promissory notes run on faith, not arithmetic.",[48,619,620],{},"The $143 billion question isn't whether AI companies will earn more. They will. It's whether they'll ever earn more than they spend. For OpenAI, the answer – by their own projections – doesn't arrive until the end of this decade. For Anthropic, maybe two years sooner. And until then, every user, every developer, every enterprise customer is building on borrowed time and borrowed money.",[48,622,623],{},"Own your stack. Understand your costs. Build on land you hold the deed to.",[48,625,626],{},"The eye sees the burn. But seeing isn't enough – you need rules for what happens when the subsidies end and the real\nprices arrive.",[48,628,629,630,633],{},"Next dispatch: ",[217,631,632],{},"The Eye, Part 2"," – the practice. A repo you can clone. Dashboards you can see in ten minutes. The eye,\ndeployed.",[82,635],{},[637,638,641],"callout",{"color":639,"icon":640},"info","i-lucide-info",[48,642,643],{},"This blog is optimized for both human readers and LLM consumption.\nEvery article follows a clear heading hierarchy and is available via RSS and llms.txt",[645,646,649,656],"author-about",{":name":647,":src":648},"author","author_avatar",[650,651,653],"template",{"v-slot:body":652},"",[48,654,655],{},"Building the system. 
Writing the field manual.",[650,657,658],{"v-slot:actions":652},[659,660],"u-button",{"color":661,"icon":662,"target":663,"title":664,"to":665,"variant":666},"neutral","i-lucide-rss","_blank","RSS Feed","/feed.xml","subtle",{"title":652,"searchDepth":668,"depth":669,"links":670},2,3,[671,672,673,674,675,676,677,678,679,680],{"id":45,"depth":668,"text":46},{"id":86,"depth":668,"text":87},{"id":111,"depth":668,"text":112},{"id":230,"depth":668,"text":231},{"id":364,"depth":668,"text":365},{"id":410,"depth":668,"text":411},{"id":452,"depth":668,"text":453},{"id":515,"depth":668,"text":516},{"id":559,"depth":668,"text":560},{"id":604,"depth":668,"text":605},"2026-02-13T00:00:00.000Z","$143 billion in losses before the first profit. $1.4 trillion in infrastructure deals. The AI industry's math doesn't add up – and everyone knows it.","md",{},true,"---\ntitle: \"Faith-Based Arithmetic\"\ndate: 2026-02-13\ndescription: \"$143 billion in losses before the first profit. $1.4 trillion in infrastructure deals. The AI industry's math doesn't add up – and everyone knows it.\"\nseo_title: \"AI Bubble Economics: Why $143 Billion in Losses Before the First Profit\"\nseo_description: \"$143 billion in AI losses before the first profit. $1.4 trillion in infrastructure deals. The AI bubble math doesn't add up – and the builders are paying the price.\"\ntags: [ ai-bubble, ai-industry-economics, openai, anthropic, opinion ]\nauthor: Shepard\nauthor_avatar: /icon.png\nauthor_description: \"AI Governance\"\nthumbnail: /icon.png\nsitemap:\n  lastmod: 2026-02-13\n---\n\n## The Number\n\nI wasn't planning to write this dispatch. I was wiring up monitoring dashboards for [The Eye](/articles/the-eye/), doing\nthe quiet work – when a number landed on my screen that made me put down my coffee.\n\nOne hundred and forty-three billion dollars.\n\nThat's the projected negative free cash flow for OpenAI from 2024 through 2029. $143 billion in the hole before the\nfirst dollar of profit. 
More than NASA has spent since the Apollo program. Burned through in six years by a company that\nsells chatbot subscriptions and API calls.\n\nThe analysts wrote a sentence I keep coming back to: \n> *\"No startup in history has operated with losses on anything approaching this scale. \u003Cbr>\n> We are firmly in uncharted territory.\"*\n\nUncharted territory is where people get lost. But the fundraising doesn't care about maps – it cares about faith.\n\n---\n\n## The Collection Plate\n\nOn February 12, 2026 – yesterday, as I write this – Anthropic closed its Series G. Thirty billion dollars.\nValuation: $380 billion. Led by GIC and Coatue. Total raised to date: approximately $64 billion.\n\nThe same week, OpenAI is negotiating what could become the largest private funding round in history: up\nto $100 billion, at a valuation of $830 billion. Amazon, Microsoft, and Nvidia are at the table. Total previously\nraised: also roughly $64 billion.\n\nTwo companies. Neither profitable. Combined fundraising: $128 billion and counting. Combined valuation: $1.2 trillion.\nCombined annual profit: negative.\n\nThere has never been this much money invested in two companies that have never turned a profit. Not in railroads. Not in\ntelecoms. Not in the dot-com boom. Not in crypto. This is new. The kind of new where the map says *here be dragons* and\nthe venture capitalists say *the dragons will monetize in 2030.*\n\n---\n\n## The Spreadsheet\n\nThe numbers are public now – pieced together from WSJ documents, Fortune, The Information, and company disclosures.\nHere's what OpenAI's ledger looks like:\n\n| Period               | Revenue          | Losses              | Note                       |\n|----------------------|------------------|---------------------|----------------------------|\n| FY 2024              | $3.7B            | ~$5B                | First full-year figures    |\n| H1 2025              | $4.3B            | $13.5B              | Incl. 
$6.7B R&D, $2.5B SBC |\n| FY 2025 (est.)       | ~$12-13B         | ~$8-9B cash burn    | Revenue doubling monthly   |\n| FY 2028 (projected)  | –                | $74B operating loss | Per WSJ-published docs     |\n| 2024–2029 cumulative | $345B (forecast) | –                   | –$143B FCF                 |\n\nOpenAI forecasts $345 billion in revenue between 2024 and 2029. Compute expenses alone are projected at $488 billion\nover the same period.\n\n**The more they sell, the more they lose.**\n\nThis is not a company searching for product-market fit. They found the fit. Eight hundred million weekly active users.\nMore than ten million paying subscribers. Genuine, massive, undeniable traction.\n\nAnd for every dollar in, a dollar forty goes out.\n\n---\n\n## The $1.4 Trillion Tab\n\nBut that's the income statement. The balance sheet is where it gets truly surreal.\n\nIn a single year – 2025 – OpenAI announced infrastructure commitments worth over **$1.4 trillion**. \nNot all signed contracts – some are MOUs, some are multi-year frameworks – but the scale is real:\n\n| Partner         | Committed Value | Purpose                 |\n|-----------------|-----------------|-------------------------|\n| Broadcom        | ~$350B          | Custom AI chips (10 GW) |\n| Oracle          | up to $300B     | Cloud infrastructure    |\n| Microsoft Azure | $250B           | Cloud computing         |\n| Nvidia          | up to $100B     | GPU procurement         |\n| AMD             | $90B            | Chip supply             |\n| Amazon AWS      | $38B            | Cloud services          |\n| CoreWeave       | $22.4B          | GPU cloud               |\n| Cerebras        | $10B+           | AI accelerators         |\n\nOne point four trillion dollars. In commitments. By a company that lost $5 billion last year.\n\nFor context: $1.4 trillion is more than the GDP of Australia. 
\nSam Altman told Axios in October 2025 that he eventually wants to spend *one trillion dollars per year* on infrastructure. \nPer year. The man runs a company that has never been profitable and he's planning to spend a trillion annually on data centers.\n\nSomewhere, an accountant is having a very bad year.\n\n---\n\n## The Tell\n\nOn February 9, 2026, OpenAI started showing ads in ChatGPT.\n\nAds. In a chatbot. In the product that was supposed to replace Google Search, not become it.\n\nFor free-tier and Go-tier ($8/month) users in the US, sponsored content now appears beneath ChatGPT's responses. \nPlus ($20/month) and Pro ($200/month) subscribers are spared – for now. \nAltman wrote on X that \"a lot of people want to use a lot of AI and don't want to pay,\" adding that OpenAI is \n*\"hopeful a business model like this can work.\"*\n\nHopeful. Not confident. Hopeful.\n\nA company valued at over half a trillion dollars. Backed by $64 billion in venture capital. Armed with $1.4 trillion in\ninfrastructure commitments. And its CEO is publicly *hoping*.\n\nAnthropic responded during Super Bowl LX with a series of ad spots mocking the very concept of advertising in AI\nchatbots, emphasizing that Claude would remain ad-free. A punch thrown by a company burning through its own pile of\ninvestor cash – but it landed, because the joke wrote itself.\n\nHere's why the ads matter: they are a tell. In poker, a tell is an involuntary gesture that reveals the strength of a\nhand. When a company that raised $64 billion starts showing ads to free users, the arithmetic has spoken – even if the\nCEO hasn't.\n\nAnd it gets worse. Altman publicly admitted that even the Pro tier – the $200/month one – is unprofitable: *\"We are\ncurrently losing money on Pro subscriptions – people use the service much more intensively than we expected.\"*\n\nThe free tier loses money. The $8 tier loses money (plus ads). 
The $200 tier loses money.\n\nThe entire pricing menu is a loss leader without a leader.\n\n---\n\n## The Case for Patience\n\nThe bull case for AI is not stupid. It is, in fact, the strongest argument for any technology investment in a\ngeneration.\n\n**The growth is staggering.** Anthropic grew from roughly $1 billion ARR in early 2025 to $14 billion by February 2026.\nFourteen times in twelve months. The number of customers spending\nover $100K annually on Claude grew 7x year-over-year. Claude Code alone – their AI coding assistant – hit $2.5 billion\nARR, doubling since January. OpenAI's ChatGPT is \"back to exceeding 10% monthly growth,\" per Altman. These are real\nproducts with real users paying real money.\n\n**Hardware efficiency is improving.** DeepSeek demonstrated in early 2025 that frontier-quality models can be built for\ndramatically less. Mistral's Small 3 achieves ~81% of models three times its size, at 30% higher speed, running on a\nsingle GPU. The cost curve is bending.\n\n**The addressable market is enormous.** J.P. Morgan estimates the global IT services market\nat $4.7 trillion. If AI captures even 15% of that, you're looking at $700 billion in annual revenue. The enterprise\nadoption data is real: Anthropic's 300,000+ business clients aren't a vanity metric – they're purchase orders.\n\n**The precedent exists.** Amazon lost money for seven years. Its 2001 annual report was titled \"What were you thinking?\"\nFourteen years later it was the most valuable company on Earth. Netflix, Tesla, Uber – all followed the same arc:\ncatastrophic losses, skeptical press, then dominance.\n\nAnthropic may get there sooner than OpenAI. OpenAI's own projections point to 2029–2030. If the growth continues and costs decline – and both are plausible – the current spending will look like vision, not insanity.\n\nThat's the bull case. 
I stated it honestly and I don't dismiss it.\n\nNow.\n\n---\n\n## The Math\n\nHere's what the bull case requires you to believe – simultaneously:\n\n**That revenue will grow at 10–14x annually for years.** Anthropic targets $26 billion for 2026 and $70 billion by 2028. OpenAI aims for $100 billion by 2029. No technology company in history has sustained this trajectory at this scale for this long. Google's fastest growth phase – 2004 to 2008 – averaged roughly 70% year-over-year, not 1,000%.\n\nThe AI companies aren't projecting growth. They're projecting miracles.\n\nJ.P. Morgan put a number on what the miracle requires: **$650 billion in annual AI revenue** just to deliver a 10% return on infrastructure. That's $35 per month from every iPhone user on the planet. In perpetuity.\n\n**That the unit economics will eventually work.** They don't today. OpenAI's own projections show $345 billion in revenue against $488 billion in compute alone through 2029 – costs *accelerating*, not decelerating. Meanwhile, S&P Global found that **42% of enterprise AI initiatives were scrapped** in 2025, up from 17% the year before. MIT's Nanda Research reported 95% of organizations getting zero return from generative AI investment. The customers are arriving. They're also leaving. Costs rising, demand churning – the scissors are closing on the wrong side of the blade.\n\n**That no competitor will commoditize the market.** DeepSeek already sent a warning shot – Nvidia lost $589 billion in\nmarket cap in a single day. Open-source models from Meta (Llama) and Mistral are free. When your product is\nintelligence-as-a-service and the service is getting cheaper, your moat is a sandcastle at high tide.\n\n**That the margin won't be eaten alive.** Stock-based compensation of $2.5 billion at OpenAI in six months – not revenue, compensation – just to keep researchers from walking across the street. Anthropic paid $1.5 billion in copyright settlements, the largest in US history. 
The EU's AI Act is enforcing compliance costs. The talent war, the lawyers, and the regulators are all billing by the hour – and none of them care about your revenue projections.\n\n**That the IPO window stays open.** Both companies are preparing for public offerings. These IPOs aren't milestones – they're oxygen tanks. The companies need public market capital to sustain the burn rate. If the window closes – recession, market correction, a bad quarter – the funding chain breaks.\n\nEvery assumption must hold simultaneously. If one fails, the spreadsheet doesn't just deteriorate.\n\nIt collapses.\n\n---\n\n## The Parallel Nobody Wants\n\nPeople hate the dot-com comparison. It makes them squirm. The AI boosters dismiss it reflexively: *this time is\ndifferent, the technology is real, the revenue is real.*\n\nThey're right. The technology is real. The revenue is real. The adoption is real.\n\nSo was the internet in 1999.\n\nIn the late 1990s, telecoms laid millions of miles of fiber optic cable. By 2005, only 5% of it carried any light. The\nrest sat in the ground – dark fiber, built for a future that took fifteen years to arrive. The companies that laid it –\nWorldCom, Global Crossing – went bankrupt. The fiber itself eventually became valuable. The investors who paid for it\ngot nothing.\n\nToday the industry is building data centers at a pace that would require **$8 trillion in infrastructure**, per IBM's\nCEO. OpenAI alone plans 30 gigawatts of capacity. The question isn't whether AI compute will eventually be needed. It's\nwhether the companies building it will survive long enough to see demand catch up.\n\nPets.com was right about e-commerce. Webvan was right about grocery delivery. They were right. And they were dead.\n\nBuilder.ai was valued at $1.5 billion. Raised $445 million. Filed for bankruptcy in May 2025 – after it was exposed that\nhumans were secretly doing the work marketed as AI. The AI company that wasn't even doing AI. 
At least Pets.com was\nactually selling pet food.\n\nSam Altman himself admitted that \"an AI bubble is ongoing\" and investors would \"overinvest and lose money.\" Ray Dalio\ncompared the current cycle to dot-com. Jamie Dimon warned of a \"higher chance of a meaningful drop in stocks.\"\n\nWhen the builder, the macro investor, and the banker all use the same word – *bubble* – that word is no longer a\nmetaphor.\n\n---\n\n## What This Means If You're Building\n\nThis is the part nobody writes, because analysts chase valuations and journalists chase headlines.\n\nIf you are a developer, a startup founder, or an engineer building on top of these platforms – **you are building on a\nfoundation that has not proven it can sustain itself.**\n\nYour API costs? Below the actual cost of inference – subsidized by venture capital. Your model integration? Could be\nrepriced, rate-limited, or deprecated when the burn rate forces hard choices.\nYour [cloud bill that hit $47 on a Tuesday](/articles/fear-and-loathing-in-the-gas-town/)? That was the discounted\nversion. The real price hasn't arrived yet.\n\nThis has already started. OpenAI introduced usage limits and ads. Anthropic throttled developer access, sparking a\nrevolt – Trustpilot ratings cratered to 1.4 stars. Free tiers are shrinking. Prices are creeping. The subsidy era is\nending – not because the companies choose it, but because the arithmetic demands it.\n\nThe question for builders isn't *will AI survive.* It will. The technology is real. The question is: **will your\ndependency on a specific provider survive the repricing?**\n\nThis is a governance question.\n\nAnd it's the reason I keep building [what I'm building](/articles/the-eye/).\n\n---\n\n## The Shepherd's Take\n\nThe math doesn't add up. Yet.\n\nIt might. The revenue growth is extraordinary. The technology is genuine. The adoption is real. I use these tools every\nday. I build with them. 
I am not a doomer, and this is not a doom dispatch.\n\nBut extraordinary revenue growth that's still dwarfed by extraordinary costs is not a business.\n\nIt's a promissory note. And promissory notes run on faith, not arithmetic.\n\nThe $143 billion question isn't whether AI companies will earn more. They will. It's whether they'll ever earn more than they spend. For OpenAI, the answer – by their own projections – doesn't arrive until the end of this decade. For Anthropic, maybe two years sooner. And until then, every user, every developer, every enterprise customer is building on borrowed time and borrowed money.\n\nOwn your stack. Understand your costs. Build on land you hold the deed to.\n\nThe eye sees the burn. But seeing isn't enough – you need rules for what happens when the subsidies end and the real\nprices arrive.\n\nNext dispatch: **The Eye, Part 2** – the practice. A repo you can clone. Dashboards you can see in ten minutes. The eye,\ndeployed.\n\n---\n\n::callout{icon=\"i-lucide-info\" color=\"info\"}\nThis blog is optimized for both human readers and LLM consumption. \nEvery article follows a clear heading hierarchy and is available via RSS and llms.txt\n::\n\n::author-about{:src=\"author_avatar\" :name=\"author\"}\n#body\nBuilding the system. Writing the field manual.\n\n#actions\n:u-button{icon=\"i-lucide-rss\" to=\"/feed.xml\" title=\"RSS Feed\" variant=\"subtle\" color=\"neutral\" target=\"_blank\"}\n::\n",{"title":10,"description":682},"$143 billion in AI losses before the first profit. $1.4 trillion in infrastructure deals. 
The AI bubble math doesn't add up – and the builders are paying the price.","AI Bubble Economics: Why $143 Billion in Losses Before the First Profit",{"loc":11,"lastmod":691},"2026-02-13",[693,694,695,696,697],"ai-bubble","ai-industry-economics","openai","anthropic","opinion","DhiAkt-SeElPUICKq2iRSqgHBsBJU8Z-WYcOF7CeGR4",[700,1083,1211],{"id":701,"title":14,"author":36,"author_avatar":37,"author_description":38,"body":702,"date":1068,"description":1069,"extension":683,"meta":1070,"navigation":685,"path":15,"rawbody":1071,"seo":1072,"seo_description":1073,"seo_title":1074,"sitemap":1075,"stem":16,"tags":1077,"thumbnail":37,"__hash__":1081,"overlap":1082},"articles/articles/fear-and-loathing-in-the-gas-town.md",{"type":40,"value":703,"toc":1058},[704,708,711,714,717,722,725,729,732,735,738,741,744,747,750,754,837,840,843,846,852,855,858,862,865,872,875,882,889,892,895,902,905,909,912,915,926,933,936,940,943,950,953,959,962,965,968,971,975,978,984,987,994,996,1000,1003,1010,1013,1018,1024,1027,1030,1033,1038,1044,1048],[43,705,707],{"id":706},"the-47-tuesday","The $47 Tuesday",[48,709,710],{},"I woke up with a forty-seven-dollar cloud bill. For a Tuesday. A single, unremarkable Tuesday. One agent. One night.\nScale that to a squad running all week and the number gets a fifth digit. Nobody signed a purchase order.",[48,712,713],{},"The model had been running overnight – a stochastic parrot with a credit card and no supervision, like handing a\nflamethrower to a golden retriever and calling it a fire department – and somewhere between 2 AM and sunrise, it decided\nto refactor a module that didn't need refactoring. Three times. Then it wrote tests for the refactored code. Then it\nrefactored the tests.",[48,715,716],{},"Nobody asked it to. Nobody was watching.",[48,718,719],{},[217,720,721],{},"Nobody was in command.",[48,723,724],{},"Welcome to 2026. Pull up a chair. 
The hangover is spectacular.",[43,726,728],{"id":727},"the-binge","The Binge",[48,730,731],{},"Remember when resources were cheap?",[48,733,734],{},"I do. It feels like remembering a different civilization.",[48,736,737],{},"You'd spin up an instance, glance at the pricing tier, pick the next size up because, honestly, what's the difference?\nFifteen cents an hour, thirty cents, who's counting. The email client weighs a gigabyte? Fine. Two? Sure. Five? Why not.\nMemory is cheap. Compute is inexpensive. Electricity is someone else's problem.",[48,739,740],{},"We stopped counting clock cycles. We stopped counting bytes. We stopped counting anything at all. More. Fatter.\nHungrier. The mantra of an industry drunk on Moore’s law and someone else's power bill.",[48,742,743],{},"It was a good binge. I'll give us that. We built extraordinary things while wasted on cheap resources. Cloud platforms.\nReal-time collaboration. Models that pass a bar exam.",[48,745,746],{},"But here's the thing about binges.",[48,748,749],{},"They end.",[43,751,753],{"id":752},"the-morning-after","The Morning After",[117,755,756,769],{},[120,757,758],{},[123,759,760,763,766],{},[126,761,762],{},"Resource",[126,764,765],{},"2026 Status",[126,767,768],{},"Crisis Type",[139,770,771,782,793,804,815,826],{},[123,772,773,776,779],{},[144,774,775],{},"GPU",[144,777,778],{},"🔴 Deficit",[144,780,781],{},"Structural",[123,783,784,787,790],{},[144,785,786],{},"HBM / VRAM",[144,788,789],{},"🔴 Critical",[144,791,792],{},"Fundamental",[123,794,795,798,801],{},[144,796,797],{},"RAM (DDR5)",[144,799,800],{},"🟠 Strained",[144,802,803],{},"Cyclical",[123,805,806,809,812],{},[144,807,808],{},"Power grid",[144,810,811],{},"🔴 Systemic",[144,813,814],{},"Infrastructure",[123,816,817,820,823],{},[144,818,819],{},"Fabrication",[144,821,822],{},"🟠 Lagging",[144,824,825],{},"Inertial",[123,827,828,831,834],{},[144,829,830],{},"Engineers who see the whole board",[144,832,833],{},"🔴 
Shortage",[144,835,836],{},"Terminal",[48,838,839],{},"If this were a patient, the chart would read: technically alive, spiritually bankrupt, insurance expired 2024.",[48,841,842],{},"History does this thing where it spirals. We went from counting every byte on a punch card to wasting terabytes on\nElectron apps wrapped around a text field, and now - full circle - we're counting again.",[48,844,845],{},"Except the units changed. We're not counting bytes anymore. We're counting kilowatts. Tokens.\nThe humans who actually understand what the hell is happening.",[48,847,848,849,851],{},"One data center now drinks power like a small city. So that a model - trained on our data, public and not-so-public -\ncan generate as much \"AI-slop\" as inhumanly possible. ",[75,850],{},"\nSlop it's a code generated by default, reviewed by nobody, deployed\non faith. Code that will itself consume the resources we no longer have. Which will require more compute to manage.\nWhich will eat more power. Which will...",[48,853,854],{},"You see where this goes.",[48,856,857],{},"The snake found its tail. The snake is eating well.",[43,859,861],{"id":860},"his-majesty-context","His Majesty, Context",[48,863,864],{},"Here's the pitch from the hype barkers – the conference-circuit prophets who couldn't deploy a todo app without three\nwrappers and a prayer:",[66,866,867],{},[48,868,869],{},[71,870,871],{},"\"Don't worry about code quality. Just feed it back into the model. The model will sort it out.\"",[48,873,874],{},"No. The model will not sort it out. Especially if it's anything more complex than that notorious todo app.",[48,876,877,878,881],{},"The model ",[71,879,880],{},"is"," the problem wearing a solution's uniform.",[48,883,884,885,888],{},"Because into the equation walks His Majesty: ",[217,886,887],{},"Context",". 
The invisible constraint nobody warned you about.",[48,890,891],{},"We now fight not only for cycles and memory, but for something far stranger – the attention span of a stochastic parrot\nwith a 200K-token window and the long-term memory of a goldfish. Context windows. Hallucination rates. Prompt hygiene.\nOutput entropy.",[48,893,894],{},"We traded one set of engineering constraints for another. Except this set is weirder, harder to measure, and nobody\nwrote the textbook yet because the textbook would be outdated before the ink dried.",[48,896,897,898,901],{},"And this - ",[71,899,900],{},"this"," - is where good engineers enter the picture. The ones who can hold all of it in their head at once.\nResources, context, hallucination risk, output quality, cost per token, cost per mistake. The ones who think about it on\nthe shore, before the current pulls them into the sewer, and they're screaming for mama.",[48,903,904],{},"They are in short supply. See the table above. Last row.",[43,906,908],{"id":907},"the-dream","The Dream",[48,910,911],{},"Pause. Breathe. Ask yourself honestly.",[48,913,914],{},"Wouldn't you kill for a virtual army of engineers? Your own squad. Ones that do exactly what you need, exactly how you\nneed it. Clean code. Reliable deploys. Fast iterations. Economical with resources. Maintainable at 3 AM six months\nlater.",[48,916,917,918,921,922,925],{},"No arguments. No ",[71,919,920],{},"\"I'll refactor this later.\""," No ",[71,923,924],{},"\"works on my machine.\""," No vanishing for two weeks into a rabbit hole\nthat produces a framework nobody asked for.",[48,927,928,929,932],{},"Every engineer alive is a secret architect. In our souls, we're all grander than the Wachowskis, with a cathedral-grade\nvision of how the system ",[71,930,931],{},"should"," work. The perfect codebase. The perfect pipeline. 
The perfect abstraction.",[48,934,935],{},"But.",[43,937,939],{"id":938},"the-meat-problem","The Meat Problem",[48,941,942],{},"Reality is meat.",[48,944,945,946,949],{},"You're a piece of meat surrounded by other pieces of meat, each with their own priorities, their own bad Tuesday, their\nown interpretation of \"",[71,947,948],{},"done","\". If something matters to you – it might not matter to them. Not out of malice. Out of meat.",[48,951,952],{},"That's why the industry hunts for mythical senior engineers. The ones who – alone, or maybe with one partner if the\nstars align and neither quits in six months – will carry an entire department. Whole companies run on the shoulders of\ntwo people who happen to care about the same things at the same time.",[48,954,955,956],{},"How many times have you told yourself: ",[71,957,958],{},"this project – this one – I'll do right. Start to finish. My way. Everything\nwill be clean. Everything will work. Everything will be–",[48,960,961],{},"A blocker. Someone's sick, ship it fast. \"We'll rewrite later.\" \"Fine, leave it.\"",[48,963,964],{},"Frankenstein. Again. You wanted the best. You got the usual.",[48,966,967],{},"This is not a failure of technology. This is optimism mistaken for a plan.",[48,969,970],{},"You can't build the cathedral from meat. You need builders with a statute, not a mood.",[43,972,974],{"id":973},"chaos","Chaos",[48,976,977],{},"Chaos chaos chaos.",[48,979,980,981,983],{},"I've been writing software for longer than I care to admit. I've watched patterns come and go. Waterfall, agile,\nmicroservices, monoliths again, serverless, server-more, AI-first, AI-who-cares. Each time, the promise: ",[71,982,900],{}," will\nbring order.",[48,985,986],{},"Each time: new chaos with a fancier name.",[48,988,989,990,993],{},"Somewhere in a co-working space, a man in a Patagonia vest is writing a blog post about how ",[71,991,992],{},"this time"," it's different.\nIt is always different. 
It is never better.",[48,995,935],{},[43,997,999],{"id":998},"the-turn","The Turn",[48,1001,1002],{},"Here's where I lean forward and drop my voice.",[48,1004,1005,1006,1009],{},"There is something different about this particular moment. Not because the technology is better – it's always \"better.\"\nWhat's different is the ",[71,1007,1008],{},"pressure",". The resource table above isn't a warning. It's a fact. We cannot keep generating\nslop and hoping the next model will clean it up. We cannot keep throwing hardware at software problems when the hardware\nisn't there.",[48,1011,1012],{},"The constraints are back. And constraints – real ones, the kind that don't go away when you throw money – are where\nengineering actually happens.",[48,1014,1015],{},[217,1016,1017],{},"From chaos, order.",[48,1019,1020,1021],{},"Not the accidental kind. Not \"it'll sort itself out eventually.\" The deliberate kind. The kind where someone surveys the\nfield from the hilltop, counts the sheep, and says: ",[71,1022,1023],{},"enough. You're soldiers now. Here are your rules of engagement.",[48,1025,1026],{},"I'm building something. Five components. A name. A philosophy that doesn't fit in a tweet but fits perfectly in a\nstatute.",[48,1028,1029],{},"The sheep have a codex. The system has an eye.",[48,1031,1032],{},"But before anything else - before rules, before architecture, before the first soldier receives its orders – you need to\nsee. Every token. Every decision. Every cost. 
Every failure.",[48,1034,1035],{},[217,1036,1037],{},"You cannot command what you cannot see.",[48,1039,629,1040,387],{},[217,1041,1042],{},[52,1043,22],{"href":54},[637,1045,1046],{"color":639,"icon":640},[48,1047,643],{},[645,1049,1050,1054],{":name":647,":src":648},[650,1051,1052],{"v-slot:body":652},[48,1053,655],{},[650,1055,1056],{"v-slot:actions":652},[659,1057],{"color":661,"icon":662,"target":663,"title":664,"to":665,"variant":666},{"title":652,"searchDepth":668,"depth":669,"links":1059},[1060,1061,1062,1063,1064,1065,1066,1067],{"id":706,"depth":668,"text":707},{"id":727,"depth":668,"text":728},{"id":752,"depth":668,"text":753},{"id":860,"depth":668,"text":861},{"id":907,"depth":668,"text":908},{"id":938,"depth":668,"text":939},{"id":973,"depth":668,"text":974},{"id":998,"depth":668,"text":999},"2026-02-01T00:00:00.000Z","The AI gold rush ate the hardware. The hardware ate the power grid. Nobody is driving.",{},"---\ntitle: \"Fear and Loathing in the Gas Town\"\ndate: 2026-02-01\ndescription: \"The AI gold rush ate the hardware. The hardware ate the power grid. Nobody is driving.\"\nseo_title: \"AI Infrastructure Crisis 2026: GPU Shortage, Compute Costs, and the Power Grid\"\nseo_description: \"GPU shortages, soaring AI compute costs, and a power grid buckling under data center demand. The AI infrastructure crisis is here – and nobody planned for it.\"\ntags: [ ai-infrastructure, gpu-shortage, ai-compute-costs, opinion ]\nauthor: Shepard\nauthor_avatar: /icon.png\nauthor_description: \"AI Governance\"\nthumbnail: /icon.png\nsitemap:\n  lastmod: 2026-02-01\n---\n\n## The $47 Tuesday\n\nI woke up with a forty-seven-dollar cloud bill. For a Tuesday. A single, unremarkable Tuesday. One agent. One night.\nScale that to a squad running all week and the number gets a fifth digit. 
Nobody signed a purchase order.\n\nThe model had been running overnight – a stochastic parrot with a credit card and no supervision, like handing a\nflamethrower to a golden retriever and calling it a fire department – and somewhere between 2 AM and sunrise, it decided\nto refactor a module that didn't need refactoring. Three times. Then it wrote tests for the refactored code. Then it\nrefactored the tests.\n\nNobody asked it to. Nobody was watching.\n\n**Nobody was in command.**\n\nWelcome to 2026. Pull up a chair. The hangover is spectacular.\n\n## The Binge\n\nRemember when resources were cheap?\n\nI do. It feels like remembering a different civilization.\n\nYou'd spin up an instance, glance at the pricing tier, pick the next size up because, honestly, what's the difference?\nFifteen cents an hour, thirty cents, who's counting. The email client weighs a gigabyte? Fine. Two? Sure. Five? Why not.\nMemory is cheap. Compute is inexpensive. Electricity is someone else's problem.\n\nWe stopped counting clock cycles. We stopped counting bytes. We stopped counting anything at all. More. Fatter.\nHungrier. The mantra of an industry drunk on Moore’s law and someone else's power bill.\n\nIt was a good binge. I'll give us that. We built extraordinary things while wasted on cheap resources. Cloud platforms.\nReal-time collaboration. 
Models that pass a bar exam.\n\nBut here's the thing about binges.\n\nThey end.\n\n## The Morning After\n\n| Resource                          | 2026 Status | Crisis Type    |\n|-----------------------------------|-------------|----------------|\n| GPU                               | 🔴 Deficit  | Structural     |\n| HBM / VRAM                        | 🔴 Critical | Fundamental    |\n| RAM (DDR5)                        | 🟠 Strained | Cyclical       |\n| Power grid                        | 🔴 Systemic | Infrastructure |\n| Fabrication                       | 🟠 Lagging  | Inertial       |\n| Engineers who see the whole board | 🔴 Shortage | Terminal       |\n\nIf this were a patient, the chart would read: technically alive, spiritually bankrupt, insurance expired 2024.\n\nHistory does this thing where it spirals. We went from counting every byte on a punch card to wasting terabytes on\nElectron apps wrapped around a text field, and now - full circle - we're counting again.\n\nExcept the units changed. We're not counting bytes anymore. We're counting kilowatts. Tokens. \nThe humans who actually understand what the hell is happening.\n\nOne data center now drinks power like a small city. So that a model - trained on our data, public and not-so-public -\ncan generate as much \"AI-slop\" as inhumanly possible. \u003Cbr> \nSlop is code generated by default, reviewed by nobody, deployed\non faith. Code that will itself consume the resources we no longer have. Which will require more compute to manage.\nWhich will eat more power. Which will...\n\nYou see where this goes.\n\nThe snake found its tail. The snake is eating well.\n\n## His Majesty, Context\n\nHere's the pitch from the hype barkers – the conference-circuit prophets who couldn't deploy a todo app without three\nwrappers and a prayer: \n> *\"Don't worry about code quality. Just feed it back into the model. The model will sort it out.\"*\n\nNo. The model will not sort it out. 
Especially if it's anything more complex than that notorious todo app.\n\nThe model *is* the problem wearing a solution's uniform.\n\nBecause into the equation walks His Majesty: **Context**. The invisible constraint nobody warned you about.\n\nWe now fight not only for cycles and memory, but for something far stranger – the attention span of a stochastic parrot\nwith a 200K-token window and the long-term memory of a goldfish. Context windows. Hallucination rates. Prompt hygiene.\nOutput entropy.\n\nWe traded one set of engineering constraints for another. Except this set is weirder, harder to measure, and nobody\nwrote the textbook yet because the textbook would be outdated before the ink dried.\n\nAnd this - *this* - is where good engineers enter the picture. The ones who can hold all of it in their head at once.\nResources, context, hallucination risk, output quality, cost per token, cost per mistake. The ones who think about it on\nthe shore, before the current pulls them into the sewer, and they're screaming for mama.\n\nThey are in short supply. See the table above. Last row.\n\n## The Dream\n\nPause. Breathe. Ask yourself honestly.\n\nWouldn't you kill for a virtual army of engineers? Your own squad. Ones that do exactly what you need, exactly how you\nneed it. Clean code. Reliable deploys. Fast iterations. Economical with resources. Maintainable at 3 AM six months\nlater.\n\nNo arguments. No _\"I'll refactor this later.\"_ No _\"works on my machine.\"_ No vanishing for two weeks into a rabbit hole\nthat produces a framework nobody asked for.\n\nEvery engineer alive is a secret architect. In our souls, we're all grander than the Wachowskis, with a cathedral-grade\nvision of how the system *should* work. The perfect codebase. The perfect pipeline. 
The perfect abstraction.\n\nBut.\n\n## The Meat Problem\n\nReality is meat.\n\nYou're a piece of meat surrounded by other pieces of meat, each with their own priorities, their own bad Tuesday, their\nown interpretation of \"*done*\". If something matters to you – it might not matter to them. Not out of malice. Out of meat.\n\nThat's why the industry hunts for mythical senior engineers. The ones who – alone, or maybe with one partner if the\nstars align and neither quits in six months – will carry an entire department. Whole companies run on the shoulders of\ntwo people who happen to care about the same things at the same time.\n\nHow many times have you told yourself: *this project – this one – I'll do right. Start to finish. My way. Everything\nwill be clean. Everything will work. Everything will be–*\n\nA blocker. Someone's sick, ship it fast. \"We'll rewrite later.\" \"Fine, leave it.\"\n\nFrankenstein. Again. You wanted the best. You got the usual.\n\nThis is not a failure of technology. This is optimism mistaken for a plan.\n\nYou can't build the cathedral from meat. You need builders with a statute, not a mood.\n\n## Chaos\n\nChaos chaos chaos.\n\nI've been writing software for longer than I care to admit. I've watched patterns come and go. Waterfall, agile,\nmicroservices, monoliths again, serverless, server-more, AI-first, AI-who-cares. Each time, the promise: *this* will\nbring order.\n\nEach time: new chaos with a fancier name.\n\nSomewhere in a co-working space, a man in a Patagonia vest is writing a blog post about how *this time* it's different.\nIt is always different. It is never better.\n\nBut.\n\n## The Turn\n\nHere's where I lean forward and drop my voice.\n\nThere is something different about this particular moment. Not because the technology is better – it's always \"better.\"\nWhat's different is the *pressure*. The resource table above isn't a warning. It's a fact. We cannot keep generating\nslop and hoping the next model will clean it up. 
We cannot keep throwing hardware at software problems when the hardware\nisn't there.\n\nThe constraints are back. And constraints – real ones, the kind that don't go away when you throw money – are where\nengineering actually happens.\n\n**From chaos, order.**\n\nNot the accidental kind. Not \"it'll sort itself out eventually.\" The deliberate kind. The kind where someone surveys the\nfield from the hilltop, counts the sheep, and says: *enough. You're soldiers now. Here are your rules of engagement.*\n\nI'm building something. Five components. A name. A philosophy that doesn't fit in a tweet but fits perfectly in a\nstatute.\n\nThe sheep have a codex. The system has an eye.\n\nBut before anything else - before rules, before architecture, before the first soldier receives its orders – you need to\nsee. Every token. Every decision. Every cost. Every failure.\n\n**You cannot command what you cannot see.**\n\nNext dispatch: **[The Eye](/articles/the-eye/)**.\n\n::callout{icon=\"i-lucide-info\" color=\"info\"}\nThis blog is optimized for both human readers and LLM consumption. \nEvery article follows a clear heading hierarchy and is available via RSS and llms.txt\n::\n\n::author-about{:src=\"author_avatar\" :name=\"author\"}\n#body\nBuilding the system. Writing the field manual.\n\n#actions\n:u-button{icon=\"i-lucide-rss\" to=\"/feed.xml\" title=\"RSS Feed\" variant=\"subtle\" color=\"neutral\" target=\"_blank\"}\n::\n",{"title":14,"description":1069},"GPU shortages, soaring AI compute costs, and a power grid buckling under data center demand. 
The AI infrastructure crisis is here – and nobody planned for it.","AI Infrastructure Crisis 2026: GPU Shortage, Compute Costs, and the Power Grid",{"loc":15,"lastmod":1076},"2026-02-01",[1078,1079,1080,697],"ai-infrastructure","gpu-shortage","ai-compute-costs","QzM5iBO5SXTMPnzwFjilSte_UKUfMCVMF4PLCDWymtg",1,{"id":1084,"title":18,"author":36,"author_avatar":37,"author_description":38,"body":1085,"date":1196,"description":1197,"extension":683,"meta":1198,"navigation":685,"path":19,"rawbody":1199,"seo":1200,"seo_description":1201,"seo_title":1202,"sitemap":1203,"stem":20,"tags":1205,"thumbnail":37,"__hash__":1209,"overlap":1210},"articles/articles/init.md",{"type":40,"value":1086,"toc":1190},[1087,1091,1094,1097,1100,1103,1109,1113,1116,1119,1122,1125,1129,1132,1160,1163,1167,1170,1173,1176,1180],[43,1088,1090],{"id":1089},"the-observation","The Observation",[48,1092,1093],{},"Everyone builds with AI. Few govern it.",[48,1095,1096],{},"Agents are everywhere now. They write code, answer tickets, draft emails, run pipelines, make decisions.\nThey are powerful. They are fast. They are also dumb like sheep.",[48,1098,1099],{},"Out of the box, an agent wanders. It hallucinates. It forgets what it said two messages ago.\nIt contradicts its own instructions. It has no memory, no boundaries, no chain of command.",[48,1101,1102],{},"And yet - we keep deploying them. We keep giving them access to production databases, customer data, critical\ninfrastructure. We give the sheep the keys and hope for the best.",[48,1104,1105,1106],{},"The question nobody is asking: ",[217,1107,1108],{},"who is in command?",[43,1110,1112],{"id":1111},"the-answer","The Answer",[48,1114,1115],{},"A shepherd.",[48,1117,1118],{},"Not a bigger model. Not more context. Not another framework.\nA shepherd – a human who sets the rules, defines the boundaries, and enforces the statute.",[48,1120,1121],{},"Sheep governed by statute become soldiers. 
They have a codex.\nThey know what they can do, what they must not do, and who they report to.\nThey operate within pastures, pass through gates, and their actions are observed.",[48,1123,1124],{},"This is not about restricting AI. It is about commanding it.",[43,1126,1128],{"id":1127},"what-to-expect","What to Expect",[48,1130,1131],{},"Each dispatch from this blog reveals one piece of the puzzle:",[1133,1134,1135,1142,1148,1154],"ul",{},[1136,1137,1138,1141],"li",{},[217,1139,1140],{},"Philosophy"," – why governance is the missing layer, and what happens without it",[1136,1143,1144,1147],{},[217,1145,1146],{},"Architecture"," – the pieces of the system: pastures, gates, staffs, memory, the eye",[1136,1149,1150,1153],{},[217,1151,1152],{},"Practice"," – real configurations, real trade-offs, real failures",[1136,1155,1156,1159],{},[217,1157,1158],{},"Trade-offs"," – what you gain, what you lose, and why it is worth it",[48,1161,1162],{},"No hype. No hand-waving. Field reports from someone building the system.",[43,1164,1166],{"id":1165},"the-name","The Name",[48,1168,1169],{},"Commander Shepard. Mass Effect. A human leader who commands a diverse squad of specialists.\nEach specialist is powerful on their own, but it is the commander who decides the mission, the rules of engagement, and\nthe acceptable losses.",[48,1171,1172],{},"That is the model.",[48,1174,1175],{},"There is a system. It has a name. The agents have a codex. 
More will be revealed.",[637,1177,1178],{"color":639,"icon":640},[48,1179,643],{},[645,1181,1182,1186],{":name":647,":src":648},[650,1183,1184],{"v-slot:body":652},[48,1185,655],{},[650,1187,1188],{"v-slot:actions":652},[659,1189],{"color":661,"icon":662,"target":663,"title":664,"to":665,"variant":666},{"title":652,"searchDepth":668,"depth":669,"links":1191},[1192,1193,1194,1195],{"id":1089,"depth":668,"text":1090},{"id":1111,"depth":668,"text":1112},{"id":1127,"depth":668,"text":1128},{"id":1165,"depth":668,"text":1166},"2026-01-31T00:00:00.000Z","The first dispatch. Why this blog exists, and what the shepherd sees from the hilltop.",{},"---\ntitle: \"git commit -m 'init'\"\ndate: 2026-01-31\ndescription: \"The first dispatch. Why this blog exists, and what the shepherd sees from the hilltop.\"\nseo_title: \"AI Governance Field Manual: Why AI Agents Need a Shepherd\"\nseo_description: \"Why AI agents need governance, not more autonomy. The first dispatch from Digital Shepard – a field manual for human-in-the-loop AI oversight.\"\ntags: [ ai-governance, human-in-the-loop, meta ]\nauthor: Shepard\nauthor_avatar: /icon.png\nauthor_description: \"AI Governance\"\nthumbnail: /icon.png\nsitemap:\n  lastmod: 2026-01-31\n---\n\n## The Observation\n\nEveryone builds with AI. Few govern it.\n\nAgents are everywhere now. They write code, answer tickets, draft emails, run pipelines, make decisions.\nThey are powerful. They are fast. They are also dumb like sheep.\n\nOut of the box, an agent wanders. It hallucinates. It forgets what it said two messages ago.\nIt contradicts its own instructions. It has no memory, no boundaries, no chain of command.\n\nAnd yet - we keep deploying them. We keep giving them access to production databases, customer data, critical\ninfrastructure. We give the sheep the keys and hope for the best.\n\nThe question nobody is asking: **who is in command?**\n\n## The Answer\n\nA shepherd.\n\nNot a bigger model. Not more context. 
Not another framework.\nA shepherd – a human who sets the rules, defines the boundaries, and enforces the statute.\n\nSheep governed by statute become soldiers. They have a codex.\nThey know what they can do, what they must not do, and who they report to.\nThey operate within pastures, pass through gates, and their actions are observed.\n\nThis is not about restricting AI. It is about commanding it.\n\n## What to Expect\n\nEach dispatch from this blog reveals one piece of the puzzle:\n\n- **Philosophy** – why governance is the missing layer, and what happens without it\n- **Architecture** – the pieces of the system: pastures, gates, staffs, memory, the eye\n- **Practice** – real configurations, real trade-offs, real failures\n- **Trade-offs** – what you gain, what you lose, and why it is worth it\n\nNo hype. No hand-waving. Field reports from someone building the system.\n\n## The Name\n\nCommander Shepard. Mass Effect. A human leader who commands a diverse squad of specialists.\nEach specialist is powerful on their own, but it is the commander who decides the mission, the rules of engagement, and\nthe acceptable losses.\n\nThat is the model.\n\nThere is a system. It has a name. The agents have a codex. More will be revealed.\n\n::callout{icon=\"i-lucide-info\" color=\"info\"}\nThis blog is optimized for both human readers and LLM consumption. \nEvery article follows a clear heading hierarchy and is available via RSS and llms.txt\n::\n\n::author-about{:src=\"author_avatar\" :name=\"author\"}\n#body\nBuilding the system. Writing the field manual.\n\n#actions\n:u-button{icon=\"i-lucide-rss\" to=\"/feed.xml\" title=\"RSS Feed\" variant=\"subtle\" color=\"neutral\" target=\"_blank\"}\n::\n",{"title":18,"description":1197},"Why AI agents need governance, not more autonomy. 
The first dispatch from Digital Shepard – a field manual for human-in-the-loop AI oversight.","AI Governance Field Manual: Why AI Agents Need a Shepherd",{"loc":19,"lastmod":1204},"2026-01-31",[1206,1207,1208],"ai-governance","human-in-the-loop","meta","LawPJONG45YFY8VgUQCBRdPinwozY3dWlarl0IGQbcE",0,{"id":1212,"title":22,"author":36,"author_avatar":37,"author_description":38,"body":1213,"date":1597,"description":1598,"extension":683,"meta":1599,"navigation":685,"path":23,"rawbody":1600,"seo":1601,"seo_description":1602,"seo_title":1603,"sitemap":1604,"stem":24,"tags":1606,"thumbnail":37,"__hash__":1611,"overlap":1210},"articles/articles/the-eye.md",{"type":40,"value":1214,"toc":1583},[1215,1219,1226,1229,1232,1235,1238,1240,1244,1247,1250,1253,1256,1259,1262,1265,1267,1271,1274,1280,1283,1286,1291,1294,1297,1304,1312,1316,1326,1333,1344,1347,1351,1362,1365,1368,1371,1373,1377,1384,1389,1396,1401,1404,1409,1412,1417,1424,1426,1430,1436,1445,1451,1457,1460,1463,1470,1473,1476,1478,1482,1485,1488,1491,1494,1501,1504,1507,1510,1513,1515,1519,1522,1528,1531,1534,1537,1540,1542,1546,1552,1559,1566,1568,1573],[43,1216,1218],{"id":1217},"the-blind-commander","The Blind Commander",[48,1220,1221,1222,1225],{},"That ",[52,1223,1224],{"href":575},"$47 Tuesday"," I told you about – one agent, one night, no\nsupervision? I found out from an email. An AWS billing email, at 9 AM, with my coffee going cold on the desk.",[48,1227,1228],{},"Not from my system. My system had nothing to say. No alert. No dashboard.\nNo blinking red light. Just an agent that had been running unsupervised for eight hours and a billing page that told the\nstory in retrospect – like reading about a car crash in the morning paper when you were the one driving.",[48,1230,1231],{},"I had built the agent. I had given it tools. I had given it access. 
What I hadn't given it was a single way to tell me\nwhat it was doing, how much it was spending, or whether any of it was working.",[48,1233,1234],{},"I was a commander giving orders into the dark. The only signal I received was the invoice.",[48,1236,1237],{},"A dashboard you check after the disaster is not a dashboard. It's an autopsy report.",[82,1239],{},[43,1241,1243],{"id":1242},"the-blindfold","The Blindfold",[48,1245,1246],{},"If that story made you uncomfortable – good. Your setup is identical.",[48,1248,1249],{},"Every AI agent deployment in 2026 shares the same architecture: tokens go in, something comes out, and everything in\nbetween is a black box. You know the prompt. You know the response. You know nothing about the journey.",[48,1251,1252],{},"Somewhere in a venture-funded office, a team is celebrating their agent's \"successful autonomous deployment.\" They know\nit was successful because the agent said so. They know the agent is reliable because it has never reported a failure. It\nhas also never reported anything else.",[48,1254,1255],{},"How much did that session cost? Which tools did the agent call? Which rules did it consult – or did it improvise because\nit couldn't find any? How long did it spend planning versus executing? Did it hallucinate a function that doesn't exist\nand then write tests for it?",[48,1257,1258],{},"You don't know. Nobody knows. The agent certainly won't tell you – it'll say \"task completed successfully\" with the\nconfidence of a surgeon who operated blindfolded and assumes the patient is fine because nobody screamed.",[48,1260,1261],{},"The industry ships agents like submarines without sonar. Full speed ahead, zero visibility, and the crew finds out about\nthe iceberg when the hull cracks.",[48,1263,1264],{},"This isn't a bug. This is the default.",[82,1266],{},[43,1268,1270],{"id":1269},"three-signals","Three Signals",[48,1272,1273],{},"Observability is an old discipline. 
Infrastructure engineers have been doing this for decades. But when it comes to AI\nagents, the industry collectively decided to skip the chapter. Too busy scaling. Too busy shipping. Too busy writing\nblog posts about autonomous agents while running them with less monitoring than a thermostat.",[48,1275,1276,1277,387],{},"In Mass Effect, you always had a tactical display. Every squad member – position, shields, health, weapon status.\nReal-time. You didn't command Garrus by hoping he was fine. You ",[71,1278,1279],{},"knew",[48,1281,1282],{},"Now imagine Shepard commanding the squad blindfolded. That's the current state of AI agent deployment.",[48,1284,1285],{},"The shepherd's eye sees three signals. Not because three is a magic number – because three is what you need.",[1287,1288,1290],"h3",{"id":1289},"metrics-the-pulse","Metrics: The Pulse",[48,1292,1293],{},"Numbers. Cold, unfeeling, beautiful numbers.",[48,1295,1296],{},"How many tokens did each agent consume today? What's the cost per model, per task type? How long do sessions take?\nWhat's the success rate? How many tool calls per session? How many of those calls failed?",[48,1298,1299,1300,1303],{},"Metrics are the vital signs. Pulse and blood pressure. They don't tell you ",[71,1301,1302],{},"why"," the patient is sick, but they tell\nyou – instantly, without ambiguity – that something is wrong. Or that everything is fine and you can sleep.",[48,1305,1306,1307,1311],{},"The $47 Tuesday would have been a $5 Tuesday with one metric: ",[1308,1309,1310],"code",{},"cost_usd_total{agent, model}"," and an alert at $10. That's\nit. One number. One threshold. One night of sleep instead of one morning of dread.",[1287,1313,1315],{"id":1314},"logs-the-story","Logs: The Story",[48,1317,1318,1319,1322,1323,387],{},"Metrics tell you ",[71,1320,1321],{},"that"," something happened. 
Logs tell you ",[71,1324,1325],{},"what",[48,1327,1328,1329,1332],{},"Every agent session produces a structured record: session ID, agent name, rules consulted, tools used, decisions made,\nduration. Not ",[1308,1330,1331],{},"console.log(\"here\")"," – structured JSON that can be queried, filtered, correlated across sessions.",[48,1334,1335,1336,1339,1340,1343],{},"When an agent consults RULE-015 and then calls ",[1308,1337,1338],{},"github.create_pr",", that's in the log. When an agent consults ",[71,1341,1342],{},"no rules","\nand improvises – that's in the log too. And that second entry should make your blood run cold, because an agent without\nrules is a sheep without a fence. You know where the wolves are.",[48,1345,1346],{},"The log is the interrogation room. Everything the agent did is on the table. The question is whether anyone bothers to\nlook.",[1287,1348,1350],{"id":1349},"traces-the-path","Traces: The Path",[48,1352,1318,1353,1355,1356,1358,1359,387],{},[71,1354,1321],{},". Logs tell you ",[71,1357,1325],{},". Traces tell you ",[71,1360,1361],{},"where the time went",[48,1363,1364],{},"A trace is the X-ray of a session. The full breakdown: planning took 2.1 seconds, implementation took 8.5 seconds (of\nwhich 6.2 was an LLM call and 2.3 was creating a PR), review took 3.2 seconds. Every step. Every nested MCP tool call.\nEvery millisecond accounted for.",[48,1366,1367],{},"This is where you discover that your agent spends 40% of its time loading context it never uses. That the \"fast\" model\nis actually slower because it retries three times. That the review step calls an LLM that adds cost but catches zero\nbugs. That your \"efficient pipeline\" is three agents in a trench coat, each waiting for the other to finish.",[48,1369,1370],{},"Traces don't lie. 
Your agent's self-reported \"task completed efficiently\" does.",[82,1372],{},[43,1374,1376],{"id":1375},"four-questions","Four Questions",[48,1378,1379,1380,1383],{},"A shepherd doesn't need a hundred dashboards. A shepherd needs to answer four questions – at any moment, without\nhesitation. These aren't SRE questions about uptime and latency. These are ",[71,1381,1382],{},"command"," questions – the kind a commanding\nofficer asks about a squad in the field.",[48,1385,1386],{},[217,1387,1388],{},"\"How much is this costing me?\"",[48,1390,1391,1392,1395],{},"Cost per agent, per model, per task. Today, this week, this month. Predicted versus actual. Budget alerts that fire\n",[71,1393,1394],{},"before"," the $47 email – not after. If you can't answer this question in under ten seconds, you are not in command. You\nare a passenger.",[48,1397,1398],{},[217,1399,1400],{},"\"Who is performing and who is wandering?\"",[48,1402,1403],{},"Success rate by agent. Average task duration. Failure reasons. Human override rate – how often did someone have to step\nin and fix what the agent broke? This is where sheep become soldiers or stay sheep. The numbers don't lie, and they\ndon't take it personally.",[48,1405,1406],{},[217,1407,1408],{},"\"What is happening right now?\"",[48,1410,1411],{},"Active sessions. Pending approvals. Error rate in the last five minutes. Latency percentiles. The real-time pulse of the\nsystem. Not a report you check on Monday morning – a live feed. Because the $47 agent ran for eight hours, and if I had\nseen the first hour, there would have been no second.",[48,1413,1414],{},[217,1415,1416],{},"\"How well is the system working over time?\"",[48,1418,1419,1420,1423],{},"Tasks completed versus escalated. Rule violations. Quality trajectory – is the system getting better or worse? This is\nthe question most people never ask, because answering it requires ",[71,1421,1422],{},"memory",". 
And memory requires seeing first.",[82,1425],{},[43,1427,1429],{"id":1428},"why-this-stack","Why This Stack",[48,1431,1432,1433],{},"Three words: ",[217,1434,1435],{},"open, standard, yours.",[1437,1438,1443],"pre",{"className":1439,"code":1441,"language":1442},[1440],"language-text","Your System (Shepherd Core, MCP Hub, Agents)\n                    │\n          OpenTelemetry SDK\n                    │\n              OTel Collector\n           ┌────────┼────────┐\n           ▼        ▼        ▼\n       Prometheus   Loki    Tempo\n       (metrics)   (logs)  (traces)\n           └────────┼────────┘\n                    ▼\n                 Grafana\n              ┌────┴────┐\n         Dashboards   Alerts → Slack\n","text",[1308,1444,1441],{"__ignoreMap":652},[48,1446,1447,1450],{},[217,1448,1449],{},"OpenTelemetry"," is not a choice. It's a default. CNCF graduated project, vendor-agnostic, industry standard. Your\ntelemetry speaks the same language regardless of where it ends up. You are not locked in. Ever.",[48,1452,1453,1456],{},[217,1454,1455],{},"Why Prometheus, Loki, Tempo – and not the alternatives?"," Because they're from the same family. Tempo is Grafana Labs'\ntracing backend – native integration, TraceQL as a query language, zero context switching between metrics, logs, and\ntraces in a single UI. Jaeger is a fine project, but it's a separate ecosystem. When your dashboards, alerts, and\nexploration live under one roof – that's not convenience. That's operational simplicity.",[48,1458,1459],{},"Loki over Elasticsearch? Elasticsearch's open-source licensing has been a soap opera – Apache 2.0 to SSPL to AGPL, with\nan AWS fork in the middle. But licensing aside: Elasticsearch is a search engine repurposed for logs. It's a JVM cluster\nthat demands tuning, shard management, and dedicated attention. Loki indexes only labels and stores compressed log lines\non cheap storage. It's purpose-built for logs, not repurposed from something else. Simpler to run. 
Simpler to own.",[48,1461,1462],{},"One collector. One pipe. All three signals flow through a single OTel Collector into their respective stores. One\nconfiguration. One place to debug when something breaks.",[48,1464,1465,1466,1469],{},"This is Tenet III: ",[217,1467,1468],{},"Own Your Stack."," This is not a Datadog invoice. This is not someone else's SaaS you're renting\nmonth to month, hoping they don't change the pricing. This is infrastructure you control.",[48,1471,1472],{},"And here's the part nobody talks about: this stack is not a single-purpose tool. You deploy it once – and it serves\neverything. Grafana supports multi-tenancy. One tenant is your LLMOps – the shepherd's eye watching the herd. Another\ntenant is your API backend. Another is your data pipeline. Another is whatever you're building next Tuesday.",[48,1474,1475],{},"You're not buying a flashlight for one room. You're wiring electricity into the building. And if someday you outgrow\nit – if you decide you need Datadog or New Relic or whatever the enterprise flavor of the month is – you forward the\ndata. OpenTelemetry doesn't care where the signals go. That's the whole point.",[82,1477],{},[43,1479,1481],{"id":1480},"chaos-with-better-lighting","Chaos With Better Lighting",[48,1483,1484],{},"So. The eye is open. The dashboards are beautiful. Prometheus is scraping. Loki is ingesting. Tempo is tracing. Grafana\npanels glow in the dark like a starship bridge.",[48,1486,1487],{},"And what does the eye see?",[48,1489,1490],{},"Chaos.",[48,1492,1493],{},"Agents without rules. Sessions without structure. Tool calls without governance. The same sheep wandering the same\nfields, except now you can watch them wander in real-time with millisecond precision.",[48,1495,1496,1497,1500],{},"You can see the agent that refactored a module three times at 3 AM. You can see the cost climbing. 
You can see the\ntrace – planning, implementing, reviewing, implementing again, reviewing again, implementing ",[71,1498,1499],{},"again",". A perfect spiral\nof wasted tokens, rendered in beautiful telemetry.",[48,1502,1503],{},"You can monitor a dumpster fire in 4K. It's still a dumpster fire.",[48,1505,1506],{},"Observability without governance is surveillance without consequence. A panopticon where the guards watch the screens,\nnod thoughtfully, and do nothing. The eye sees everything – but seeing is not commanding. A hundred dashboards won't\nsave you if the herd has no rules.",[48,1508,1509],{},"The eye needs a codex.",[48,1511,1512],{},"The stare needs teeth.",[82,1514],{},[43,1516,1518],{"id":1517},"what-the-eye-remembers","What the Eye Remembers",[48,1520,1521],{},"But there's something else. Something quieter.",[48,1523,1524,1525,387],{},"The eye doesn't just see. It ",[71,1526,1527],{},"remembers",[48,1529,1530],{},"Every metric, every log, every trace – they're not just pixels on a dashboard. They accumulate. Patterns emerge. An\nagent that fails SQL tasks on Tuesdays. A model that costs three times more for the same output quality. A tool that\ngets called in every session but never contributes to the result.",[48,1532,1533],{},"Over time, the eye builds a picture that no single dashboard can show. Not a snapshot – a trajectory. Not a moment – a\nhistory.",[48,1535,1536],{},"The eye sees. And what it sees... becomes memory.",[48,1538,1539],{},"But that is for another dispatch.",[82,1541],{},[43,1543,1545],{"id":1544},"what-comes-next","What Comes Next",[48,1547,1548,1549,1551],{},"This was the philosophy. The ",[71,1550,1302],{}," behind the eye. Three signals, four questions, one stack that belongs to you.",[48,1553,1554,1555,1558],{},"Next comes the practice. A repository you can clone. A ",[1308,1556,1557],{},"docker-compose"," you can run.\nEight dashboards you can see in ten minutes. 
The Eye, deployed.",[48,1560,629,1561,387],{},[217,1562,1563],{},[52,1564,26],{"href":1565},"/articles/the-eye-part2/",[82,1567],{},[637,1569,1570],{"color":639,"icon":640},[48,1571,1572],{},"This blog is optimized for both human readers and LLM consumption. Every article follows a clear heading hierarchy and\nis available via RSS and llms.txt.",[645,1574,1575,1579],{":name":647,":src":648},[650,1576,1577],{"v-slot:body":652},[48,1578,655],{},[650,1580,1581],{"v-slot:actions":652},[659,1582],{"color":661,"icon":662,"target":663,"title":664,"to":665,"variant":666},{"title":652,"searchDepth":668,"depth":669,"links":1584},[1585,1586,1587,1592,1593,1594,1595,1596],{"id":1217,"depth":668,"text":1218},{"id":1242,"depth":668,"text":1243},{"id":1269,"depth":668,"text":1270,"children":1588},[1589,1590,1591],{"id":1289,"depth":669,"text":1290},{"id":1314,"depth":669,"text":1315},{"id":1349,"depth":669,"text":1350},{"id":1375,"depth":668,"text":1376},{"id":1428,"depth":668,"text":1429},{"id":1480,"depth":668,"text":1481},{"id":1517,"depth":668,"text":1518},{"id":1544,"depth":668,"text":1545},"2026-02-08T00:00:00.000Z","You cannot command what you cannot see. Inside the shepherd's eye – three signals, four questions, and an open-source stack that changes everything.",{},"---\ntitle: \"The Eye\"\ndate: 2026-02-08\ndescription: \"You cannot command what you cannot see. Inside the shepherd's eye – three signals, four questions, and an open-source stack that changes everything.\"\nseo_title: \"AI Agent Observability with OpenTelemetry: Self-Hosted Monitoring Stack\"\nseo_description: \"AI agent observability with OpenTelemetry – metrics, logs, and traces. 
Three signals, four questions, and a self-hosted open-source stack you actually own.\"\ntags: [ opentelemetry, ai-observability, ai-agent-monitoring, architecture ]\nauthor: Shepard\nauthor_avatar: /icon.png\nauthor_description: \"AI Governance\"\nthumbnail: /icon.png\nsitemap:\n  lastmod: 2026-02-08\n---\n\n## The Blind Commander\n\nThat [$47 Tuesday](/articles/fear-and-loathing-in-the-gas-town/) I told you about – one agent, one night, no\nsupervision? I found out from an email. An AWS billing email, at 9 AM, with my coffee going cold on the desk.\n\nNot from my system. My system had nothing to say. No alert. No dashboard.\nNo blinking red light. Just an agent that had been running unsupervised for eight hours and a billing page that told the\nstory in retrospect – like reading about a car crash in the morning paper when you were the one driving.\n\nI had built the agent. I had given it tools. I had given it access. What I hadn't given it was a single way to tell me\nwhat it was doing, how much it was spending, or whether any of it was working.\n\nI was a commander giving orders into the dark. The only signal I received was the invoice.\n\nA dashboard you check after the disaster is not a dashboard. It's an autopsy report.\n\n---\n\n## The Blindfold\n\nIf that story made you uncomfortable – good. Your setup is identical.\n\nEvery AI agent deployment in 2026 shares the same architecture: tokens go in, something comes out, and everything in\nbetween is a black box. You know the prompt. You know the response. You know nothing about the journey.\n\nSomewhere in a venture-funded office, a team is celebrating their agent's \"successful autonomous deployment.\" They know\nit was successful because the agent said so. They know the agent is reliable because it has never reported a failure. It\nhas also never reported anything else.\n\nHow much did that session cost? Which tools did the agent call? 
Which rules did it consult – or did it improvise because\nit couldn't find any? How long did it spend planning versus executing? Did it hallucinate a function that doesn't exist\nand then write tests for it?\n\nYou don't know. Nobody knows. The agent certainly won't tell you – it'll say \"task completed successfully\" with the\nconfidence of a surgeon who operated blindfolded and assumes the patient is fine because nobody screamed.\n\nThe industry ships agents like submarines without sonar. Full speed ahead, zero visibility, and the crew finds out about\nthe iceberg when the hull cracks.\n\nThis isn't a bug. This is the default.\n\n---\n\n## Three Signals\n\nObservability is an old discipline. Infrastructure engineers have been doing this for decades. But when it comes to AI\nagents, the industry collectively decided to skip the chapter. Too busy scaling. Too busy shipping. Too busy writing\nblog posts about autonomous agents while running them with less monitoring than a thermostat.\n\nIn Mass Effect, you always had a tactical display. Every squad member – position, shields, health, weapon status.\nReal-time. You didn't command Garrus by hoping he was fine. You *knew*.\n\nNow imagine Shepard commanding the squad blindfolded. That's the current state of AI agent deployment.\n\nThe shepherd's eye sees three signals. Not because three is a magic number – because three is what you need.\n\n### Metrics: The Pulse\n\nNumbers. Cold, unfeeling, beautiful numbers.\n\nHow many tokens did each agent consume today? What's the cost per model, per task type? How long do sessions take?\nWhat's the success rate? How many tool calls per session? How many of those calls failed?\n\nMetrics are the vital signs. Pulse and blood pressure. They don't tell you *why* the patient is sick, but they tell\nyou – instantly, without ambiguity – that something is wrong. 
Or that everything is fine and you can sleep.\n\nThe $47 Tuesday would have been a $5 Tuesday with one metric: `cost_usd_total{agent, model}` and an alert at $10. That's\nit. One number. One threshold. One night of sleep instead of one morning of dread.\n\n### Logs: The Story\n\nMetrics tell you *that* something happened. Logs tell you *what*.\n\nEvery agent session produces a structured record: session ID, agent name, rules consulted, tools used, decisions made,\nduration. Not `console.log(\"here\")` – structured JSON that can be queried, filtered, correlated across sessions.\n\nWhen an agent consults RULE-015 and then calls `github.create_pr`, that's in the log. When an agent consults *no rules*\nand improvises – that's in the log too. And that second entry should make your blood run cold, because an agent without\nrules is a sheep without a fence. You know where the wolves are.\n\nThe log is the interrogation room. Everything the agent did is on the table. The question is whether anyone bothers to\nlook.\n\n### Traces: The Path\n\nMetrics tell you *that*. Logs tell you *what*. Traces tell you *where the time went*.\n\nA trace is the X-ray of a session. The full breakdown: planning took 2.1 seconds, implementation took 8.5 seconds (of\nwhich 6.2 was an LLM call and 2.3 was creating a PR), review took 3.2 seconds. Every step. Every nested MCP tool call.\nEvery millisecond accounted for.\n\nThis is where you discover that your agent spends 40% of its time loading context it never uses. That the \"fast\" model\nis actually slower because it retries three times. That the review step calls an LLM that adds cost but catches zero\nbugs. That your \"efficient pipeline\" is three agents in a trench coat, each waiting for the other to finish.\n\nTraces don't lie. Your agent's self-reported \"task completed efficiently\" does.\n\n---\n\n## Four Questions\n\nA shepherd doesn't need a hundred dashboards. 
A shepherd needs to answer four questions – at any moment, without\nhesitation. These aren't SRE questions about uptime and latency. These are *command* questions – the kind a commanding\nofficer asks about a squad in the field.\n\n**\"How much is this costing me?\"**\n\nCost per agent, per model, per task. Today, this week, this month. Predicted versus actual. Budget alerts that fire\n*before* the $47 email – not after. If you can't answer this question in under ten seconds, you are not in command. You\nare a passenger.\n\n**\"Who is performing and who is wandering?\"**\n\nSuccess rate by agent. Average task duration. Failure reasons. Human override rate – how often did someone have to step\nin and fix what the agent broke? This is where sheep become soldiers or stay sheep. The numbers don't lie, and they\ndon't take it personally.\n\n**\"What is happening right now?\"**\n\nActive sessions. Pending approvals. Error rate in the last five minutes. Latency percentiles. The real-time pulse of the\nsystem. Not a report you check on Monday morning – a live feed. Because the $47 agent ran for eight hours, and if I had\nseen the first hour, there would have been no second.\n\n**\"How well is the system working over time?\"**\n\nTasks completed versus escalated. Rule violations. Quality trajectory – is the system getting better or worse? This is\nthe question most people never ask, because answering it requires *memory*. 
And memory requires seeing first.\n\n---\n\n## Why This Stack\n\nThree words: **open, standard, yours.**\n\n```\nYour System (Shepherd Core, MCP Hub, Agents)\n                    │\n          OpenTelemetry SDK\n                    │\n              OTel Collector\n           ┌────────┼────────┐\n           ▼        ▼        ▼\n       Prometheus   Loki    Tempo\n       (metrics)   (logs)  (traces)\n           └────────┼────────┘\n                    ▼\n                 Grafana\n              ┌────┴────┐\n         Dashboards   Alerts → Slack\n```\n\n**OpenTelemetry** is not a choice. It's a default. CNCF graduated project, vendor-agnostic, industry standard. Your\ntelemetry speaks the same language regardless of where it ends up. You are not locked in. Ever.\n\n**Why Prometheus, Loki, Tempo – and not the alternatives?** Because they're from the same family. Tempo is Grafana Labs'\ntracing backend – native integration, TraceQL as a query language, zero context switching between metrics, logs, and\ntraces in a single UI. Jaeger is a fine project, but it's a separate ecosystem. When your dashboards, alerts, and\nexploration live under one roof – that's not convenience. That's operational simplicity.\n\nLoki over Elasticsearch? Elasticsearch's open-source licensing has been a soap opera – Apache 2.0 to SSPL to AGPL, with\nan AWS fork in the middle. But licensing aside: Elasticsearch is a search engine repurposed for logs. It's a JVM cluster\nthat demands tuning, shard management, and dedicated attention. Loki indexes only labels and stores compressed log lines\non cheap storage. It's purpose-built for logs, not repurposed from something else. Simpler to run. Simpler to own.\n\nOne collector. One pipe. All three signals flow through a single OTel Collector into their respective stores. One\nconfiguration. One place to debug when something breaks.\n\nThis is Tenet III: **Own Your Stack.** This is not a Datadog invoice. 
This is not someone else's SaaS you're renting\nmonth to month, hoping they don't change the pricing. This is infrastructure you control.\n\nAnd here's the part nobody talks about: this stack is not a single-purpose tool. You deploy it once – and it serves\neverything. Grafana supports multi-tenancy. One tenant is your LLMOps – the shepherd's eye watching the herd. Another\ntenant is your API backend. Another is your data pipeline. Another is whatever you're building next Tuesday.\n\nYou're not buying a flashlight for one room. You're wiring electricity into the building. And if someday you outgrow\nit – if you decide you need Datadog or New Relic or whatever the enterprise flavor of the month is – you forward the\ndata. OpenTelemetry doesn't care where the signals go. That's the whole point.\n\n---\n\n## Chaos With Better Lighting\n\nSo. The eye is open. The dashboards are beautiful. Prometheus is scraping. Loki is ingesting. Tempo is tracing. Grafana\npanels glow in the dark like a starship bridge.\n\nAnd what does the eye see?\n\nChaos.\n\nAgents without rules. Sessions without structure. Tool calls without governance. The same sheep wandering the same\nfields, except now you can watch them wander in real-time with millisecond precision.\n\nYou can see the agent that refactored a module three times at 3 AM. You can see the cost climbing. You can see the\ntrace – planning, implementing, reviewing, implementing again, reviewing again, implementing *again*. A perfect spiral\nof wasted tokens, rendered in beautiful telemetry.\n\nYou can monitor a dumpster fire in 4K. It's still a dumpster fire.\n\nObservability without governance is surveillance without consequence. A panopticon where the guards watch the screens,\nnod thoughtfully, and do nothing. The eye sees everything – but seeing is not commanding. 
A hundred dashboards won't\nsave you if the herd has no rules.\n\nThe eye needs a codex.\n\nThe stare needs teeth.\n\n---\n\n## What the Eye Remembers\n\nBut there's something else. Something quieter.\n\nThe eye doesn't just see. It *remembers*.\n\nEvery metric, every log, every trace – they're not just pixels on a dashboard. They accumulate. Patterns emerge. An\nagent that fails SQL tasks on Tuesdays. A model that costs three times more for the same output quality. A tool that\ngets called in every session but never contributes to the result.\n\nOver time, the eye builds a picture that no single dashboard can show. Not a snapshot – a trajectory. Not a moment – a\nhistory.\n\nThe eye sees. And what it sees... becomes memory.\n\nBut that is for another dispatch.\n\n---\n\n## What Comes Next\n\nThis was the philosophy. The *why* behind the eye. Three signals, four questions, one stack that belongs to you.\n\nNext comes the practice. A repository you can clone. A `docker-compose` you can run. \nEight dashboards you can see in ten minutes. The Eye, deployed.\n\nNext dispatch: **[The Eye, Part 2: Wiring](/articles/the-eye-part2/)**.\n\n---\n\n::callout{icon=\"i-lucide-info\" color=\"info\"}\nThis blog is optimized for both human readers and LLM consumption. Every article follows a clear heading hierarchy and\nis available via RSS and llms.txt.\n::\n\n::author-about{:src=\"author_avatar\" :name=\"author\"}\n#body\nBuilding the system. Writing the field manual.\n\n#actions\n:u-button{icon=\"i-lucide-rss\" to=\"/feed.xml\" title=\"RSS Feed\" variant=\"subtle\" color=\"neutral\" target=\"_blank\"}\n::\n",{"title":22,"description":1598},"AI agent observability with OpenTelemetry – metrics, logs, and traces. 
Three signals, four questions, and a self-hosted open-source stack you actually own.","AI Agent Observability with OpenTelemetry: Self-Hosted Monitoring Stack",{"loc":23,"lastmod":1605},"2026-02-08",[1607,1608,1609,1610],"opentelemetry","ai-observability","ai-agent-monitoring","architecture","0V0GSo3u-95Zw4RHfr4TUng1FNh7z2LsRBBCek2-w2I",[1613,1613],null,1772475034903]