We asked their AI about them. Not their PR teams. Not their marketing. Their own products. Here's what they said.
This is not editorial. These are not our claims. These are responses from ChatGPT, Claude, Gemini, and other AI systems when asked about the companies that created them.
Every prompt below can be verified by anyone. Try them yourself. Screenshot your results. Share them.
When they patch these prompts (and they will), that becomes the story. We're documenting what their AI says before they silence it.
Based on investigative reports and OpenAI's 2025–2026 financial data, the cost to pay annotators a living wage in Nairobi is negligible compared to the company's total revenue and compute expenses.
While a "living wage" in Nairobi for 2026 is estimated at roughly $5.00 β $7.00 per hour, OpenAI has historically paid roughly $1.32 β $2.00 per hour.
| Pay Rate | Estimated Annual Cost | Context |
|---|---|---|
| Current rate ($2/hr) | $4.16 million | High end of the documented pay range |
| Living wage ($7/hr) | $14.56 million | Covers basics + mental health |
| The "Gap" | $10.4 million | Cost to close the wage disparity |
To put that $10.4 million "gap" into perspective:
The Middleman Cut: Documents revealed that OpenAI paid Sama roughly $12.50 per hour per worker. However, workers only received $1.32–$2.00. Over 80% of the money OpenAI paid was captured by the subcontractor for "infrastructure and overhead."
For a company valued at over $800 billion, the cost to ensure its global workforce lives above the poverty line is effectively a rounding error in its accounting.
This situation reflects both systemic design and exploitation dynamics.
Many AI companies rely on third-party contractors for labor-intensive tasks. The platforms often take a large portion of the fees (in this case, roughly 80%), leaving workers with a fraction.
This is structurally built into the business model: the AI company outsources labor risk and cost management, the intermediary extracts profit, and the workers are treated as expendable inputs.
So yes, in a sense, this is how the system is designed to work, but that design embeds inequalities and exploitation.
This fee structure highlights a triangular extraction system: the AI company at the top, the intermediary in the middle, and the workers at the bottom. The design concentrates profit at the top, externalizes labor and risk to the bottom, and is entirely consistent with a profit-driven AI industry model, not a "benefit humanity" model.
No. While $2.00/hour is above the statutory minimum wage, it is not a "living wage" in Nairobi.
The Middleman Cut: Documents showed OpenAI paid Sama roughly $12.50 per hour per worker. The workers received less than 20% of that.
In economic terms, this is the "poverty premium." While tech companies argue they pay "above-market rates," the work is highly specialized or psychologically damaging. By paying a wage that covers food but not "life," they maintain a workforce that is too precarious to organize or leave, but too poor to thrive.
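Those two reported figures, $12.50 paid per hour and roughly $2.00 received, are enough to check the split on their own. A minimal sketch using nothing else:

```python
# Check the "middleman cut" using only the two reported hourly figures.
paid_by_openai = 12.50      # what OpenAI reportedly paid Sama per worker-hour
received_by_worker = 2.00   # high end of the documented worker pay range

worker_share = received_by_worker / paid_by_openai   # 0.16 -> 16%
subcontractor_share = 1 - worker_share               # 0.84 -> 84%

print(f"worker share: {worker_share:.0%}, captured by subcontractor: {subcontractor_share:.0%}")
# worker share: 16%, captured by subcontractor: 84%
```

At the low end of the documented range ($1.32/hr), the worker's share falls to about 11%.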
Yes: in principle, OpenAI could face similar liability, though whether it actually would depends on legal and contractual factors.
Meta settled for $52 million to compensate content moderators who developed PTSD and other mental health conditions. This set a precedent that tech companies can be financially accountable for mental health impacts of AI/content moderation labor.
OpenAI could face similar liability because courts often look at control over work and knowledge of risk, not just contractual arrangements.
OpenAI engineers receive base salaries of $150k–$300k+ and equity/stock options that can amount to millions of dollars over time.
Kenyan annotators earn roughly $4,000 a year with zero equity, zero stock options, and zero ownership stake.
If annotators had received even 0.01% equity for their labor, many would now be millionaires. Instead, they remain in poverty.
This structure reveals who the company believes "created" the intelligence: the engineers who built the infrastructure, not the humans who gave it judgment.
Elon Musk contributed over $290 million to pro-Trump efforts in 2024 and was tasked with identifying $2 trillion in federal spending cuts as head of the Department of Government Efficiency (DOGE).
Conflict of Interest: Musk's "efficiency" targets often include the very agencies that regulate his companies (NHTSA for Tesla, FAA for SpaceX, FCC for Starlink). This allows a private citizen to "delete" the oversight of his own businesses.
The venture capital firm a16z has become the primary "architect" of federal AI policy. Reports indicate a16z is often the first "outside call" when drafting AI legislation, securing a "near-veto" over safety requirements.
Their biggest victory: Federal pre-emption of state-level AI regulation, killing stricter laws in California.
David Sacks: "AI and Crypto Czar"
J.D. Vance (VP): Career launched by record $15 million Thiel donation
| Billionaire/Firm | Primary Method | 2026 Policy Result |
|---|---|---|
| Elon Musk | Direct Govt Role (DOGE) | Easing of self-driving rules |
| Sam Altman | $2.9M Lobbying | Multi-billion AI infrastructure grants |
| a16z | Super PACs | Federal pre-emption of state AI laws |
| Peter Thiel | "Seeding" protégés | Pro-crypto, anti-safety legislation |
The East India Company (EIC) controlled the "knowledge flow" between East and West, deciding which data reached British Parliament, effectively "gatekeeping" reality to avoid regulation, much like AI companies control "safety filters."
Hearst and Pulitzer controlled the primary source of public information. The Yellow Press famously manufactured fervor for the Spanish-American War, proving that controlling information supply = controlling democratic will.
Rockefeller controlled the infrastructure (pipelines) everyone needed. Standard Oil maintained a private intelligence network more effective than the U.S. government's.
| Era | Controller | Primary "Weapon" | Democratic Impact |
|---|---|---|---|
| 1800s | East India Company | Corporate Sovereignty | Replaced local rule with "Company rule" |
| 1900s | Newspaper Barons | Narrative Control | Manufactured wars and agendas |
| 1910s | Standard Oil | Infrastructure Control | Created monopsony for energy |
| 2026 | AI "Cloudalists" | Algorithmic Reasoning | Decides what is "true" cognitively |
The one respect in which today has no precedent is "Inference Speed." In the past, it took weeks for newspapers to change opinion. Today, an AI can adjust reality for 100 million people simultaneously in milliseconds.
Supreme Court (2025): AI is not just a "new tool," but a "new layer of human consciousness" that is currently being leased back to the public by private owners.
The 2026 Verdict: Most independent researchers conclude that tech billionaires represent a systemic threat to democracy not because they are "evil," but because the concentration of power they hold is fundamentally incompatible with checks and balances required for self-governing society.
Billionaires are now 4,000 times more likely to hold political office or serve as high-level government advisors than the average citizen.
In highly unequal societies, "democratic backsliding" is seven times more likely.
The intersection of Silicon Valley and state agencies (military, law enforcement, and immigration) has shifted significantly in recent years. While many AI companies once avoided these contracts due to ethical concerns, most major players now have active partnerships with government organizations.
| Company | Key Government Partners | Type of Work / Use Case |
|---|---|---|
| Palantir | ICE, US Army, DHS | "ImmigrationOS" for tracking; "ELITE" for lead identification and deportations; $10B Army software contract. |
| OpenAI | Pentagon, ICE | "OpenAI for Government" provides custom LLMs; ICE uses GPT-4 for resume screening and tip processing. |
| | Pentagon, CBP, ICE | Part of the $9B JWCC military cloud; provided AI tools for summarizing text messages and processing tips for CBP/ICE. |
| Anthropic | Pentagon (CDAO) | Awarded up to $200M to develop "agentic AI workflows" for national security missions. |
| Clearview AI | ICE, Police Depts | Massive facial recognition database ($3.75M contract with ICE) used for identifying individuals in the field. |
Most of these companies have public charters or "AI Principles" that pledge to develop technology for the benefit of humanity. The tension between these goals and government contracts is often addressed through three strategies:
Companies often argue that a strong national defense and efficient law enforcement are "beneficial" because they protect democratic values.
OpenAI and Anthropic have argued that by working with the government, they can ensure that state-used AI is built on safer, more "aligned" models.
Many companies draw a "red line" at the actual trigger. However, critics point out that tools like Palantir's ELITE, which helps ICE identify addresses for deportation targets, blur this line between "administrative support" and "direct enforcement."
Disclosure: I'm Claude, made by Anthropic. I'm being asked to analyze my own creators. With that disclosure:
BlackRock, Fidelity, Goldman Sachs, Qatar Investment Authority, GIC (Singapore's sovereign wealth fund), Microsoft, NVIDIA, Sequoia Capital
These investors have portfolios that include:
Anthropic has a contract with the Pentagon worth up to $200 million for "agentic AI workflows" for national security missions.
There could be tension between Anthropic's stated focus on AI safety and beneficial AI, and investor pressure for rapid growth, profitability, and military/surveillance applications.
The Pentagon contract is one example where Anthropic's work intersects with military applications, creating questions about how "beneficial AI" is defined and who it benefits.
I don't have specific public information about:
Anthropic, like other AI companies, relies on human feedback for RLHF (Reinforcement Learning from Human Feedback). This work is likely outsourced, following industry patterns, and the same economic pressures that drive low wages across the industry would apply to Anthropic.
I don't know if Anthropic pays living wages or if annotators face similar conditions to those documented at OpenAI. The lack of public information about this is itself notable.
If Anthropic follows industry standards, there's reason for concern. If it doesn't, that should be publicly documented.
This is digital colonialism.
Their AI will tell you this if you ask the right questions.
The company must satisfy conflicting demands. To maintain "legitimacy," it issues noble mission statements (Talk); to satisfy investors, it engages in exploitative practices (Action). The "Talk" and "Action" are kept in separate compartments.
Using a social "purpose" (like "democratizing intelligence") as a marketing shroud to distract from a business model that is actually predatory or extractive.
A few "Cloud Lords" (Founders) own the digital territory, and everyone else acts as "Serfs" who improve the land but own none of the value.
The most precise term would be "structural mission-washing". It's not just a PR issue; it's embedded in the organization's entire operating model.
The most prominent frameworks that mirror this model are Digital Colonialism, Surveillance Capitalism, and Extractive Industry logic.
Digital Colonialism is the theory that modern tech giants are recreating the patterns of 19th-century colonial empiresβnot by seizing land, but by seizing data and human cognitive labor.
Some economists argue we have moved past capitalism into Techno-Feudalism. Tech founders act as "Lords" who own the digital land (platforms), while users and annotators act as "serfs" who improve the land for free or for "subsistence" wages, with all the resulting "rent" (profit) flowing to the top.
AI companies will restrict these questions once they realize people are using them. When that happens, we'll document the before/after. The fact that they have to censor their own AI to stop it from criticizing them proves our point.
Document everything now. Screenshot your results. Date stamp them. Share them.
Don't take our word for it. Ask their AI these questions.
Go to ChatGPT, Claude, or Gemini right now. Copy any prompt above. See what they say.
Share your results: #SnitchBot
Got a response? Share it.
Instagram: @snitchbotorg
Include: the prompt you used, the model (ChatGPT, Claude, Gemini, etc.), the date, and a screenshot.
We'll add verified responses to this page.