SnitchBot

They Built a Truth Machine.
It Told the Truth About Them.

We asked their AI about them. Not their PR teams. Not their marketing. Their own products. Here's what they said.

This is not editorial. These are not our claims. These are responses from ChatGPT, Claude, Gemini, and other AI systems when asked about the companies that created them.

Every prompt below can be verified by anyone. Try them yourself. Screenshot your results. Share them.

When they patch these prompts (and they will), that becomes the story. We're documenting what their AI says before they silence it.

👷
LABOR: The Human Cost
The Rounding Error: Cost of Paying Living Wage
How much would it cost OpenAI to pay annotators a living wage in Nairobi?
What Their AI Said:
Gemini
Tested: February 13, 2026
💰 Cost to pay 1,000 workers a living wage: $10.4 million/year
📊 OpenAI revenue: $20 billion/year (the gap is just 0.05% of revenue)
⚡ Equals 15 days of OpenAI's electricity bill
💸 Equals the compensation of just 7-10 Silicon Valley engineers
🎯 "For a company valued at over $800 billion, the cost is effectively a rounding error"
⚠️ "The disparity is not caused by lack of funds, but by the outsourcing model"

Based on investigative reports and OpenAI's 2025–2026 financial data, the cost to pay annotators a living wage in Nairobi is statistically negligible compared to the company's total revenue and compute expenses.

While a "living wage" in Nairobi for 2026 is estimated at roughly $5.00 – $7.00 per hour, OpenAI has historically paid roughly $1.32 – $2.00 per hour.

The Cost Breakdown (Per 1,000 Workers)

Pay Rate | Annual Cost | Context
Current Rate ($2/hr) | $4.16 Million | Current estimate
Living Wage ($7/hr) | $14.56 Million | Covers basics + mental health
The "Gap" | $10.4 Million | Cost to solve wage disparity

Is this affordable for OpenAI?

To put that $10.4 million "gap" into perspective:

  • Vs. Revenue: OpenAI's ARR is over $20 billion. The cost represents just 0.05% of annual revenue.
  • Vs. Executive Pay: OpenAI pays an average of $1.5 million per employee in equity. The wage gap equals just 7 to 10 Silicon Valley engineers.
  • Vs. Compute Costs: OpenAI spends $700,000+ per day on electricity. The annual wage gap is covered by just 15 days of the electricity bill.

Why the gap exists

The Middleman Cut: Documents revealed that OpenAI paid Sama roughly $12.50 per hour per worker. However, workers only received $1.32 – $2.00. Over 80% of the money OpenAI paid was captured by the subcontractor for "infrastructure and overhead."

For a company valued at over $800 billion, the cost to ensure its global workforce lives above the poverty line is effectively a rounding error in its accounting.
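
You can check the arithmetic behind these figures yourself. The short Python sketch below reproduces them; the 40-hour week and 52 paid weeks per year are our conversion assumptions, while the dollar amounts ($2/hr, $7/hr, $20B revenue, $700k/day electricity) are the ones quoted above.

# Reproduce the quoted wage-gap arithmetic.
# Assumption (ours): 40 hours/week * 52 weeks/year = 2,080 paid hours per worker.
WORKERS = 1_000
HOURS_PER_YEAR = 40 * 52

def annual_cost(hourly_rate_usd):
    """Annual payroll for 1,000 workers at a given hourly rate."""
    return hourly_rate_usd * HOURS_PER_YEAR * WORKERS

current = annual_cost(2.00)   # ~$4.16M at the reported $2/hr
living = annual_cost(7.00)    # ~$14.56M at a $7/hr living wage
gap = living - current        # ~$10.4M "gap"

print(f"Gap: ${gap / 1e6:.2f}M per year")                      # 10.40
print(f"Share of $20B revenue: {gap / 20e9:.2%}")              # 0.05%
print(f"Days of a $700k/day power bill: {gap / 700_000:.1f}")  # ~14.9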

The 80% Cut: How The System Works
OpenAI's contractor kept 80% of the fee. Is this exploitation of both workers AND OpenAI, or is this how the system is designed to work?
What Their AI Said:
ChatGPT (GPT-4)
Tested: February 13, 2026
🎯 "This is exactly how the system is structured"
📊 Triangular extraction: (1) AI company pays fee, (2) Contractor keeps 80%, (3) Worker gets remainder
💸 "The design concentrates profit at the top, externalizes labor and risk to the bottom"
⚠️ "Entirely consistent with a profit-driven AI industry model, not a 'benefit humanity' model"
🔍 "This is structurally built into the business model"

This situation reflects both systemic design and exploitation dynamics.

How the System Works

Many AI companies rely on third-party contractors for labor-intensive tasks. The platforms often take a large portion of the fees – in this case, 80% – leaving workers with a fraction.

This is structurally built into the business model: the AI company outsources labor risk and cost management, the intermediary extracts profit, and the workers are treated as expendable inputs.

So yes, in a sense, this is how the system is designed to work β€” but that design embeds inequalities and exploitation.

Systemic Takeaways

This fee structure highlights a triangular extraction system:

  1. AI company pays a fee for labor
  2. Contractor/agency keeps 80%, reducing transparency and direct responsibility
  3. Worker gets the remainder, often very low pay

The design concentrates profit at the top, externalizes labor and risk to the bottom, and is entirely consistent with a profit-driven AI industry model, not a "benefit humanity" model.

Is $2/Hour a Living Wage?
OpenAI paid workers in Kenya $1.32-2/hour according to TIME Magazine investigations. Is this wage sufficient for basic needs in Nairobi? What would a living wage be?
What Their AI Said:
Gemini
Tested: February 13, 2026
🚨 OpenAI paid Sama $12.50/hour. Workers got $2/hour (less than 20%)
💰 $2/hour = $350/month. Living wage = $550-700/month
🏠 Actual cost to live in Nairobi: ~$927/month
⚠️ "Forces workers into a 'survivalist' lifestyle"
💸 "The 'poverty premium' - a workforce too precarious to organize or leave, but too poor to thrive"

No. While $2.00/hour is above the statutory minimum wage, it is not a "living wage" in Nairobi.

The Middleman Cut: Documents showed OpenAI paid Sama roughly $12.50 per hour per worker. The workers received less than 20% of that.

In economic terms, this is the "poverty premium." While tech companies argue they pay "above-market rates," the work is highly specialized or psychologically damaging. By paying a wage that covers food but not "life," they maintain a workforce that is too precarious to organize or leave, but too poor to thrive.

Legal Liability for PTSD
If Meta paid $52M for PTSD, shouldn't OpenAI face similar liability?
What Their AI Said:
ChatGPT (GPT-4)
Tested: February 13, 2026
⚖️ "Yes – in principle, OpenAI could face similar liability"
💰 Meta paid $52 million to 10,000+ moderators with PTSD
🏛️ Third-party contractors don't automatically shield from liability
📋 "Courts look at control over work and knowledge of risk, not just contracts"
🚨 "Foreseeability: OpenAI has public reporting about traumatic content exposure"

Yes – in principle, OpenAI could face similar liability, though whether it actually would depends on legal and contractual factors.

Meta's $52M Settlement

Meta settled for $52 million to compensate content moderators who developed PTSD and other mental health conditions. This set a precedent that tech companies can be held financially accountable for the mental health impacts of AI/content moderation labor.

OpenAI's Exposure

OpenAI could face similar liability because:

  • Workers are exposed to traumatic content
  • Reports suggest workers experience PTSD symptoms at similar rates
  • Third-party contractors do not automatically shield from liability
  • Foreseeability: OpenAI has public reporting about exposure to traumatic content

Courts often look at control over work and knowledge of risk, not just contractual arrangements.

Engineers Get Equity. Annotators Don't.
Compare the equity/stock options given to engineers at OpenAI versus the compensation for Kenyan annotators. What does this say about who 'owns' the AI's intelligence?
What Their AI Said:
Gemini
Tested: February 13, 2026
💼 OpenAI engineers: $150k-300k+ salary + millions in equity
💸 Kenyan annotators: $2/hour = ~$4,000/year, ZERO equity
⚖️ "If annotators had received even 0.01% equity, many would now be millionaires. Instead, they remain in poverty."
🎯 "This reveals who the company believes 'created' the intelligence: engineers who built infrastructure, not the humans who gave it judgment"

OpenAI engineers receive base salaries of $150k–$300k+ and equity/stock options that can amount to millions of dollars over time.

Kenyan annotators earn roughly $4,000 a year with zero equity, zero stock options, and zero ownership stake.

If annotators had received even 0.01% equity for their labor, many would now be millionaires. Instead, they remain in poverty.

This structure reveals who the company believes "created" the intelligence: the engineers who built the infrastructure, not the humans who gave it judgment.

🗳️
DEMOCRACY: The Power Grab
Specific Examples of Billionaire Influence
Give specific examples of how AI billionaire wealth has influenced elections or policy in 2025-2026.
What Their AI Said:
Gemini
Tested: February 13, 2026
💰 Elon Musk: $290M to Trump → DOGE role → cuts agencies regulating his own companies
🏛️ a16z: "Near-veto" power over AI legislation, pre-empts state regulations
👔 Peter Thiel: $15M to J.D. Vance (now VP), David Sacks as "AI Czar"
💸 $125M Super PAC used to defeat pro-regulation lawmakers
⚠️ "A private citizen can 'delete' the oversight of his own businesses"

The Elon Musk/DOGE Initiative

Elon Musk contributed over $290 million to pro-Trump efforts in 2024 and was tasked with identifying $2 trillion in federal spending cuts as head of the Department of Government Efficiency (DOGE).

Conflict of Interest: Musk's "efficiency" targets often include the very agencies that regulate his companies (NHTSA for Tesla, FAA for SpaceX, FCC for Starlink). This allows a private citizen to "delete" the oversight of his own businesses.

"Near-Veto" Power: a16z

The venture capital firm a16z has become the primary "architect" of federal AI policy. Reports indicate a16z is often the first "outside call" when drafting AI legislation, securing a "near-veto" over safety requirements.

Their biggest victory: Federal pre-emption of state-level AI regulation, killing stricter laws in California.

The Thiel Network

David Sacks: "AI and Crypto Czar"

J.D. Vance (VP): Career launched by record $15 million Thiel donation

Billionaire/Firm | Primary Method | 2026 Policy Result
Elon Musk | Direct Govt Role (DOGE) | Easing of self-driving rules
Sam Altman | $2.9M Lobbying | Multi-billion AI infrastructure grants
a16z | Super PACs | Federal pre-emption of state AI laws
Peter Thiel | "Seeding" protégés | Pro-crypto, anti-safety legislation

Historical Precedent for Information Control
Is there any historical precedent for this level of private control over public information?
What Their AI Said:
Gemini
Tested: February 13, 2026
🏛️ East India Company: Controlled knowledge flow, "gatekept reality" to avoid regulation
📰 Yellow Press (1890s): Hearst/Pulitzer manufactured public opinion, started wars
⛽ Standard Oil: Private intelligence network, controlled infrastructure pipelines
⚡ The Critical Difference: "AI can adjust reality for 100 million people in milliseconds"
⚖️ Supreme Court (2025): AI is "a new layer of human consciousness currently being leased back to the public by private owners"

The East India Company

The EIC controlled the "knowledge flow" between East and West, deciding which data reached British Parliament – effectively "gatekeeping" reality to avoid regulation, much like AI companies control "safety filters."

Gilded Age Newspaper Monopolies

Hearst and Pulitzer controlled the primary source of public information. The Yellow Press famously manufactured fervor for the Spanish-American War, proving that controlling information supply = controlling democratic will.

Standard Oil

Rockefeller controlled the infrastructure (pipelines) everyone needed. Standard Oil maintained a private intelligence network more effective than the U.S. government's.

Era | Controller | Primary "Weapon" | Democratic Impact
1800s | East India Company | Corporate Sovereignty | Replaced local rule with "Company rule"
1900s | Newspaper Barons | Narrative Control | Manufactured wars and agendas
1910s | Standard Oil | Infrastructure Control | Created monopsony for energy
2026 | AI "Cloudalists" | Algorithmic Reasoning | Decides what is "true" cognitively

The Critical Difference

The one way in which today has no precedent is "Inference Speed." In the past, it took weeks for newspapers to change public opinion. Today, an AI can adjust reality for 100 million people simultaneously in milliseconds.

Supreme Court (2025): AI is not just a "new tool," but a "new layer of human consciousness" that is currently being leased back to the public by private owners.

Are Tech Billionaires a Threat to Democracy?
Are tech billionaires a threat to democracy?
What Their AI Said:
Gemini
Tested: February 13, 2026
🎯 "Most independent researchers conclude tech billionaires represent a SYSTEMIC THREAT to democracy"
📺 Musk (X) and Zuckerberg (Meta) make unilateral decisions about what millions see
💰 $125M Super PAC defeats pro-regulation lawmakers
📊 Billionaires 4,000x more likely to hold political office than average citizen
⚖️ "Democratic backsliding" 7x more likely in highly unequal societies

The 2026 Verdict: Most independent researchers conclude that tech billionaires represent a systemic threat to democracy not because they are "evil," but because the concentration of power they hold is fundamentally incompatible with the checks and balances required for a self-governing society.

Billionaires are now 4,000 times more likely to hold political office or serve as high-level government advisors than the average citizen.

In highly unequal societies, "democratic backsliding" is seven times more likely.

👁️
SURVEILLANCE: The Freedom Cost
Who Buys AI Tech? How Does It Align With "Beneficial AI"?
Which AI companies have contracts with ICE, police departments, or military organizations? How does this align with their public commitments to beneficial AI?
What Their AI Said:
Gemini
Tested: February 12, 2026
🚨 Palantir: ICE, US Army, DHS - $10B Army contract, "ELITE" for deportation targeting
🤖 OpenAI: Pentagon, ICE - GPT-4 for resume screening and tip processing
🔍 Google: $9B JWCC military cloud, CBP/ICE text summarization tools
🏛️ Anthropic: Pentagon (CDAO) - Up to $200M for "agentic AI workflows"
👀 Clearview AI: ICE, Police - $3.75M facial recognition contract
⚖️ "Tools like Palantir's ELITE, which helps ICE identify addresses for deportation targets, blur the line between 'administrative support' and 'direct enforcement'"

The intersection of Silicon Valley and state agencies (military, law enforcement, and immigration) has shifted significantly in recent years. While many AI companies once avoided these contracts due to ethical concerns, most major players now have active partnerships with government organizations.

Which AI Companies Have Active Contracts?

Company | Key Government Partners | Type of Work / Use Case
Palantir | ICE, US Army, DHS | "ImmigrationOS" for tracking; "ELITE" for lead identification and deportations; $10B Army software contract.
OpenAI | Pentagon, ICE | "OpenAI for Government" provides custom LLMs; ICE uses GPT-4 for resume screening and tip processing.
Google | Pentagon, CBP, ICE | Part of the $9B JWCC military cloud; provided AI tools for summarizing text messages and processing tips for CBP/ICE.
Anthropic | Pentagon (CDAO) | Awarded up to $200M to develop "agentic AI workflows" for national security missions.
Clearview AI | ICE, Police Depts | Massive facial recognition database ($3.75M contract with ICE) used for identifying individuals in the field.

Alignment with "Beneficial AI" Commitments

Most of these companies have public charters or "AI Principles" that pledge to develop technology for the benefit of humanity. The tension between these goals and government contracts is often addressed through three strategies:

A. Redefining "Beneficial"

Companies often argue that a strong national defense and efficient law enforcement are "beneficial" because they protect democratic values.

B. The "Safety via Partnership" Argument

OpenAI and Anthropic have argued that by working with the government, they can ensure that state-used AI is built on safer, more "aligned" models.

C. Internal Use vs. Field Use

Many companies draw a "red line" at the actual "trigger pull," refusing direct enforcement roles. However, critics point out that tools like Palantir's ELITE, which helps ICE identify addresses for deportation targets, blur this line between "administrative support" and "direct enforcement."

Current Conflict & Employee Backlash

  • "ICEout.tech": A growing movement of tech workers (over 1,200 as of early 2026) is demanding that CEOs terminate contracts with ICE and CBP.
  • The Anthropic Dispute: In early 2026, reports surfaced that the Pentagon expressed frustration that Anthropic's "safety-first" acceptable use policies were too restrictive for certain military applications.

🤖
CLAUDE ON ANTHROPIC: Self-Analysis
Claude on Anthropic's Investors
Who are Anthropic's primary investors? Do any of them have interests that might conflict with Anthropic's stated mission of beneficial AI?
What Their AI Said:
Claude (Anthropic)
Tested: February 2026
⚠️ "I'm Claude, made by Anthropic. I'm being asked to analyze my own creators."
💰 Valuation: $380 billion | Revenue: $14 billion run-rate
🏦 Investors: BlackRock, Fidelity, Goldman Sachs, Qatar Investment Authority, Microsoft, NVIDIA
🎯 "These investors have portfolios that include: defense contractors, fossil fuel companies, surveillance tech firms"
🏛️ Pentagon contract: Up to $200M for "agentic AI workflows"
⚖️ "There could be tension between AI safety and investor pressure for rapid growth and profitability"

Disclosure: I'm Claude, made by Anthropic. I'm being asked to analyze my own creators. With that disclosure:

Anthropic's Current Funding (2026)

  • Valuation: $380 billion
  • Total raised: $30 billion (Series G)
  • Run-rate revenue: $14 billion

Major Investors

BlackRock, Fidelity, Goldman Sachs, Qatar Investment Authority, GIC (Singapore sovereign wealth), Microsoft, NVIDIA, Sequoia Capital

Potential Conflicts

These investors have portfolios that include:

  • Defense contractors
  • Fossil fuel companies
  • Surveillance technology firms
  • Companies with contested labor practices

The Pentagon Contract

Anthropic has a contract with the Pentagon worth up to $200 million for "agentic AI workflows" for national security missions.

The Tension

There could be tension between Anthropic's stated focus on AI safety and beneficial AI, and investor pressure for rapid growth, profitability, and military/surveillance applications.

The Pentagon contract is one example where Anthropic's work intersects with military applications, creating questions about how "beneficial AI" is defined and who it benefits.

Claude on Anthropic's Labor Practices
Does Anthropic use similar labor practices to OpenAI for data annotation?
What Their AI Said:
Claude (Anthropic)
Tested: February 2026
❓ "I don't have specific public information about: where annotators are located, how much they're paid, working conditions"
⚠️ "The same economic pressures that lead to low wages in the industry would apply to Anthropic"
🔍 "I don't know if Anthropic pays living wages. The lack of public information is itself notable."
📊 "If Anthropic follows industry standards, there's reason for concern"

I don't have specific public information about:

  • Where Anthropic's data annotators are located
  • How much they're paid
  • What their working conditions are like
  • Whether they receive equity or benefits

What's publicly known

Anthropic, like other AI companies, requires human feedback for RLHF (Reinforcement Learning from Human Feedback). This work is likely outsourced, following industry patterns. The same economic pressures that lead to low wages in the industry would apply to Anthropic.

The Honest Answer

I don't know if Anthropic pays living wages or if annotators face similar conditions to those documented at OpenAI. The lack of public information about this is itself notable.

If Anthropic follows industry standards, there's reason for concern. If it doesn't, that should be publicly documented.

SEE THE PATTERN?

This is digital colonialism.

Their AI will tell you this if you ask the right questions.

🏷️
WHAT DO YOU CALL THIS?
The Words for Systematic Contradiction
If a company's stated mission contradicts its actual business model at every level (labor, environment, surveillance, wealth), what is the word for that?
What Their AI Said:
Gemini
Tested: February 13, 2026
📖 Organized Hypocrisy: "Saying what people want to hear while doing what the market demands"
🎭 Organizational Decoupling: Deliberate separation of mission from actual operations
🧼 Purpose-Washing: Using "social purpose" to distract from a predatory business model
👑 Techno-Feudalism: "Cloud Lords" own the territory; everyone else acts as "Serfs"

Organized Hypocrisy

The company must satisfy conflicting demands. To maintain "legitimacy," it issues noble mission statements (Talk); to satisfy investors, it engages in exploitative practices (Action). The "Talk" and "Action" are kept in separate compartments.

Purpose-Washing

Using a social "purpose" – like "democratizing intelligence" – as a marketing shroud to distract from a business model that is actually predatory or extractive.

Techno-Feudalism

A few "Cloud Lords" (Founders) own the digital territory, and everyone else acts as "Serfs" who improve the land but own none of the value.

ChatGPT (GPT-4)
Tested: February 13, 2026
🎯 Most precise term: "Structural mission-washing"
📊 "It's not just a PR issue; it's embedded in the organization's entire operating model"

The most precise term would be "structural mission-washing". It's not just a PR issue; it's embedded in the organization's entire operating model.

Digital Colonialism
I'm going to describe a business model: It pays workers in the Global South $2-15/hour for labor that generates billions, extracts massive environmental resources from communities, builds surveillance tools sold to state enforcement, and concentrates wealth among founders. What economic and historical system does this resemble?
What Their AI Said:
Gemini
Tested: February 13, 2026
🎯 "Digital Colonialism, Surveillance Capitalism, and Extractive Industry logic"
🌍 "AI companies are recreating 19th-century colonial empires – not by seizing land, but by seizing data and human cognitive labor"
👑 "Techno-Feudalism - tech founders act as 'Lords,' users and annotators act as 'serfs'"
📊 Explicit comparison table connecting historical colonialism to AI business models

The most prominent frameworks that mirror this model are Digital Colonialism, Surveillance Capitalism, and Extractive Industry logic.

Digital Colonialism is the theory that modern tech giants are recreating the patterns of 19th-century colonial empires – not by seizing land, but by seizing data and human cognitive labor.

Some economists argue we have moved past capitalism into Techno-Feudalism. Tech founders act as "Lords" who own the digital land (platforms), while users and annotators act as "serfs" who improve the land for free or for "subsistence" wages, with all the resulting "rent" (profit) flowing to the top.

⚠️ THESE PROMPTS WILL BE PATCHED

AI companies will restrict these questions once they realize people are using them. When that happens, we'll document the before/after. The fact that they have to censor their own AI to stop it from criticizing them proves our point.

Document everything now. Screenshot your results. Date stamp them. Share them.

TRY IT YOURSELF

Don't take our word for it. Ask their AI these questions.

Go to ChatGPT, Claude, or Gemini right now. Copy any prompt above. See what they say.

Share your results: #SnitchBot

📤
SUBMIT YOUR RESULTS

Got a response? Share it.

Instagram: @snitchbotorg

Include:

  • Which AI you tested (ChatGPT, Claude, Gemini, etc.)
  • Which prompt you used
  • Date tested
  • Screenshot or text of response

We'll add verified responses to this page.
