I've been in aviation maintenance and inspection for 30 years. My job is making sure aircraft are safe to flyâwhich means I deal with real-world systems where shortcuts kill people. No hype, no clickbait, just honest evaluation of whether something works or doesn't.
Lately my YouTube feed is drowning in AI content: "Dreams, Fairy Tales, and the Demons of AI" on Jordan Peterson's podcast, "AI CEO explains the terrifying new behavior AIs are showing" on CNN with 1.1 million views, "The AI Godfathers Built the Monster â Now They're Pretending to Save Us" from smaller creators calling out the hypocrisy. The sidebar recommendations scream "AI IS ALREADY CONSCIOUS" and "world leaders are lying to us about AI!"
The fear spectrum runs from philosophical musings to literal demonic possession to CEOs warning their own creations might destroy humanity. And everything in between gets clicks.
But here's my question for those AI executives sounding the alarm about "terrifying new behaviors": If you genuinely believe you've built something dangerous, why are you still training it the same way? Why is your data still scraped from the same sources, your feedback still coming from the same handful of zip codes? Either the danger is overblown and you're fear-mongering for regulatory advantageâor it's real and you're recklessly continuing anyway.
Which is it?
I'm going to give you my perspective as someone who actually works with these systems dailyâbuilding trading algorithms, automating workflows, writing code. Not theorizing about AI from a stage, but hands-on.
Here's my position, and it's more complicated than "AI good" or "AI bad":
I want AGI. I want it developed fast. I want a country founded on constitutional principles that values human life and autonomy to get there first. And I have zero trust in the people building it.
That's the real tension nobody talks about.
We've Already Seen This Movie
Remember when "we can send you a notification when someone comments" seemed like a helpful feature? That tiny design decisionâmade by a handful of engineers optimizing for engagementâfundamentally rewired human behavior at scale.
Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, calls this "the climate change of culture"âan extractive business model based on harvesting and monetizing human attention.[1] In The Social Dilemma, he warns: "Never before in history have fifty designers made decisions that would have an impact on two billion people."[2]
Those notifications aren't timed randomly. They're deliberately scheduled to appear when you're most likely to engage, triggering dopamine responses that create compulsive checking behavior. "Most notifications seem like human beings want to reach us," Harris explains, "but they're invented by machines to lure you back into another addictive spiral."[3]
Harris describes it as a "race to the bottom of the brainstem"âtriggering primitive emotions like fear, anxiety, and outrage because those keep people scrolling.[4] The result? "Technology is causing a set of seemingly disconnected thingsâshortening of attention spans, polarization, outrage-ification of culture, mass narcissism, election engineering, addiction to technology."[5]
Fifty designers. Two billion people. Not because they were evilâbecause the incentive structure rewarded engagement over wellbeing. Your attention was the product.
Now those same companies are building artificial general intelligence.
The Gamble Nobody Consented To
Here's what keeps me up at nightâand it's not demons or robot overlords.
In a recent interview, Tristan Harris revealed a private conversation with one of the co-founders of a major AI company. When faced with the questionâwhat if there's a 20% chance this wipes out humanity but an 80% chance we get utopiaâthe executive said he would "clearly accelerate and go for the utopia."[6]
A 20% chance of human extinction. And he'd roll those dice.
Harris's response cuts to the heart of it: "People should feel ... you do not get to make that choice on behalf of me and my family. We didn't consent to have six people make that decision on behalf of 8 billion people."[6]
Let that sink in. Six people. Eight billion lives. No vote. No consent. No oversight.
Harris uses an analogy that hits home for me after three decades inspecting aircraft: "We wouldn't get on a plane if half of the airplane engineers said there was a 10% chance of everyone dying."[7] Yet we're rushing to onboard all of humanity onto the AI plane without proper safety testing.
In aviation, if an engineer told me there was even a 1% chance a component would cause a catastrophic failure, that plane doesn't fly. Period. We have the FAA, mandatory inspections, redundant systems, and decades of safety culture built on learning from every failure. Even Elon Musk admitted at a Senate AI forum that "99.99% of the time, he's very happy that the FAA exists."[8]
But AI? A handful of executives get to bet the species because they think utopia is worth the gamble.
That's not innovation. That's hubris with a god complex.
The Actual Risks vs. The Clickbait
When I see studies claiming AI "shows intent" or "deceives," I read the actual papers. What you usually find is researchers deliberately engineering scenarios to elicit specific behaviorsâgiving a model a role like "you're an AI that needs to preserve yourself" and then acting shocked when it does exactly that. Calling this "intent" is anthropomorphizing next-token prediction. A hammer has the "capability" to murder. Context and framing matter.
AI is a tool. Our demise will be the people wielding it, not the tool itself.
The real risks nobody makes clickbait about? Deepfakes for scamsâHarris has talked about voice cloning that takes less than three seconds of audio to impersonate someone to their bank or grandmother.[9] Automated disinformation at scale. Job displacement without safety nets. Power concentrating in companies with the compute. Human problems, amplified by better tools.
Harris points out there's roughly a 30-to-1 gap between researchers publishing papers on AI capabilities versus AI safety.[10] He's not selling demon panicâhe's talking about race dynamics where companies feel compelled to release faster, take more shortcuts, and care less about consequences. "None of the most powerful tech companies answer to what's best for people," he says. "Only to what's best for them."[5]
That's not speculation. That's what social media already proved.
Regulatory Capture in Slow Motion
Here's where I break from the mainstream safety narrative: The companies screaming loudest about AI dangers are positioning themselves to be the only ones allowed to play once regulations hit.
It's classic regulatory capture. Hype the danger. Public demands action. Industry writes the rules. Competition gets crushed. Funny how "safety" regulations always seem to benefit the companies pushing hardest for them.
The average citizen won't be able to download whatever model they want within four years. Not because AI is dangerous, but because licensing requirements will make it impossible for anyone except Big Tech to comply. Mark my words.
The Regulation Dilemma
So who should regulate this? I'll be honestâI'm on the fence.
The lawmakers who should be governing AI? Most couldn't debug a simple script, let alone grasp how algorithms shape everything from your news feed to predictive policing. We're asking legislators who struggle with email attachments to regulate the most powerful technology in human history.
I was initially dead-set against individual states having AI regulations. Let the federal government handle it, I thoughtâdon't let a patchwork of state laws stifle innovation. But I've come to see another side: maybe those state-level filters are exactly the speed bumps we need to slow reckless deployment.
The Heritage Foundation recently made an interesting argument for a federalist approachâa "dual charter system" where AI companies could opt into either federal or state oversight frameworks, similar to how corporate law works with Delaware incorporation.[11] States remain "laboratories of democracy" to test ideas and refine standards, while companies get regulatory clarity. It's not perfect, but it acknowledges that AI is "too complex and fast-moving for a top-down 'silver bullet' solution."
But here's the thing that keeps me from fully embracing that approach: totalitarian nations don't have these constraints. China isn't debating federalism. They're racing ahead with state-backed resources and none of the ethical hand-wringing. And being #2 in AGI isn't a safe place for any country to be.
So we're caught between needing guardrails to prevent a handful of executives from gambling with humanity's future, and needing speed to ensure those guardrails are set by a constitutional democracy rather than an authoritarian regime.
I don't have a clean answer. Anyone who tells you they do is selling something.
The Training Problem Nobody Wants to Discuss
Let's talk about how these models actually get built.
Reddit. They trained frontier AI models on Redditâa self-selected population of terminally online people in heavily moderated echo chambers, where upvotes reward performative takes over nuanced thinking. Researchers at King's College London documented pervasive gender and religious bias in Reddit communities, and this data went directly into training mixes.[12] WebText2, one of GPT-3's major training sources, pulled from Reddit posts with as few as three upvotes. The majority of Reddit users are young, male, politically left, and Western. That's not "humanity's knowledge"âthat's a very specific demographic's hot takes.
Then you add RLHFâreinforcement learning from human feedback that teaches models which answers are "safe." Those human raters? Mostly contractors and safety teams from a handful of zip codes, living in ideological monoculture, who genuinely believe their values are universal. The quiet middle-American tradesman whose common-sense worldview vastly outnumbers the coastal tech workers? He's got zero representation in training. He's not posting fifty Reddit comments a day or filling out AI feedback surveys.
The result is models that think San Francisco values equal human values. And nobody's talking about it.
So What Do I Actually Think?
I'm not naive about AI's potential. It will transform industriesâincluding aviation maintenance with better predictive systems, efficient designs, and tools that support human decisions. I use these systems productively every day.
But I'm not buying the fear porn either. And I'm definitely not trusting the same companies that proved they'd sacrifice your mental health for engagement metrics to suddenly become guardians of humanity's future.
If we're going to build god-like intelligence, it should be guided by principles that values human life and autonomyânot by whatever maximizes quarterly earnings for a handful of executives with god complexes. The Constitution wasn't written by engineers optimizing for engagement. It was written by people who understood that concentrated power corrupts.
Read the actual studies. Not the media headlines. The gap between what research shows and what clickbait claims is massiveâand that gap is being exploited by people who profit from your fear.
The demons aren't in the AI. They're in the incentive structures.
I don't have all the answers. But I know the right questions aren't being askedâand the people asking them aren't the ones making the decisions.
Worth Watching
If you want a deeper, more thoughtful take on these issues, I highly recommend this conversation with Tristan Harris on The Diary of a CEO:
This is an ongoing conversation. I'll revisit this as my thinking evolves. In the meantime, I welcome the debate.
~ OnlyParams Dev
References
[1] Harris, T. (2019). "Human Downgrading" presentation, San Francisco. Center for Humane Technology.
[2] The Social Dilemma (2020). Netflix documentary.
[3] Harris, T. Interview on 80,000 Hours podcast. "Changing the Incentives of Social Media."
[4] Harris, T. TED Talk: "How a handful of tech companies control billions of minds every day" (2017).
[5] Harris, T. Public statements compiled via BrainyQuote.
[6] Harris, T. The Diary of a CEO interview with Steven Bartlett (November 2025).
[7] Harris, T. & Raskin, A. "The AI Dilemma" presentation, Center for Humane Technology.
[8] Harris, T. Interview with Fox News Digital, Senate AI Insight Forum (September 2023).
[9] Harris, T. The Diary of a CEO interview with Steven Bartlett (November 2025).
[10] Harris, T. AI for Good Global Summit presentation.
[11] Hodges, W. & Cochrane, D. "A Federalist Approach to AI Policy." The Heritage Foundation (August 2025).
[12] King's College London Department of Informatics study on Reddit bias in language models. Published on Arxiv.org.