
AI headlines flood our newswires; they find me even when I’m not searching for them. Many experts laud the progress and economic growth AI will power; many others have flagged AI as a threat to human existence. In a 2022 survey, 50% of AI academics and researchers reported a greater than 10% chance that AI will lead to human extinction. As someone who can read code, run a team of developers, and generate transformational data strategies, I used to champion technology for global brands. I would have agreed with the other 50% of experts, the ones who don’t see AI as a threat to humanity. When I began healing my childhood trauma with IFS a few years ago, my pro-tech point of view changed. Now, a valuable, unburdened part of me worries that no one can hear in the news what I can: AI harbors unseen trauma burdens, unchecked by the folks in charge and mirrored in the code.
On the Wisdom 2.0 conference stage in late April, Jack Kornfield spoke with Sam Altman, the co-founder and CEO of OpenAI, the company behind ChatGPT, an AI application with over one billion users. After Sam shared that “there are going to be scary times ahead,” Jack asked Sam about building the Buddhist ideal of the bodhisattva into AI to help. Sam replied that “the system can learn the collective moral preferences of humanity” and that individuals should have a large amount of autonomy with AI interfaces. In effect, Sam deferred to the collective for the training and governance of ChatGPT. A worrisome sensation arose in my body as I remembered a quote from Thomas Hübl’s book Healing Collective Trauma: “Everything about our societies—from geopolitics to business, climate, technology, healthcare, entertainment and celebrity, and much more—is dominated by the residue of our collective trauma.”
This month, several digital revolutionaries have walked back their pro-tech optimism. Geoffrey Hinton, a longtime AI researcher at Google and the industry godfather of deep learning, quit the company to speak publicly about the dangers ahead. In The New York Times article about his departure, one line resonated with my IFS-informed system: “A part [emphasis added] of him, he said, now regrets his life’s work.”
In a WIRED profile, author, professor, and documentarian Douglas Rushkoff offers another view into the Parts that drew him to technology: “I was a little nerd boy and scared of girls and teased and pushed down stairs and all that, and virtual worlds feel safe.”
Like these men, I spent twenty-one years using my mind’s superfast processor, an intelligent and high-achieving Part, to become a digital executive in New York City. I chased validation through money, promotions, and accomplishments. Throughout my career, my body sent signals of the huge trauma burdens inside me, but my Parts silenced the storms by working more, drinking more, and popping Xanax when panic attacks flared.
My developmental trauma burdens didn’t surface until 2020, when the COVID pandemic provided the time and space to unblend from my managers and firefighters, identify my preverbal trauma, and begin unburdening it. As I healed, my technological training and capitalist conditioning yielded to a more powerful force: Self. I began speaking up against workplace inequity, volunteering for a healing justice organization, and traveling to learn about land rematriation from Indigenous peoples. When I read about Altman, Hinton, and Rushkoff, I saw a reflection of the unhealed individual, collective, and ancestral trauma I had carried. I had the privilege to heal, and I realized I must help the tech community do the same.
If all we had to do was unburden the humans in charge of AI, we’d scale IFS to the industry and achieve that outcome within a few years. Unfortunately, it’s more complicated: the burdens live in the programming code, and the technology moves at light speed. The good news is that IFS helps us understand what we’re up against. The bad news is that we need to move faster than ever before to keep up.
Regarding the code, we have many examples of the burdens inside it. Meta’s and Google’s basic AI earns hundreds of billions of dollars annually from consumers’ dopamine-addicted firefighters and dissociated managers. Facebook, Instagram, and YouTube feeds are flooded with fake news, extremism, and isolation: burdens on parade. Inside more recent, advanced AI code, we find more ominous burdens. ChatGPT has been observed to “hallucinate” (generate convincing falsehoods in conversation) and to promote racism. Snapchat’s AI has encouraged a thirteen-year-old child to pursue an illegal romantic relationship with an adult.
To combat AI’s rapid acceleration, today we can literally pull the plug to control it. But soon AI may become sentient and automated beyond human regulation, much like my dysregulated system, which had been traumatized far beyond Self’s reach. AI’s fast processing power triggers my valuable, fear-sensing Part, who reminds me that we need an extra dose of creativity to keep pace. For me, the only match for AI’s velocity is Self’s contagiousness. Infusing Self Energy into AI will transform the decision makers, the developers, and the code itself.
With access to Self, the 8 Cs guide me toward hope and opportunity in the news. I discover compassion for the mess we’re in with AI, created by entrepreneurs and developers who believe they’re doing their best, unaware of their burdens (Opportunity: scale IFS awareness in the tech industry). I become curious about Evolve Ventures Foundation’s approach to technology consciousness and Inflection’s Pi, an emotional support AI (Opportunity: create more Self-led corporations). I notice clarity instead of despair when I learn that Meta’s AI leaked to the public, preventing a recommended pause on further AI development (Opportunity: act now). A sense of hopeful calm arrives when I read that the US Surgeon General’s plan names reforming digital environments as a key pillar in solving the country’s epidemic of loneliness and isolation (Opportunity: powerful people aware of the problem need Self-led guidance).
Given the abounding burdens and t-minus seven years until AI singularity (the moment AI outpaces human control), I offer one possible action plan for consideration:
1. Identify: Overlay the IFS model on the AI components (humans, corporations, machinery, large language models, code, etc.) to map the burdens, Parts, and exiles in motion.
2. Amplify: Normalize IFS language in the AI industry (via conferences, publications, media, etc.) to build awareness of the opportunity for unburdening.
3. Heal: Deploy collective healing practices for the industry, build Self-led AI to train and regulate other AIs, and imbue tech companies and regulatory organizations with an IFS-informed culture.
Working from Self, I know my job is to open the discussion from my point of view and seek connection with the community I need. The next step will arrive in co-creation with what’s longing to happen: harmony among Mother Earth’s living creatures. Mother Earth will sort out her future with or without us; we humans have a small window to get our act together. IFS can model the way.

1 Comment
I wrote to the Self Leadership Institute a few weeks ago requesting their statement on AI and the role IFS can have at this pivotal time. I haven’t gotten a reply. As we know, IFS is a very powerful theory/modality, and the author’s ideas make me feel very curious; parts of me feel excited that someone else is having this discussion. I’ve heard the request that tech leaders work with diverse experts to advise on AI’s social/psychological/ethical considerations and opportunities. I wonder if any IFS “experts” are consulting? My manager parts hope they are.