🌸 ikigai 生き甲斐 is a reason for being, your purpose in life - from the Japanese iki 生き meaning life and gai 甲斐 meaning worth 🌸
I’ve been told many times I’m too optimistic.
It’s true that I do tend to think things will work out okay in the end, so I often race ahead getting excited about new tech with a head full of a sparkly future.
Others start out by listing potential disasters, and we need those peeps for sure.
The consequences of some tech releases have arguably been as negative as positive, in ways I hadn’t foreseen when building one of the first online communities on the Isle of Man, 25 or so years ago.
With the gift of hindsight I wonder whether I could have done more to educate and protect those around me, especially on issues of online safety for women & girls, or the tendency of algorithms to fuel addiction, division and hate, to name but two of the 12 favourite problems I care about deeply (the WHAT THE WORLD NEEDS aspect of the ikigai/hatarakigai equation).
So I have worried whether optimism might be a flaw in me: maybe I wasn’t thinking things through thoroughly enough, or maybe I needed to tone down my enthusiasm to be taken seriously.
However, the deeper I go into my ikigai journey, the more I realise that relentless optimism isn’t a bug, it’s a feature. It’s part of who I am, and an ability to see sparkle and potential where others might only see problems *is* helpful in moving things forward. As long as I have some pessimists around me for balance *grin*
After recently hearing the phrase ‘ikigai risk of AI’ for the first time, I’ve been reading up on its origins. That led to me being completely absorbed by a two-hour podcast debate between Lex Fridman and AI safety researcher Roman Yampolskiy, someone often labelled an AI pessimist.
What caught my attention wasn’t just their discussion of whether AI safety is even possible, but a fascinating concept Yampolskiy introduced. Alongside the commonly discussed existential threats to humanity (X-risk) and risks of suffering (S-risk), he describes an “I-risk”: the threat to human ikigai.
Why did he separate loss of purpose from suffering more broadly?
I think it’s because even in the optimistic scenarios, where AI only enhances rather than threatens humanity, we still *do* face a unique challenge to what makes us fundamentally human: our need for meaningful work and contribution.
As Viktor Frankl famously quoted Nietzsche in Man’s Search for Meaning:
“He who has a why to live for can bear almost any how”
I think this is a profound truth that underscores why protecting human purpose isn't a philosophical luxury, it's essential to our resilience and survival as a species.
More precisely, I think what Yampolskiy terms “I-risk” relates to hatarakigai, “work worth doing”. When AI can potentially handle any cognitive or creative task, what becomes of work and human purpose?
Threats to some sources of meaning are already emerging:
As systems make increasingly complex decisions, do we risk losing our sense of agency?
If AI handles our hardest problems, where and how do we develop grit and skill?
When achievements come more easily through AI assistance, what happens to our sense of accomplishment?
If AI can create poetry or music that humans enjoy, does that impact artistic mastery?
What impact will AI have on young people’s ability to learn if they aren’t shown the cognitive value of figuring some things out for themselves offline?
The threat to human purpose is something we can ALL actively work on right now, through thoughtful integration and boundary setting. We don't need to wait for artificial general intelligence (AGI) to ask these questions and collaborate to protect human meaning.
This is why diverse voices in AI development are so important, bringing different perspectives on safety, ethics and emotional intelligence. The more people who understand these tools well enough to experiment and form informed opinions, the better our chances of developing AI tools and practices that enhance rather than diminish human purpose.
I will keep exploring this topic and share my thoughts on practical steps to diminish ikigai risk in particular.
Many experts are optimistic about our ability to shape positive outcomes, all the more likely if we are mindful of what makes us uniquely human. For me, this willingness to engage thoughtfully while maintaining optimism has already led to unexpected opportunities.
Speaking of mindful engagement, I was so excited this week to accept an invitation within the OpenAI Forum (an incredible network of curated members) to become part of the Community Leader group: a volunteer role taking responsibility for driving engagement amongst the passionate AI researchers, professors and professionals from across the globe, encouraging collaboration and challenge on responsible and ethical AI use, and exploring the transformative impact of AI across various sectors. When someone from OpenAI mentioned they are working on an ikigai talk for the forum soon, I couldn’t help but smile at the serendipity.
Who would have thought that these two very different topics, ikigai and AI, would intersect not just in my own journey, but in the minds of others working to shape AI’s future?
When you're open to possibilities, the world has a way of revealing magical connections.
So yes, listening to a podcast about risks to humanity made me reflect on my own optimism. There are challenges we need to talk about for AI safety, but perhaps my natural inclination to see possibility isn’t naivety but exactly what is needed right now. Rather than paralysis in the face of potential risks, we need people willing to engage thoughtfully with these tools, to experiment, and to advocate for development that enhances rather than diminishes what makes us human.
Ultimately, protecting our sense of purpose is about ensuring humanity continues to thrive, grow and find meaning in whatever future we create.
Sarah, seeking ikigai xxx
PS - Let’s start by thinking about two measurements that could be helpful frames if you want to join me in the quest to diminish the ikigai risk of AI. I suggest you create a note or journal spread to explore one or both; the second is a little on the geeky (and alarmist) side, but an important conversation, I feel. I’d SO love for you to share your answers and thoughts in the comments! >
Your level of AI confidence or literacy. For this purpose it just needs to be an honest reflection on where you would place yourself on a scale like the one Section uses in their AI Proficiency Report (which is a fascinating read and free to download): a) AI skeptic, b) AI novice, c) AI experimenter, d) AI practitioner or e) AI expert. It can be easy to move up this scale, and I’d argue that it is VERY sensible to be an expert by this definition (which I promise does NOT require a lot of time or cost; please reach out if you want specific help, or look at some of the free resources I have been building at www.learnai.im/courses!)
What would you classify your p(doom) score as, and why? Here p(doom) is your assessment of the probability of AI doom, expressed as a percentage. For context, Roman Yampolskiy states his is 99.9% on a 100-year timescale, Dario Amodei, co-founder of Anthropic, has stated a range of 10-25%, and there are many who put it at or near 0, arguing it is more likely that an asteroid wipes us out. Instinctively I think I’m in the 0-10% range, but the fact that I can’t confidently state a number at the lower end means I’m driven to keep reading up on AI safety research and understand better what is and isn’t happening in this space!
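For the geeks among us, here’s a minimal sketch of what tracking these two measurements over time could look like in Python. Everything in it, the file name, the function name, the level labels, is my own illustrative invention for journalling purposes, not part of any official tool or framework.

```python
# A tiny, hypothetical journalling sketch: log today's AI literacy level and
# p(doom) estimate to a CSV file so you can watch both shift over time.
# The file name "ikigai_risk_journal.csv" is just an example.
import csv
from datetime import date

# Labels borrowed from the Section AI Proficiency Report scale mentioned above.
LITERACY_LEVELS = ["skeptic", "novice", "experimenter", "practitioner", "expert"]

def log_self_assessment(literacy: str, p_doom_percent: float,
                        path: str = "ikigai_risk_journal.csv") -> None:
    """Append one dated row: date, literacy level, p(doom) as a percentage."""
    if literacy not in LITERACY_LEVELS:
        raise ValueError(f"literacy must be one of {LITERACY_LEVELS}")
    if not 0 <= p_doom_percent <= 100:
        raise ValueError("p(doom) is a probability expressed as a percentage, 0-100")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), literacy, p_doom_percent])

# Example entry: an AI experimenter who, like me, instinctively sits in the 0-10% range.
log_self_assessment("experimenter", 5.0)
```

Revisit it every few months; if your literacy level climbs while your p(doom) estimate settles, that’s the journal doing its job.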