I think of us in some kind of twilight world as transformative AI looks more likely: things are about to change, and I don’t know if it’s about to get a lot darker or a lot brighter.
Increasingly this makes me wonder how I should be raising my kids differently.
Why I’m thinking about this
I’m somewhat used to thinking of this in terms of “doom / not doom” and less used to thinking in terms of “what kind of transformation?”
One thing that got me thinking beyond that binary was historian Ian Morris on Whether deep history says we’re heading for an intelligence explosion, specifically why we should expect the future to be wild.
Another was this interview with Ben Garfinkel of the Centre for the Governance of AI, starting with reasons you might think AI will change things a lot:
- If you think it’ll be comparable to the industrial revolution, that sure altered people’s work and personal lives a lot
- Maybe enough work will be automated that people won’t really have jobs
- AI could exacerbate and destabilize political conflicts, so we might see more political chaos and/or war
- Really powerful, capable AI systems could behave unexpectedly or go wrong in lots of ways
What might the world look like?
Most of my imaginings about my children’s lives have them in pretty normal futures, where they go to college and have jobs and do normal human stuff, but with better phones.
It’s hard for me to imagine the other versions:
- A lot of us are killed or incapacitated by AI
- More war, pandemics, and general chaos
- Post-scarcity utopia, possibly with people living as uploads rather than in bodies that get sick and die
- Some other weird outcome I haven’t imagined
Even in the world where change is slower, more like the speed of the industrial revolution, I feel a bit like we’re preparing children to be good blacksmiths or shoemakers in 1750 when the factory is coming. The families around us are still very much focused on the track of do well in school > get into a good college > have a career > have a nice life. It seems really likely that chain will change a lot sometime in my children’s lifetimes.
When?
Of course it would have been premature in 1750 to not teach your child blacksmithing or shoemaking, because the factory and the steam engine took a while to replace older forms of work. And history is full of millennialist groups who wrongly believed the world was about to end or radically change.
I don’t want to be a crackpot who fails to prepare my children for the fairly normal future ahead of them because I wrongly believe something weird is about to happen. I may be entirely wrong, or I may be wrong about the timing.
Is it even ok to have kids?
Is it fair to the kids?
This question has been asked many times by people contemplating awful things in the world. My friend’s parents asked their priest if it was ok to have a child in the 1980s given the risk of nuclear war. Fortunately for my friend, the priest said yes.
I find this very unintuitive, but I think the logic goes: it wouldn’t be fair to create lives that will be cut short and never reach their potential. To me it feels pretty clear that if someone will have a reasonably happy life, it’s better for them to live and have their life cut short than to never be born. When we asked them about this, our older kids said they’re glad to be alive even if humans don’t last much longer.
I’m not sure about babies, but to me it seems that by age 1 or so, most kids are having a pretty good time overall. There’s not good data on children’s happiness, maybe because it’s hard to know how meaningful their answers are. But there sure seems to be a U-shaped curve that children are on one end of. This indicates to me that even if my children only get another 5 or 10 or 20 years, that’s still very worthwhile for them.
This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there; I find that very difficult to think about. If you’re on the fence about having children, it’s probably worth thinking that through more carefully than I have.
What about the effects on your work?
If you’re considering whether to have children, and you think your work can make a difference to what kind of outcomes we see from AI, that’s a different question. Some approaches that both seem valid to me:
- “I’m allowed to make significant personal decisions the way I want, even if it decreases my focus on work”
- “I care more about this work going as well as it can than I do about fulfillment in my personal life”
There are some theories about how parenting will make you more productive or motivated, which I don’t really buy (especially for mothers). I do buy that it would be corrosive for a field to have a norm that foregoing children is a signal of being a Dedicated, High-Impact Person.
One option seems to be “spend a lot of money on childcare,” which still seems positive for the kids compared with not existing.
In the meantime
Our kids do normal things like school. Even in a world where it became clear that school isn’t useful, our pandemic experience makes me think they would not be happier if we somehow pulled them out.
I’m trying to lean toward more grasshopper, less ant. Live like life might be short. More travel even when it means missing school, more hugs, more things that are fun for them.
What skills or mindsets will be helpful?
It feels like in a lot of possible scenarios, nothing we could do to prepare the kids will particularly matter. Or what turns out to be helpful is so weird we can’t predict it well. So we’re just thinking about this for the possible futures where some skills matter, and we can predict them to some degree.
I haven’t really looked into which careers are less automatable; that seems worth looking into once kids are teenagers or young adults moving toward careers. I wouldn’t be surprised if childcare is actually one of the most human-specialized jobs at some point.
Some thoughts from other parents:
- A friend pointed out that it’s good if children’s self-image isn’t too built around the idea of a career, because of the high chance that careers as we know them won’t be a thing.
- “For now I basically just want her to be happy and healthy and curious and learn things.”
- “I think it’s worth focusing on fundamental characteristics for a good life: high self esteem and optimistic outlook towards life, problem solving and creative thinking, high emotional intelligence, hobbies/sports/activities that they truly enjoy, being AI- and tech-native.”
- “I’m less worried about mine being doctors or engineers. I feel more confident they should just pursue their passions.”
How much contact with AI?
I know some parents who are encouraging kids to play around with generative AI, with the idea that being “AI-native” will help them be better prepared for the future.
Currently my guess is that the risk of the kids falling into some weird headspace, falling in love with the AI or something, outweighs whatever benefit they’d get. As Joe Carlsmith writes: “If they want, AIs will be cool, cutting, sophisticated, intimidating. They will speak in subtle and expressive human voices. And sufficiently superintelligent ones will know you better than you know yourself – better than any guru, friend, parent, therapist.”
Maybe in a few years it’ll be impossible to keep my children away from this coolest of cool kids. But currently I’m not trying to hasten that.
What we say to them
Not a lot. One of our kids has been interested in the possibility of human extinction at points, starting when she learned about the dinosaurs. (She used to check out the window to see if any asteroids were headed our way.)
We’ve occasionally talked about AI risk, and biorisk a bit more, but the kids don’t really grasp anything worse than the pandemic we just went through. I think they’re more viscerally worried about climate change and the loss of panda habitats, because they’ve heard more about those from sources outside the family.
CS Lewis in 1948
I think this quote doesn’t do justice to “Try hard to avert futures where we all get destroyed,” but I still find it useful.
“If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.”
Related writing
Zvi’s AI: Practical advice for the worried, with section “Does it still make sense to try and have kids?” and thoughts on jobs.
Anna Salamon and Oliver Habryka on whether people who care about existential risk should have children.