Mark Zuckerberg has reportedly been developing a vast, fortified estate on the Hawaiian island of Kauai since 2014. The project includes an underground shelter with its own self-sustaining energy and food supplies, and it is shrouded in secrecy: those working on it are bound by strict non-disclosure agreements. While Zuckerberg dismisses the notion that it is a ‘doomsday bunker,’ his neighbours have dubbed it a ‘billionaire’s bat cave,’ sparking widespread speculation.
Similarly, other tech leaders are reportedly acquiring land and constructing elaborate underground facilities, leading many to wonder whether they are privy to information about impending global catastrophes. LinkedIn co-founder Reid Hoffman has openly discussed ‘apocalypse insurance,’ suggesting that a significant share of the ultra-wealthy are investing in secure havens, with New Zealand a favoured destination.
This trend has intensified with rapid advances in Artificial Intelligence (AI). OpenAI co-founder Ilya Sutskever, concerned about the potential existential risks of Artificial General Intelligence (AGI), is reported to have suggested building an underground shelter for the company’s top scientists. The debate over when AGI might arrive, and what it would mean for humanity, is a growing concern: some tech leaders predict its advent within the next decade, while experts such as Professor Dame Wendy Hall remain sceptical, emphasising that current AI is far from matching human intelligence.
Proponents of AGI envision a future in which AI solves major global challenges such as disease and climate change, potentially ushering in an era of ‘universal high income,’ as Elon Musk has suggested. This advance has a darker side, however, with significant risks including the possibility of AI being weaponised, or of a system concluding that humanity itself is the problem.
Tim Berners-Lee, the inventor of the World Wide Web, stresses the importance of safeguards, including the ability to ‘switch off’ advanced AI. Governments worldwide are beginning to respond: the US has issued an executive order requiring AI companies to share safety test results, and the UK has established an AI Safety Institute.
Despite these efforts, the underlying question remains: as AI grows more capable, are we adequately prepared for the consequences? While whether and when AGI will arrive remains uncertain, the actions of tech billionaires and the growing discourse around AI safety point to a profound shift in how we perceive the relationship between technology and humanity’s future.