Sapiens, Technology, and Conflict: Ben Zweibelson's Substack

Strong AI and the Future Disruption of War (Part 3 of 3)

How the Race to AGI Might Lead to Paradise or Create Paradise Lost

Ben Zweibelson
Jun 07, 2025

This is part three of a three-part blog exploring AI's transformative impact, focusing on its evolution and implications, and delivered in my preferred style for intellectual exploration of wicked challenges. It blends academia with pop culture and some snark. Part 1 examined AI's rapid progression over the next 12-18 months, introducing agentic AI and its potential to disrupt professionalization, paving the way for artificial general intelligence (AGI). Part 2, located here, delved into AGI's security challenges by 2030-2050, questioning global governance, non-proliferation policies, and the risks of misuse by authoritarian regimes. Part 3 (below) envisions AGI reshaping conflict and war, exploring utopian promises, societal disparities, and existential threats. In each part of this three-part series for subscribers, I offer plenty of new perspectives, links to other content and readings, and try to have fun along the way as we consider technological elimination, replacement, or societal transformation. Each part builds on AI's paradigm shift, urging readers to consider its ethical, strategic, and societal ramifications. If you have not yet subscribed, consider doing so now:

Part 3: How Strong AI Will Change How We Conceptualize Conflict and War

Part 2 ended on the fourth question; Part 3 continues with the fifth through ninth strategic questions. If you did not read the first two parts of this series, you may want to circle back to them, as they define many of the AI terms used below.

Question 5: AGI and Paradise Found?

Going "all in" on the altruistic AGI assumption, could AGI usher in some future prosperity for all of humanity (Musk and the unprecedented abundance by 2040) where the existing drivers of conflict, organized violence, and human suffering are dissolved in new and unanticipated ways? Assuming that AGI might solve global hunger, the energy demands of the globe, environmental challenges, and other resource demands, why not world peace? Unfortunately, we humans tend to complicate things considerably. Take global hunger: even if AGI figured out a scaled way to produce unprecedented amounts of food for the entire globe, societies are organized in such ways that the logistics of and access to unlimited food remain artificial limiting factors. Just because there is infinite food does not mean everyone can eat it. Ultimately, there is a strong likelihood that AGI, once realized somewhere on the planet, will be stymied by many social and physical barriers despite its tremendous potential. Those closest to the AGI stand to benefit first, while those geographically or sociologically distant may not reap AGI benefits until well after the closer population is satisfied. This echoes the 1970s Miles' Law of "where you stand depends on where you sit." The AGI variation might be some 'AGI Miles' Law': "how utopian and prosperous a life you live depends on where you sit within the AGI's reach."
