![Featured image for “[UG] Join the Ahmanson Lab’s Critical AI Studies Reading Group”](https://www.cs.usc.edu/wp-content/uploads/2025/09/thumbnail_4b3715b5-f8e3-077b-f8b9-f8c855d7d5a1.png)
The following announcement is from [Curtis Fletcher – cfletche@usc.edu]. Please contact them directly if you have any questions.
Silicon Valley thinks AI will save the world.
Let’s talk about why.
Join the Critical AI Studies Reading Group at the Ahmanson Lab this Fall to explore the ideologies shaping artificial intelligence—from techno-utopianism to transhumanism, accelerationism, and more.
__________
Join Us!
The Ahmanson Lab is looking for a diverse cohort of students to join our biweekly Critical AI Studies Reading Group for Fall 2025. This group will engage with works in the history of technology, interdisciplinary perspectives on AI and society, and primary texts by influential tech thinkers and futurists that articulate and promote various Silicon Valley-isms—the dominant techno-ideologies shaping the development of artificial intelligence today.
Short Readings
Lively Discussion
Free Pizza
Open to all majors
This is a unique opportunity to think critically about core texts in the philosophies and ideologies surrounding AI.
All sessions will take place at the Ahmanson Lab in Leavey Library (directions).
__________
Why it matters.
While much of the public conversation about artificial intelligence has focused on technological breakthroughs or market disruptions, less attention is paid to the techno-ideologies that drive the relentless pursuit of AI in Silicon Valley. From visions of god-like superintelligence to the promise of digital immortality, the development of AI is deeply entangled with a range of quasi-religious ideas about the future of humanity.
“The path to solving hunger, disease and poverty is AI and robotics.” (Elon Musk, July 30, 2025)
For figures like Sam Altman, Elon Musk, and others, these beliefs radically reposition AI not as a practical tool but as a civilizational imperative: a development that they argue is essential for the survival, or even cosmic destiny, of the human species. What’s more, because they see AI as pivotal to humanity’s fate, and frame its development in near-religious terms, they believe their work is far too consequential to be impeded by any external forces, including regulatory oversight, broader social responsibilities, or ethical frameworks that might slow innovation. In this worldview, concerns about labor conditions, environmental impacts, algorithmic fairness, or the growing dominance of large tech firms become secondary—treated as manageable trade-offs or temporary challenges on the path to wildly speculative futures.
__________
How it’ll work.
Biweekly discussions. The Reading Group takes place every other Thursday from 1:00PM–2:00PM on the dates listed below.
October 2 | October 16 | October 30
__________
What we’ll read.
We’ll examine a number of Silicon Valley-isms—including transhumanism, accelerationism, effective altruism, longtermism, and singularitarianism—and situate them within a longer history of technological utopianism, technological determinism, and techno-libertarianism.
October 2: Transhumanism and Singularitarianism
Becker, Adam. “Machines of Loving Grace.” More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Basic Books, 2025, pp. 39–90. [Link]
October 16: Longtermism and Effective Altruism
Bostrom, Nick. “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Utilitas, vol. 15, no. 3, 2003, pp. 308–314. [Link]
Hao, Karen. Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press, 2025, pp. 227–234. [Link]
October 30: Accelerationism
Andreessen, Marc. “The Techno‑Optimist Manifesto.” Andreessen Horowitz, 16 Oct. 2023. Approx. 10 pages. [Link]
Buterin, Vitalik. “My techno-optimism.” vitalik.eth.limo, 27 Nov. 2023. Approx. 10 pages. [Link]