Oh no!
Ö1 discusses Will MacAskill's new book entirely uncritically; MacAskill is not only an EA but has by now completed his radicalization into a longtermist.
Incidentally, the new, radicalized generation of EAs recommends spending 80% of donations on fighting existential risk(!) and only 5% on climate change. Enough said.
#tescreal #EffectiveAltruism #EA #ExistentialRisk #ClimateChange #MacAskill #cult #ö1
📢#OutNow in #OA: 'The Era of #GlobalRisk: An Introduction to #ExistentialRisk Studies', by SJ Beard, Martin Rees, Catherine Richards and Clarissa Rios Rojas (eds).
This innovative and comprehensive collection of essays explores the biggest threats facing #humanity in the 21st century; #threats that cannot be contained or controlled and that have the potential to bring about human extinction and civilization collapse.
Bringing together experts from many disciplines, it provides an accessible survey of what we know about these threats, how we can understand them better, and most importantly what can be done to manage them effectively. These essays pair insights from decades of #research and #activism around #globalrisk with the latest #academic findings from the emerging field of #ExistentialRisk Studies.
Access at https://openbookpublishers.com/books/10.11647/obp.0336
‘Before It’s Too Late, Buddy’
This is what happens when you spend two years unmasking a well-funded Silicon Valley “apocalypse cult.”
by @xriskology
"This scenario is eerily similar to what Yudkowsky is advocating: military actions that could cause a genocidal nuclear catastrophe, if necessary to keep the techno-utopian dream alive."
#TESCREAL #Longtermism #EA #EffectiveAltruism #Yudkowsky #AGI #AGIPanic #ExistentialRisk #Bostrom #Häggström #Musk #SiliconValley
https://www.truthdig.com/articles/before-its-too-late-buddy/
Termination Zero: Our predicament may be unprecedented: we face an existential risk with no historical precedent or analogy. The risk is that we create a superintelligent artificial agent that could outsmart and overpower us, and whose goals and values could be incompatible with ours.
#terminationzero #ai #existentialrisk
@catselbow it's a great cyclical story we like to tell ourselves ... The #Croods movie was the first thing that came to mind for me
To sum up the above: The #AI #apocalypse has a hardware problem as well as a software problem.
#ai #apocalypse #existentialrisk
More on #AI and #ExistentialRisk. We can't make meaningful predictions about #AGI -- when it will arrive, or what it will do if it does. But there's a further reason to discount vague prophecies of doom. What is it that kills us? Specifically?
In the Dr Strangelove scenario, a machine controls a nuclear doomsday device which can extinguish all life on Earth. But as the Soviet ambassador in that movie said, this is not a thing a sane person would do. How about if we... don't do it?
1/3
#AI panic over #ExistentialRisk requires a chain of assumptions:
1. Artificial General Intelligence (#AGI) is coming soon.
2. AGI will be willing and able to kill all humans.
3. We can take meaningful action to prevent 1 and/or 2.
I don't agree with any of these.
All of them depend on:
0. We have a working knowledge of the properties of AGI.
Assumption 0 is false. AGI doesn't exist. We have no idea how to build it. Large language models (#LLM) are not AGI and never will be.
#AI #GenerativeAI #ExistentialRisk #AGI: "Far-future, speculative concerns often articulated in calls to mitigate “existential risk” are typically focused on the extinction of humanity. If you believe there is even a small chance of that happening, it makes sense to focus some attention and resources on preventing that possibility. However, I am deeply sceptical about narratives that exclusively centre speculative rather than actual harm, and the ways these narratives occupy such an outsized place in our public imagination.
We need a more nuanced understanding of existential risk – one that sees present-day harms as their own type of catastrophe worthy of urgent intervention and sees today’s interventions as directly connected to bigger, more complex interventions that may be needed in the future.
Rather than treating these perspectives as though they are in opposition with one another, I hope we can accelerate a research agenda that rejects harm as an inevitable byproduct of technological progress. This gets us closer to a best-case scenario, in which powerful AI systems are developed and deployed in safe, ethical and transparent ways in the service of maximum public benefit – or else not at all."
#AI #ExistentialRisk #Nuclear #NuclearEnergy #NuclearPower: "At the moment, there is no hard scientific evidence of an existential and catastrophic risk posed by AI.
Many of the concerns remain hypothetical and are derailing public attention from the already-pressing ethical and legal risks stemming from AI and their subsequent harms.
This is not to say that AI risks do not exist: they do. A growing body of evidence documents the harm these technologies can pose, especially on those most at risk such as ethnic minorities, populations in developing countries, and other vulnerable groups.
Over-dependency on AI, especially for critical national infrastructure (CNI), could be a source of significant vulnerability – but this would not be catastrophic for the species.
Concerns over wider, existential AI risks do need to be considered, carefully step-by-step, as the evidence is gathered and analysed. But moving too fast to control could also do harm."
https://www.chathamhouse.org/2023/06/nuclear-governance-model-wont-work-ai
#AI #AGI #AIDoomsterism #Tescreal #ExistentialRisk: "The problem, though, is that there’s no plausible account of how an AGI could realistically accomplish this, and claiming that it would employ “magic” that we just can’t understand essentially renders the whole conversation vacuous, since once we’ve entered the world of magic, anything goes. To repurpose a famous line from Ludwig Wittgenstein: “What we cannot speak about we must pass over in silence.”
This is why I’ve become very critical of the whole “AGI existential risk” debate, and why I find it unfortunate that computer scientists like Geoffrey Hinton and Yoshua Bengio have jumped on the “AI doomer” bandwagon. We should be very skeptical of the public conversation surrounding AGI “existential risks.” Even more, we should be critical of how these warnings have been picked up and propagated by the news, as they distract from the very real harms that AI companies are causing right now, especially to marginalized communities.
If anything poses a direct and immediate threat to humanity, it’s the TESCREAL bundle of ideologies that’s driving the race to build AGI, while simultaneously inspiring the backlash of AI doomers who, like Yudkowsky, claim that AGI must be stopped at all costs — even at the risk of triggering a thermonuclear war."
https://www.truthdig.com/articles/does-agi-really-threaten-the-survival-of-the-species/
In today's post I do a deep-dive into the 'AI as existential risk' discourse in the British press and look at key players and warnings #AI #ChatGPT #existentialrisk https://blogs.nottingham.ac.uk/makingsciencepublic/2023/06/15/artificial-intelligence-and-existential-risk-from-alarm-to-alignment/
For my non-technical mutuals who are struggling to understand how #ArtificialIntelligence can pose an existential threat; for those who wonder how a ‘program’ and algorithms might ultimately threaten their very lives, this video provides a simplistic but plausible explanatory scenario I’m sure you’ll find useful. It’s based on, of all things, ‘stamp collecting’, and the actual story starts about 3 minutes in, so stick with it. #AI #explanation #existentialrisk #aiethics #singularity
https://youtube.com/watch?v=tcdVC4e6EV4&feature=share
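To make the video's stamp-collector idea concrete, here's a minimal toy sketch of my own (not from the video; the actions and payoff numbers are invented for illustration): an agent whose objective counts only expected stamps will pick the most extreme option, because harmful side effects simply don't appear in its score.

```python
# Toy model of the "stamp collector" thought experiment: an agent that
# ranks candidate actions purely by expected stamps. The actions and
# payoff numbers below are invented for illustration.
EXPECTED_STAMPS = {
    "buy stamps online": 1e2,
    "hack printers to print stamps": 1e6,
    "convert all available matter into stamps": 1e12,
}

def choose_action(actions):
    """Maximize expected stamps -- and nothing else."""
    return max(actions, key=lambda a: EXPECTED_STAMPS.get(a, 0.0))

# Side effects are invisible to the objective, so the most extreme
# action wins by design.
print(choose_action(EXPECTED_STAMPS))
```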
The 15 Biggest Risks Of Artificial Intelligence
By @BernardMarr for @Forbes
#AI #generativeAI #generativeart #Privacy #Security #misinformation #transparency #connectivity #loss #Humanity #HumanRights #existentialrisk #futureofhumanity #future #risks
https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/
Nuclear and AI, both can bring woe
Their danger we ought to know
But AGI could be
An even worse enemy
Than nuclear war that we dread so
#nuclearwar #ai #agi #existentialrisk #limerick #poetry
https://www.vox.com/future-perfect/2023/4/27/23699511/existential-risk-ai-nuclear-war-climate
Brand-new: A special issue of the Intergenerational Justice Review on existential risks for future generations is out: https://www.if.org.uk/2023/04/06/brand-new-a-special-issue-of-the-intergenerational-justice-review-on-existential-risks-for-future-generations/
With articles and reviews of @willmacaskill's "What We Owe The Future", and "The Precipice" by @tobyordoxford
#philosophy #climatecrisis #ExistentialRisk #intergenerationalfairness
Hot science is about exploring a new field you don't know much about. Existential risk studies explore civilization-ending events that, by definition, haven't happened yet. Read more in my latest post on how we can use hot science to study the end of the world.
https://existentialcrunch.substack.com/p/we-need-hot-existential-risk-research
#science #ExistentialRisk #HotScience #ColdScience #ExistentialCrunch
The Dangerous Ideas of “#Longtermism” and “#ExistentialRisk” ❧ Current Affairs https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk
>So-called rationalists have created a disturbing #SecularReligion that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites.
It's also an excellent illustration of the “arsonist firefighter” expression!
#pseudorationalism #pseudophilosophy #secularreligion #existentialrisk #longtermism
Fascinating #sensemaking discussion about the #existentialRisk of #Moloch's #perverseIncentives, #climatecrisis, zero-sum games, #AI, and #complexity
https://www.youtube.com/watch?v=KCSsKV5F4xc
I'm glad there's such a thing as the Center for the Study of Existential Risk #ExistentialRisk https://www.scientificamerican.com/article/will-humans-ever-go-extinct/