Miguel Afonso Caetano · @remixtures
561 followers · 2219 posts · Server tldr.nettime.org

"At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.” It also hosts an annual conference and sponsors a student group, one of dozens of AI safety clubs that Open Philanthropy has helped support in the past year at universities around the country.

Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research. And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others."

washingtonpost.com/technology/

#ai #agi #aiapocalypse #aidoomsterism #stanford #siliconvalley

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
519 followers · 2172 posts · Server tldr.nettime.org

"The problem, though, is that there’s no plausible account of how an AGI could realistically accomplish this, and claiming that it would employ “magic” that we just can’t understand essentially renders the whole conversation vacuous, since once we’ve entered the world of magic, anything goes. To repurpose a famous line from Ludwig Wittgenstein: “What we cannot speak about we must pass over in silence.”

This is why I’ve become very critical of the whole “AGI existential risk” debate, and why I find it unfortunate that computer scientists like Geoffrey Hinton and Yoshua Bengio have jumped on the “AI doomer” bandwagon. We should be very skeptical of the public conversation surrounding AGI “existential risks.” Even more, we should be critical of how these warnings have been picked up and propagated by the news, as they distract from the very real harms that AI companies are causing right now, especially to marginalized communities.

If anything poses a direct and immediate threat to humanity, it’s the TESCREAL bundle of ideologies that’s driving the race to build AGI, while simultaneously inspiring the backlash of AI doomers who, like Yudkowsky, claim that AGI must be stopped at all costs — even at the risk of triggering a thermonuclear war."

truthdig.com/articles/does-agi

#ai #agi #aidoomsterism #tescreal #existentialrisk

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
515 followers · 2160 posts · Server tldr.nettime.org

"Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.

Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.

It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.

Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).

These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how."

nytimes.com/2023/06/30/opinion

#ai #agi #agism #neoliberalism #aidoomsterism #capitalism

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
509 followers · 2132 posts · Server tldr.nettime.org

"Tech firms must formulate industry standards for responsible development of AI systems and tools, and undertake rigorous safety testing before products are released. They should submit data in full to independent regulatory bodies that are able to verify them, much as drug companies must submit clinical-trial data to medical authorities before drugs can go on sale.

For that to happen, governments must establish appropriate legal and regulatory frameworks, as well as applying laws that already exist. Earlier this month, the European Parliament approved the AI Act, which would regulate AI applications in the European Union according to their potential risk — banning police use of live facial-recognition technology in public spaces, for example. There are further hurdles for the bill to clear before it becomes law in EU member states and there are questions about the lack of detail on how it will be enforced, but it could help to set global standards on AI systems.

Further consultations about AI risks and regulations, such as the forthcoming UK summit, must invite a diverse list of attendees that includes researchers who study the harms of AI and representatives from communities that have been or are at particular risk of being harmed by the technology."

nature.com/articles/d41586-023

#ai #generativeAI #bigtech #regulation #aidoomsterism

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
510 followers · 2100 posts · Server tldr.nettime.org

"Below, we’ve put together a kind of scorecard. IEEE Spectrum has distilled the published thoughts and pronouncements of 22 AI luminaries on large language models, the likelihood of an AGI, and the risk of civilizational havoc. We scoured news articles, social media feeds, and books to find public statements by these experts, then used our best judgment to summarize their beliefs and to assign them yes/no/maybe positions below. If you’re one of the luminaries and you’re annoyed because we got something wrong about your perspective, please let us know. We’ll fix it.

And if we’ve left out your favorite AI pundit, our apologies. Let us know in the comments section below whom we should have included, and why. And feel free to add your own pronouncements, too."

spectrum.ieee.org/artificial-g

#ai #agi #aidoomsterism

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
507 followers · 2074 posts · Server tldr.nettime.org

"The week before, Altman told the US Senate that his worst fears were that the AI industry would cause significant harm to the world. Altman’s testimony helped spark calls for a new kind of agency to address such unprecedented harm.

With the Overton window shifted, is the damage done? “If we're talking about the far future, if we're talking about mythological risks, then we are completely reframing the problem to be a problem that exists in a fantasy world and its solutions can exist in a fantasy world too,” says Whittaker.

But Whittaker points out that policy discussions around AI have been going on for years, longer than this recent buzz of fear. “I don't believe in inevitability,” she says. “We will see a beating back of this hype, it will subside.”"

technologyreview.com/2023/06/1

#ai #generativeAI #hype #aidoomsterism

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
469 followers · 1780 posts · Server tldr.nettime.org

"We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?"

#ai #generativeAI #aidoomsterism #regulation

Last updated 2 years ago