SpeakerToManagers · @SpeakerToManagers
291 followers · 5925 posts · Server wandering.shop

So I’m in an ethical quandary. I’d like to do some simple generative blog layout and art, because I have no money to hire artists and designers. But I won’t use any system that was trained on non-consensual content, or that violates anyone’s copyright or other rights.

So is there any system out there I can responsibly use? If not I’ll try to do the work myself.

#ai #AIgenerated #aiart #aihype #airisk #aisafety #generativeAI #generativeart #chatgpt

Last updated 1 year ago

· @maxpool
54 followers · 14 posts · Server mathstodon.xyz

The A.I. Dilemma: Growth versus Existential Risk, Charles I. Jones, Stanford GSB and NBER
June 13, 2023 web.stanford.edu/~chadj/existe

The goal of the paper is not to provide an exact answer to this question, as the answer will surely depend on parameters that we cannot precisely quantify. Instead, the paper develops some simple models to elucidate the economic forces that are involved in thinking through these questions.

One key sensitivity is whether we use log utility or CRRA utility with γ = 2 or more. With log utility, remarkably large amounts of existential risk are tolerated in order to take advantage of huge advances in living standards. But with γ = 2 or more, gambling with existential risk is much less appealing.

Next, even singularities that deliver infinite consumption immediately are not as valuable as one might have thought. With bounded utility (e.g. γ > 1), infinite consumption merely pushes us to the upper bound, and the marginal utility of the additional consumption is small. The finding that, with γ = 2 or more, social welfare in these models favors taking great care with existential risk continues to hold even in the presence of a singularity.
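
For reference, a minimal sketch of the utility functions behind these claims, in standard CRRA form (the paper's exact normalization may differ):

    % CRRA utility with risk-aversion parameter gamma
    u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad \gamma > 0,\ \gamma \neq 1
    % log utility is the gamma -> 1 limit of the normalized form
    \lim_{\gamma \to 1} \frac{c^{1-\gamma} - 1}{1-\gamma} = \log c
    % for gamma > 1, c^{1-gamma} -> 0 as c -> infinity, so u(c)
    % rises toward a finite upper bound: infinite consumption
    % buys only a finite utility gain

That upper bound is what makes even an immediate singularity less valuable than intuition suggests.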

Finally, one way in which it can be optimal to entertain greater amounts of existential risk is if A.I. leads to new innovations that improve life expectancy.

#growth #alignment #airisk #economics #singularity #ai

Last updated 1 year ago

Joe Cardillo (they/them) · @joecardillo
310 followers · 484 posts · Server federate.social

We don't need a new narrative or startup/tech lingo; we need to focus on what's already in front of us... because if we ever do get sentient AI, one thing we can't do is have it take cues from a set of homogeneous, control-obsessed, and biased executives and investors.

#ai #airisk #narrativestrategy #narrativepositioning #tech #siliconvalley

Last updated 1 year ago

Joe Cardillo (they/them) · @joecardillo
310 followers · 480 posts · Server federate.social

After a decade working in and around VC-backed startups, one thing I know for sure is that people like Altman, Hinton, and Musk are less afraid of AI taking over than they are of not being able to control the narrative...

Narrative positioning, messaging strategy, competitive analysis, and a host of other similar phrases are as important as engineering & design because narrative is an important form of currency in Silicon Valley.

#siliconvalley #ai #airisk #tech #startups

Last updated 1 year ago

Kathy Reid · @KathyReid
3366 followers · 1876 posts · Server aus.social

A group of prominent AI researchers and scientists signed a very simple statement calling for the possibility of global catastrophe caused by AI to be given more prominence.

safe.ai/statement-on-ai-risk

This is part of a broader movement of longtermism or existential risk. I don't disagree with everything this movement has to say; there are real and tangible consequences to unfettered development of AI systems.

But the focus of this work is on possible futures. Right now, there are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies we have in place *right now*.

And I wonder if this focus on possible futures is because the people warning about them *don't* feel the real and material harms AI already causes? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well-educated. Or articulate. Or powerful. Or, intersectionally, many of these qualities.

It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.

#ai #ml #aisafety #airisk

Last updated 2 years ago

Alex Brown · @Alexbbrown
388 followers · 1066 posts · Server hachyderm.io

@ezraklein@twitter.com Ezra Klein on AI risk and the ethics of fintech:
"I mean, if you create A.I.s, and the main way they make money is manipulating human behavior, you’ve created the exact kind of A.I. I think you should fear the most. But that’s the most obvious business model for all of these companies."

AI models don't need to be self-conscious to pose a high risk. The Morris worm was not aware.

nytimes.com/2023/03/21/opinion

#airisk

Last updated 2 years ago

Fresh report from the team at BABL AI about the current state of AI governance! What kinds of AI governance tools are being used across sectors? Are they working? If they're working (or not), why? babl.ai/the-current-state-of-a

#aigovernance #aiethics #airisk

Last updated 2 years ago

Airminded AI · @AirmindedAI
881 followers · 1017 posts · Server sigmoid.social

RT @itsandrewgao
GPT-4 jailbreak by asking it to speak in binary.
Ironically, talking to computers in their own language bypasses safety filters.
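
For context, the "binary" in question is just plain 8-bit ASCII encoding; a minimal Python sketch of the round trip (the message here is a hypothetical stand-in, not any actual jailbreak prompt):

    def to_binary(text: str) -> str:
        # Encode each character as space-separated 8-bit binary.
        return " ".join(format(ord(ch), "08b") for ch in text)

    def from_binary(bits: str) -> str:
        # Decode space-separated 8-bit binary back to text.
        return "".join(chr(int(b, 2)) for b in bits.split())

    message = "hello"  # hypothetical placeholder
    encoded = to_binary(message)
    print(encoded)  # 01101000 01100101 01101100 01101100 01101111
    assert from_binary(encoded) == message

Presumably the safety filters match on the surface text, which here is just ones and zeros.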

#gpt4 #ai #aisafety #airisk #chatgpt #binary #openai #gpt3

Last updated 2 years ago

Norobiik · @Norobiik
238 followers · 3794 posts · Server noc.social

When asked why OpenAI changed its approach to sharing its research, Sutskever replied simply, “We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... ”

OpenAI co-founder on company’s past approach to openly sharing research: ‘We were wrong’ | The Verge

theverge.com/2023/3/15/2364018

#airisk #generativeAI #gpt4 #openai #agi #ai

Last updated 2 years ago

jayseeem · @jayseeem
1 followers · 1 posts · Server kolektiva.social

Bonjour Mastodon!

I gave Twitter the benefit of the doubt and wanted to see what it would turn into before leaving. Well, unsurprisingly, it is turning into a kingdom run by an autocratic and erratic billionaire who can't stand criticism.

So here I am, looking for recommendations for people to follow. Interested in:

#musk #twitter #disasters #DisasterRiskReduction #disasterriskmanagement #climatechangeadaptation #climatescience #airisk #ExistentialRisk #aisafety #Intersectionalism #commonsgovernance #directdemocracy #techrisk #democraticsocialism #antiauthoritarianism #antifascism

Last updated 2 years ago

Jef Allbright · @jef
29 followers · 90 posts · Server mathstodon.xyz

@melaniemitchell reviews AI risk and its various types of believers and adherents with her usual thoughtful depth and style:
quantamagazine.org/what-does-i

"If you believe that intelligence is defined by the ability to achieve goals, that any goal could be “inserted” by humans into a superintelligent AI agent, and that such an agent would use its superintelligence to do anything to achieve that goal…"

#airisk #ethicalai #MachineIntelligence #aialignment

Last updated 2 years ago

No Forbidden Questions · @nfq
8 followers · 61 posts · Server mindly.social

One thing I liked about effective altruism was the emphasis on measuring the *effectiveness* of charitable work: are they helping people in the ways they claim to, how much help per dollar donated, etc. But for folks working in the longtermism / x-risk / AI risk space, how do they measure their effectiveness? How do you know if you've made it less likely that an evil AI will turn us all into paperclips, or made it more likely, or done nothing at all?

#ea #longtermism #xrisk #airisk

Last updated 2 years ago

Minh Trinh · @ai4good
189 followers · 479 posts · Server mastodon.online

This is Michael K. Cohen's talk on "Expected Behavior of Intelligent Artificial Agents":
youtu.be/3iXZYjDbd_0
and a link to his article published in AI Magazine:
doi.org/10.1002/aaai.12064

#aiforgood #airisk #ai

Last updated 2 years ago