So I’m in an ethical quandary. I’d like to do some simple generative blog layout and art, because I have no money to hire artists and designers. But I won’t use any system that was trained on nonconsensual content or violates anyone’s copyright or other rights.
So is there any system out there I can responsibly use? If not I’ll try to do the work myself.
#ai #AIgenerated #aiart #aihype #airisk #aisafety #generativeAI #generativeart #chatgpt
The A.I. Dilemma: Growth versus Existential Risk, Charles I. Jones, Stanford GSB and NBER
June 13, 2023 https://web.stanford.edu/~chadj/existentialrisk.pdf
The goal of the paper is not to provide an exact answer to this question, as the answer will surely depend on parameters that we cannot precisely quantify. Instead, the paper develops some simple models to elucidate the economic forces that are involved in thinking through these questions.
One key sensitivity is whether we use log utility or CRRA utility with γ = 2 or more. With log utility, remarkably large amounts of existential risk are tolerated in order to take advantage of huge advances in living standards. But with γ = 2 or more, gambling with existential risk is much less appealing.
Next, even singularities that deliver infinite consumption immediately are not as valuable as one might have thought. With bounded utility (e.g. γ > 1), infinite consumption merely pushes us to the upper bound, and the marginal utility of the additional consumption is small. The finding that, with γ = 2 or more, social welfare in these models favors great care with existential risk continues to hold even in the presence of a singularity.
Finally, one way in which it can be optimal to entertain greater amounts of existential risk is if A.I. leads to new innovations that improve life expectancy.
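The bounded-utility point can be made concrete with a quick sketch of CRRA utility (my own illustration, not code from the paper; the function name is hypothetical):

```python
import math

def crra_utility(c, gamma):
    """CRRA utility: log(c) when gamma == 1, else c^(1-gamma)/(1-gamma)."""
    if gamma == 1:
        return math.log(c)
    return c ** (1 - gamma) / (1 - gamma)

# Log utility (gamma = 1) is unbounded: every doubling of consumption
# adds log(2), so enormous gains can justify enormous gambles.
# With gamma = 2, utility is -1/c, bounded above by 0: starting from
# consumption c0, even infinite consumption gains at most 1/c0 utils,
# so risking everything for huge upside looks far less attractive.
for c in (1, 100, 1e9):
    print(c, crra_utility(c, 1), crra_utility(c, 2))
```

Running it shows the log column growing without bound while the γ = 2 column creeps toward zero from below, which is exactly why the tolerated existential risk is so sensitive to that one parameter.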
#growth #alignment #airisk #economics #singularity #ai
Tech (Global News): Biden says AI risks to security, economy must be addressed https://globalnews.ca/news/9782052/biden-artificial-intelligence-ai/ #globalnews #TechNews #Technology #ArtificialIntelligence #Nationalsecurity #U.S.News #JoeBiden #chatgpt #Economy #AIrisk #OpenAI #Tech #tech #AI
We don't need a new narrative or startup/tech lingo; we need to focus on what's already in front of us... because if we ever do get sentient AI, one thing we can't do is have it take cues from a set of homogeneous, control-obsessed, and biased executives and investors.
#AI #AIRisk #NarrativeStrategy #NarrativePositioning #Tech #SiliconValley
Working in/around VC-backed startups for a decade, one thing I know for sure is that people like Altman, Hinton, and Musk are less afraid of AI taking over than they are of not being able to control the narrative...
Narrative positioning, messaging strategy, competitive analysis, and a host of other similar phrases are as important as engineering & design because narrative is an important form of currency in Silicon Valley.
#siliconvalley #ai #airisk #tech #startups
A group of prominent #AI and #ML scientists signed a very simple statement calling for the possibility of global catastrophe caused by AI to be given more prominence.
https://www.safe.ai/statement-on-ai-risk
This is part of a broader movement of #AISafety or #AIRisk. I don't disagree with everything this movement has to say; there are real and tangible consequences to unfettered development of AI systems.
But the focus of this work is on possible futures. Right now, currently, there are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies we have in place *right now*.
And I wonder if this focus on possible futures is because the people warning about them *don't* feel the real and material harms #AI already causes? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well educated. Or articulate. Or powerful. Or intersectionally, many of these qualities.
It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.
@ezraklein@twitter.com Ezra Klein on the #AIrisk and the ethics of fintech:
"I mean, if you create A.I.s, and the main way they make money is manipulating human behavior, you’ve created the exact kind of A.I. I think you should fear the most. But that’s the most obvious business model for all of these companies."
AI models don't need to be self-conscious to pose a high risk. The Morris worm was not aware.
https://www.nytimes.com/2023/03/21/opinion/ezra-klein-podcast-kelsey-piper.html
Fresh report from the team at BABL AI about the current state of AI governance! What kinds of AI governance tools are being used across sectors? Are they working? If they're working (or not), why? #aigovernance #aiethics #airisk https://babl.ai/the-current-state-of-ai-governance/
When asked why OpenAI changed its approach to sharing its research, Sutskever replied simply, “We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, #AI — #AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea... ”
#OpenAI co-founder on company’s past approach to openly sharing research: ‘We were wrong’ | #GPT4 #GenerativeAI #AIRisk
Bonjour Mastodon!
I gave #Musk the benefit of the doubt and wanted to see what #Twitter would turn into before leaving. Well, unsurprisingly, it is turning into a kingdom run by an autocratic and erratic billionaire who can't stand criticism.
So here I am looking for recommendations for people to follow. Interested in #Disasters #DisasterRiskReduction #DisasterRiskManagement #ClimateChangeAdaptation #ClimateScience #AIRisk #ExistentialRisk #AISafety #Intersectionalism #CommonsGovernance #DirectDemocracy #TechRisk #DemocraticSocialism #Antiauthoritarianism #Antifascism
@melaniemitchell reviews #AIAlignment and its various types of believers and adherents with her usual thoughtful depth and style:
https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213?swcfpc=1
"If you believe that intelligence is defined by the ability to achieve goals, that any goal could be “inserted” by humans into a superintelligent AI agent, and that such an agent would use its superintelligence to do anything to achieve that goal…"
#airisk #ethicalai #MachineIntelligence #aialignment
One thing I liked about #EA was the emphasis on measuring the *effectiveness* of charitable work: are they helping people in the ways they claim to, how much help per dollar donated, etc. But for folks working in the #longtermism / #xrisk / #AIRisk space, how do they measure their effectiveness? How do you know if you've made it less likely that an evil AI will turn us all into paperclips, or made it more likely - or done nothing at all?
This is Michael K. Cohen's talk, "Expected Behavior of Intelligent Artificial Agents":
#ai #airisk #aiforgood https://youtu.be/3iXZYjDbd_0
and a link to his article published in #AI magazine
https://doi.org/10.1002/aaai.12064