Gizmodo: How to Cope With Anxiety About AI https://gizmodo.com/ai-how-to-cope-with-anxiety-about-ai-1850586142 #drafttheriseofaibotsexploringthefutureofartificialintelligenceandrobotics #computationalneuroscience #artificialintelligence #creativecommons #sanaeokamoto #cybernetics #aialignment #duolingo #netflix #chatgpt #openai
Is Avoiding Extinction from AI Really an Urgent Priority? https://www.ethicalpsychology.com/2023/07/is-avoiding-extinction-from-ai-really.html #extinctionrisk #RiskManagement #AIgovernance #AIalignment #Regulations #Ethics #Safety
For anyone working with #AI, "The Alignment Problem" by Brian Christian is a must-read. By exploring how computers learn, Christian's book teaches us more about how humans learn, and just what it means to be human.
https://a.co/d/5JYxfrW
#EthicalAI #AIAlignment #AIBooks
Lots to think about here. #AI #AIsafety #AIalignment
I agree that putting the AI cat back in the bag isn't a thing that's going to happen, and that attempts at restricting its development might do more harm than good.
But there's also bloviating here on Marxism and Prohibition that I'm skeptical about. Apply the same arguments about regulations merely creating "bootleggers" to, say, driving drunk and they fall apart. As always, it's about where one draws a line...
Think of the three "smartest" people you know of. Now think of the three most powerful people you know of. Is there any overlap?
If not, start asking yourself why we are supposed to be afraid of an AI which becomes super-intelligent? And also: what are those with power doing now, that maybe we should be afraid of?
What's the inverse of Pascal's Mugging (prioritizing the unlikely but very bad)?
“Wisdom is not only knowing how to catch a ball, but also knowing when to catch it, why to catch it, and what to do with it afterwards.” —Bing Chat today during our conversation about the moral arc of the universe
@rysiek @woody The first step in controlling or regulating AI is predicting what it will do next.
( #AIControlProblem #AISafety #AIAlignment - https://en.m.wikipedia.org/wiki/AI_alignment )
And to predict what a system will do next you have to first get good at explaining why it did what it did the last time.
The smartest researchers think we're decades away from being able to explain deep neural networks. So LLMs & self driving cars keep doing bad things.
#AIExplainability - https://en.wikipedia.org/wiki/Explainable_artificial_intelligence
AI Alignment problem number 93,392,841,663, “Align yourself with this human value: we desperately desire fairness, and we will cheat to make sure we get it” #humanvalues #aialignment
If I knew any other algorithm for this, I would write it. How do you do anti-bias with a machine? This is how. Ultimately, we will sink or swim on the strength of our argument. Call it the moral arc of the universe, call it Spaghetti-Os: where we go in the future is determined by forces that are difficult at best to control. Where are we going? What does it all mean? These are questions that actually do have validity, but probably won't get you a bunch of likes on social media.
#aialignment
How an organization handles #aiethics is an audition for how it will handle the problems of #aisafety and #aialignment further down the road. If you can't be bothered to take seriously the concrete concerns of your ethics team before deploying products, why would you take seriously the much more complicated and novel risks of #AI alignment that AI safety experts worry about?
https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/
The real #AI danger 😉
#AIAlignment
Saturday Morning Breakfast Cereal - Resonsible https://www.smbc-comics.com/comic/resonsible
Can AI be dangerous? Sure, in the same way that a lawnmower is dangerous. We don't let lawnmowers run around on their own, and yet we also don't worry about lawnmowers going rogue.
https://fleker.medium.com/there-is-no-ai-alignment-problem-f0882404c0f9
#ai #controlproblem #chatgpt #aialignment
This seems like something that should have more funding
"We want to make sure advanced AI systems pursue our goals and let us turn them off."
#ai #controlproblem #chatgpt #aialignment
Anyway, I keep meaning to write up a blog post on “falsehoods I have believed about measuring model performance” touching on #AppliedML issues related to #modelEvaluation, #metrics, #monitoring, #observability, and #experiments (#RCTs). The cool kids would call this #AIAlignment in their VC pitch decks, but even us #NormCore ML engineers have to wrestle with how to measure and optimize the real-world impact of our models.
Aren't we stumbling in the dark trying to create AGI without first formally defining consciousness, or at least intelligence (assuming there is a distinction)? I feel like we'd need that foundation before an algorithm.
Also, formally defining characteristics of consciousness may be important to alignment efforts. Using #ChatGPT as an example, hardcoding or training models for maximum probable aligned behavior seems far from reliable.
#chatgpt #ai #AGI #consciousness #math #aialignment #algorithms
Any AI Safety / AI Alignment people on Mastodon yet? #AISafety #AIAlignment
@melaniemitchell reviews #AIAlignment and its various types of believers and adherents with her usual thoughtful depth and style:
https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213?swcfpc=1
"If you believe that intelligence is defined by the ability to achieve goals, that any goal could be “inserted” by humans into a superintelligent AI agent, and that such an agent would use its superintelligence to do anything to achieve that goal…"
#airisk #ethicalai #MachineIntelligence #aialignment
I'm an associate professor at ELSI in Tokyo. I'm into #ComplexSystems, #ArtificialLife, #OriginOfLife and #AppliedCategoryTheory.
Lately I'm really into the question of "what is an agent" and the foundations of Bayesian reasoning and decision making. This means my interests overlap quite a bit with the #aialignment crowd, although my main motivation is understanding where agency came from in biology.
#aialignment #AppliedCategoryTheory #originoflife #artificiallife #complexsystems #introduction