Giskard · @Giskard
67 followers · 116 posts · Server fosstodon.org

Exciting news! 🚀
Our Co-Founder/CPO, @jeanmarie_johnm, recently shared insights with @safetydet in an in-depth interview on AI Safety and Security. Discover how we're identifying vulnerabilities and ensuring the security of AI models.
Full article: t.co/nr6yJywivB

#aisafety #vulnerabilities #security #models

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
738 followers · 2897 posts · Server tldr.nettime.org

"The current ‘AI moment’ is a critical inflection point for the UK and the world. As AI systems become more complex and capable, organisations across all sectors of the global economy are looking to develop, deploy and make use of their potential to benefit people and society.

With this opportunity come considerable risks: ranging from bias, misuse, and system failure to structural harms, such as the concentration of economic power in the hands of a small number of companies.

Without concerted action, we may unwittingly lock ourselves into a set of technologies and economic dynamics that fail to benefit people, communities, or society as a whole.

Enter the UK Government, which is hosting an ‘AI Safety Summit’ on 1 and 2 November at a venue synonymous with overcoming wicked technical problems: Bletchley Park, where Allied codebreakers deciphered the German ‘Enigma’ code during World War II.

The Government has recently set out objectives for the Summit, including reaching a ‘shared understanding’ of AI risks, agreement on areas of potential international collaboration and showcasing ‘AI for good.’

Reaching consensus on these topics will not be easy – so what can Government do to improve its chances of success?"

adalovelaceinstitute.org/blog/

#uk #ai #aisafety

Last updated 1 year ago

Giskard · @Giskard
67 followers · 110 posts · Server fosstodon.org

🌟 Passionate about AI safety? Join our community program!

📚 Whether you're a Data Scientist, ML Engineer, AI Ethicist, or business stakeholder, share your insights through articles & tutorials. You can gain recognition & get paid 💰

Learn more: 🔗 giskard.ai/write-for-the-commu

#aisafety #community #datascientist #ml #aiethicist

Last updated 1 year ago

Giskard · @Giskard
65 followers · 106 posts · Server fosstodon.org

It was a pleasure to be part of this event and witness the AI community come together, especially on topics as critical as AI safety and LLMs' security. 🛡 🙌 We had the opportunity to engage with leading AI teams from @huggingface, @stabilityai, Cohere, and Anthropic. [2/5]

#aicommunity #aisafety #cohere #anthropicai

Last updated 1 year ago

Bornach · @bornach
149 followers · 1868 posts · Server masto.ai

Yannic Kilcher comments on generative recipe AIs making chlorine gas, then creates an AI that accepts nails as an ingredient when asked to.
youtu.be/BMAu7hAcjqU

#generativeai #aisafety #scicomm #aiethics #ai #artificialintelligence

Last updated 1 year ago

Giskard · @Giskard
64 followers · 103 posts · Server fosstodon.org

Amazing 2nd day at @defcon 😻

This year we're happily sponsoring the AI Village 🙌 and we've contributed to some of the challenges for their traditional CTF that will start on September 1st.

👉 DM us to keep the discussion going.

#aivillage #ctf #aisafety #llms #ml #vulnerabilities

Last updated 1 year ago

Giskard · @Giskard
64 followers · 102 posts · Server fosstodon.org

Greetings from DEF CON 31! 👋

🐢 The Giskard team is now at DEF CON and we'll be happy to meet you. Join us at the AI Village for the GenAI red team event.

📩 DM us if you want to meet and discuss AI safety, LLM testing, and MLOps.

#Defcon31 #dc31 #aivillage #genai #redteam #aisafety #llms #ai #testing #MLops

Last updated 1 year ago

IT News · @itnewsbot
3602 followers · 269863 posts · Server schleuss.online

AI-powered grocery bot suggests recipe for toxic gas, “poison bread sandwich” (credit: PAK'nSAVE)

When given a list of harmful ingre... - arstechnica.com/?p=1960122

#ai #tech #openai #biz #gpt #paknsave #aisafety #aiethics #redteaming #newzealand #machinelearning #largelanguagemodels

Last updated 1 year ago

SeĂĄn Fobbe · @seanfobbe
1678 followers · 782 posts · Server fediscience.org

In this new talk, Yann LeCun floats the idea that "evil AI" will be controlled by the "Good Guys’ AI police".

I assume it will be less fun when someone figures out how to build Nazi AI...

LeCun's link to slides: drive.google.com/file/d/1wzHoh

#aisafety #police #ai

Last updated 1 year ago

Cory Doctorow's linkblog · @pluralistic
45937 followers · 43757 posts · Server mamot.fr

This is the true "AI safety" risk. It's not that a chatbot will become sentient and take over the world - it's that the original artificial lifeform, the limited liability company, will use "AI" to accelerate its murderous shell-game until we can't spot the trick:

pluralistic.net/2023/06/10/in-

32/

#aisafety

Last updated 1 year ago

Giskard · @Giskard
63 followers · 100 posts · Server fosstodon.org

🐢 At Giskard, we're creating a robust framework for testing ML models effectively. We help identify biases and errors in AI models, from tabular models to LLMs. Participating in DEFCON allows us to collaborate with leading experts and share our commitment to AI safety. [3/4]

#ml #testing #models #biases #errors #tabular #llms #aisafety

Last updated 1 year ago

Giskard · @Giskard
63 followers · 99 posts · Server fosstodon.org

🙌 We'll join you at the AI Village for the GenAI red team CTF! It's a great opportunity to show the potential of GenAI, and emphasize the importance of AI safety 🛡️

We've contributed to some of the challenges for the AIVillage and can’t wait to have you try them out!🤯 [2/4]

#aivillage #genai #redteam #aisafety #ctf

Last updated 1 year ago

SpeakerToManagers · @SpeakerToManagers
291 followers · 5925 posts · Server wandering.shop

So I’m in an ethical quandary. I’d like to do some simple generative blog layout and art, because I have no money to hire artists and designers. But I won’t use any system that was trained on non-consensual content, or violates anyone’s copyright or other rights.

So is there any system out there I can responsibly use? If not I’ll try to do the work myself.

#ai #AIgenerated #aiart #aihype #airisk #aisafety #generativeAI #generativeart #chatgpt

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
619 followers · 2485 posts · Server tldr.nettime.org

"The European Union might be making strides toward regulating artificial intelligence (with passage of the AI Act expected by the end of the year), but the US government has largely failed to keep pace with the global push to put guardrails around the technology.

The White House, which said it “will continue to take executive action and pursue bipartisan legislation,” introduced an interim measure last week in the form of voluntary commitments for “safe, secure, and transparent development and use of AI technology.”

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to “prioritize research on societal risks posed by AI systems” and “incent third-party discovery and reporting of issues and vulnerabilities,” among other things.

But according to academic experts, the agreements fall far short."

emergingtechbrew.com/stories/2

#eu #usa #bigtech #ai #aisafety #selfregulation

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
617 followers · 2480 posts · Server tldr.nettime.org

"Like Oppenheimer before them, many merchants of AI believe their creations might change the course of history, and so they wrestle with profound moral concerns. Even as they build the technology, they worry about what will happen if AI becomes smarter than humans and goes rogue, a speculative possibility that has morphed into an unshakable neurosis as generative-AI models take in vast quantities of information and appear ever more capable. More than 40 years ago, Rhodes set out to write the definitive account of one of the most consequential achievements in human history. Today, it’s scrutinized like an instruction manual.

Rhodes isn’t a doomer himself, but he understands the parallels between the work at Los Alamos in the 1940s and what’s happening in Silicon Valley today. “Oppenheimer talked a lot about how the bomb was both the peril and the hope,” Rhodes told me—it could end the war while simultaneously threatening to end humanity. He has said that AI might be as transformative as nuclear energy, and has watched with interest as Silicon Valley’s biggest companies have engaged in a frenzied competition to build and deploy it."

theatlantic.com/technology/arc

#ai #aisafety #systemicrisk #nuclear

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
617 followers · 2480 posts · Server tldr.nettime.org

"An executive order can enshrine these best practices in at least four ways. First, it could require all government agencies developing, using, or deploying AI systems that affect people’s lives and livelihoods to ensure that these systems comply with best practices. For example, the federal government might make use of AI to determine eligibility for public benefits and identify irregularities that might trigger an investigation. A recent study showed that IRS auditing algorithms might be implicated in disproportionately high audit rates for Black taxpayers. If the IRS were required to comply with these guidelines, it would have to address this issue promptly.

Second, it could instruct any federal agency procuring an AI system that has the potential to “meaningfully impact [our] rights, opportunities, or access to critical resources or services” to require that the system comply with these practices and that vendors provide evidence of this compliance. This recognizes the federal government’s power as a customer to shape business practices. After all, it is the biggest employer in the country and could use its buying power to dictate best practices for the algorithms that are used to, for instance, screen and select candidates for jobs."

wired.com/story/the-white-hous

#usa #ai #biden #algorithms #aisafety

Last updated 1 year ago

mnl mnl mnl mnl mnl · @mnl
744 followers · 579 posts · Server hachyderm.io

As long as companies like OpenAI, Anthropic, Google and co don't put out high-quality training material explaining to users what LLMs are, how they function, how they can be abused and how to deal with that, it's really hard to take their getting all worked up about "AI safety" seriously.

A decent, level-headed online course with five 5-minute modules would solve so many immediate issues. Every SaaS company does this.

#LLMs #openai #aisafety

Last updated 1 year ago

Joel K. Pettersson · @joelkp
6 followers · 27 posts · Server fosstodon.org

Reading about the wild hopes, fears, and visions of the future tied to AI safety, and @emilymbender on how it, effective altruism, and longtermism are tied to racism and pseudoscience, I'm reminded of the ultimate dystopic AI scare, in Francis E. Dec's old rants.

In Dec's view, an ancient Slovene computer encyclopedia became the Worldwide Mad Deadly Communist Gangster Computer God, all humans its remote-controlled Frankenstein Slaves save Dec with his pure Polish genes.

rationalwiki.org/wiki/Francis_

#aisafety

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
592 followers · 2369 posts · Server tldr.nettime.org

"What does it mean to make AI systems safe, and what values and approaches must be applied to do so? Is it about “alignment,” ensuring that deployment of AI complies with some designers’ intent? Or is it solely about preventing the destruction of humanity by advanced AI? These goals are clearly insufficient. An AI system capable of annihilating humankind, even if we managed to prevent it from doing so, would still be among the most powerful technologies ever created and would need to abide by a much richer set of values and intentions. And long before such powerful “rogue” AI systems are built, many others will be made that people will use dangerously in their self-interest. Years of sociotechnical research show that advanced digital technologies, left unchecked, are used to pursue power and profit at the expense of human rights, social justice, and democracy. Making advanced AI safe means understanding and mitigating risks to those values, too. And a sociotechnical approach emphasizes that no group of experts (especially not technologists alone) should unilaterally decide what risks count, what harms matter, and to which values safe AI should be aligned. Making AI safe will require urgent public debate on all of these questions and on whether we should be trying to build so-called “god-like” AI systems at all."

science.org/doi/10.1126/scienc

#ai #aisafety #aiethics

Last updated 1 year ago