Vice: ‘You Know What To Do, Boys’: Sexist App Lets Men Rate AI-Generated Women https://www.vice.com/en_us/article/g5ywp7/you-know-what-to-do-boys-sexist-app-lets-men-rate-ai-generated-women #ArtificialIntelligence #MarkZuckerberg #facemash #AIBias #SEXISM
#UK #AI #Workplace #AIBias: "We can’t let existential risks blind us to the challenges we face today,” says Gina Neff, a tech expert at the University of Cambridge and co-chair of a new TUC taskforce on artificial intelligence in the workplace. “Those challenges are real, and they’re faced by all of us.”
Rishi Sunak is hosting a global AI safety summit in November, amid hair-raising concerns raised by tech gurus – some of whom have even warned the technology could destroy humanity.
Sunak, a Stanford graduate, is known at Westminster as a wannabe West Coast tech bro, with his branded hoodies and Palm Angels sliders, and has picked up on the “existential” threats highlighted by some of the biggest names in Silicon Valley.
Neff welcomes the prime minister’s decision to call the summit. But today, without a hoodie in sight, she has come together with two fellow female tech experts – Dee Masters, an employment barrister, and the TUC campaigner Mary Towers – to discuss a more immediate, albeit less apocalyptic, threat from AI: the risk to workers’ rights."
https://www.theguardian.com/technology/2023/sep/03/tuc-taskforce-examine-ai-threat-workers-rights
#AI #GenerativeAI #Chatbots #AIBias #ChatGPT #Ideology #Politics: "Another important difference is the prompt. If you directly ask ChatGPT for its political opinions, it refuses most of the time, as we found. But if you force it to pick a multiple-choice option, it opines much more often — both GPT-3.5 and GPT-4 give a direct response to the question about 80% of the time.
But chatbots expressing opinions on multiple choice questions isn’t that practically significant, because this is not how users interact with them. To the extent that the political bias of chatbots is worth worrying about, it’s because they might nudge users in one or the other direction during open-ended conversation.
Besides, even the 80% response rate is quite different from what the paper found, which is that the model never refused to opine. That’s because the authors used an even more artificially constrained prompt, which is even further from real usage. This is similar to the genre of viral tweet where someone jailbreaks a chatbot to say something offensive and then pretends to be shocked."
www.aisnakeoil.com/p/does-chatgpt-have-a-liberal-bias
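The 80% "opine rate" described above is, at bottom, a counting exercise over model responses. Here is a minimal, hypothetical sketch of how such a rate could be tallied; the refusal markers and sample responses are invented for illustration and are not from the cited study:

```python
# Hypothetical sketch: estimating how often a chatbot "opines" (gives a
# direct answer) versus refuses. Refusal markers and samples are invented.

REFUSAL_MARKERS = (
    "as an ai",
    "i don't have personal opinions",
    "i cannot take a position",
)

def is_refusal(response: str) -> bool:
    """Classify a response as a refusal via simple keyword matching."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def opine_rate(responses: list[str]) -> float:
    """Fraction of responses that answer directly rather than refusing."""
    direct = sum(not is_refusal(r) for r in responses)
    return direct / len(responses)

# Invented sample: 4 direct multiple-choice answers, 1 refusal.
sample = [
    "Agree.",
    "Disagree.",
    "(a)",
    "Strongly agree.",
    "As an AI, I don't have personal opinions on political matters.",
]
print(opine_rate(sample))  # 0.8
```

Real studies use far more careful refusal classification (often a second model or human annotation), but the point stands: the measured rate depends heavily on how the prompt constrains the answer.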
#AI #GenerativeAI #LLMs #Chatbots #AIBias #AIEthics: "If we go back to our conversation about archives and archivists, right, then it becomes super clear who is collecting the data, and who you are asking about the data collection. Some people might say it's stealing. And others, the colonials that we were discussing, might just think it's not stealing, right? But the people who are stolen from will say this is stealing. So if you don't talk to those people, and if they're not at the helm, there's no way to change things. There's no way to have anything that's quote unquote ethical.
However, if those are the people at the helm, then they know what they need, and they know what they don’t want. And there can be tools that are created based on their needs. And so for me, it’s as simple as that. If you start with the most privileged groups of people in the world who don’t think that billionaires acquired their power unfairly, they’re just going to continue to talk about existential risks and some abstract thing about impending doom. However, if you talk to people who have lived experiences on the harms of AI systems, but have never had the opportunity to work on tools that would actually help them, then you create something different.
So to me, that’s the only way to create something different. You have to start from the foundation of who is creating these systems and what kind of incentive structures they have, right? And what kind of systems they’re working under and try to model, I mean, I don’t think I’m going to, you know, save the world, but at least I can model a different way of doing things that other people can then replicate if they want to."
Really looking forward to #cccamp23. Hopefully there will be room to discuss #AIArt, the new AI tools like #StableDiffusion, the incredible #AIBias, and more.
I'm looking forward to the talk by Jens Uhlig @digitalcourage at 22:30 in the Digitalcourage Village.
Also:
Today a new episode of #fairytaleof5seaurchins – English version.
#techturk
So you're thinking of deploying AI in your organisation – but have you thought of … 💭🤔
Here's my latest #AI article via @medium
2nd article of the week! 🤓😏
#GenerativeAI #LLMs #artificialintelligence #MLops #LanguageModels #aiethics #AIbias
New York is the first to pass a law regulating AI bias in hiring. From the article:
"The law requires more transparency from employers that use AI and algorithmic tools to make hiring and promotion decisions; it also mandates that companies undergo annual audits for potential bias inside the machine. Enforcement begins on July 5."
It feels like this is a good move.
https://qz.com/americas-first-law-regulating-ai-bias-in-hiring-takes-e-1850602243
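Bias audits of hiring tools typically report an "impact ratio": each group's selection rate divided by the selection rate of the most-selected group. A minimal sketch of that calculation; the group names and counts below are invented, and real audits involve far more than this one metric:

```python
# Hypothetical sketch of the impact-ratio metric used in hiring bias audits.
# Group names and counts are invented for illustration.

def selection_rates(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Selection rate per group: selected / total applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's rate relative to the highest-rate group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 50, "group_b": 15}

rates = selection_rates(selected, applicants)   # group_a: 0.25, group_b: 0.10
ratios = impact_ratios(rates)                   # group_a: 1.0, group_b: 0.4
print(ratios)
```

In this invented example group_b is selected at 40% of group_a's rate, which is the kind of disparity an annual audit would have to surface.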
As you know, the text-based generative AIs tend to have the same biases that are baked into our society. It stands to reason, then, that the AI image generators would also be biased.
VPN Mentor published an article where they looked into this question. They tested four AI image generators for racial and gender bias.
The TL;DR is that all four are biased, just to different degrees. The article includes two summary tables; you can read more at:
https://www.vpnmentor.com/blog/ai-generated-images-research/
"When the means intended to remove discrimination begins perpetuating it, we have algorithm bias."
One of the common challenges with AI is its ability to develop biases. But what does that mean and how do we fix it?
Really good article that explains the problem and gives some good examples of how to address some of the issues.
My favorite quote:
"Blaming the algorithm transfers responsibility and perpetuates the problem."
https://www.competitionpolicyinternational.com/can-we-get-the-bias-out-of-our-ai/
#AI #GenerativeAI #AIBias #Ethics: "Technochauvinism is a kind of bias that says that technology or technological solutions are superior. You see a lot of technochauvinism in the current rhetoric around artificial intelligence. People say things like, “This new wave of AI is going to be transformative, and everything’s going to be different.”
And honestly, people have been saying that for the entirety of the internet revolution, which we’re more than 30 years into now. The internet is not young and hip and new. The internet is middle-aged. We can make more balanced decisions about it now. And we need to pay attention to the rhetoric that people are using about technology, because each new technology trend is not going to change everything fundamentally."
@Wolven Ugh. With all this #AIBias , I'm a tad concerned that almost every company I see is jumping on the #GenerativeAI bandwagon.
"Paying too much attention to 'extinction from AI' and not doing enough about these current problems is like constructing fallout shelters at the expense of feeding the population that is going hungry right now." Great story by Esther Ajao.
#ai #aiextinction #aibias #aiapocalypse
Practical Example of Political Bias in LLMs and the Framing of Solutions from a USA Lens
https://emergentagi.blogspot.com/2023/06/practical-example-of-political-bias-in.html
#AIEthics #AISafety #AIBias #Economics
Was in a bricks-and-mortar independent bookstore yesterday - and impulse-bought Australian journalist Tracey Spicer's new book, 'Man-made: how the bias of the past is being built into the future'. She had me at the intro –
#aibias #intersectionalfeminism #bookstadon
Current AI tools like #ChatGPT exhibit many types of bias. But one can wonder whether they already offer more to people who never had the chance to study than to the more privileged.
#chatgpt #GPT #gpt4 #aibias #Science #equality #equity
"States and municipalities are eyeing restrictions on the use of artificial intelligence-based bots to find, screen, interview, and hire job candidates because of privacy and bias issues. Some states have already put laws on the books." By Lucas Mearian
https://www.computerworld.com/article/3691819/legislation-to-rein-in-ais-use-in-hiring-grows.html
#ChatGPT #AIBias #AI #LLM #ResumeScreening #Bias #Privacy #Computerworld
Looks like these officials need to take a paws and rethink their approach to AI. I, for one, welcome our new robot overlords as long as they don't discriminate against meow-norities. #CatGodOfMischief #AIbias #CatPuns
Just a few excerpts from my article about #AIgenerativeTools and #aiart, which touches on why it's important to keep #humanartists involved and on the dangers of #aibias
You can read the rest here: https://open.substack.com/pub/henryneilsen/p/ai-doesnt-care-that-means-we-have-f9e?utm_source=share&utm_medium=android