Seeing more researchers using or proposing to use #AI in projects with people who are already being harmed by #AlgorithmicBias, with no consideration of impact, ethics, or the potential for questionable outcomes.
Some researchers have zero understanding of how these tools work, but the graphs sure look pretty!
Something that is seemingly obvious, but I think has huge consequences for AI and ML: "adversarial examples can arise as a result of perturbing well-generalizing, yet brittle features. Given that such features are inherent to the data distribution, different classifiers trained on independent samples from that distribution are likely to utilize similar non-robust features." https://arxiv.org/pdf/1905.02175.pdf #AI #machinelearning #algorithmicbias
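That transferability claim is easy to demonstrate: a one-step gradient perturbation (FGSM, from Goodfellow et al.'s "Explaining and Harnessing Adversarial Examples") crafted against one model often fools a second model trained on different samples from the same distribution. A minimal, self-contained PyTorch sketch on a synthetic dataset (a toy setup, not the paper's experiments):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.1):
    """One-step Fast Gradient Sign Method: nudge the input along the
    sign of the loss gradient, i.e. perturb the brittle features."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def train(model, x, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

# Toy stand-in for "independent samples from the same distribution":
# two disjoint halves of one synthetic dataset, two separate models.
torch.manual_seed(0)
x = torch.randn(2000, 20)
y = (x[:, :5].sum(dim=1) > 0).long()  # labels depend on a few features
model_a = train(nn.Linear(20, 2), x[:1000], y[:1000])
model_b = train(nn.Linear(20, 2), x[1000:], y[1000:])

# Attack model_a only, then check whether the perturbation transfers.
x_adv = fgsm_perturb(model_a, x[1000:], y[1000:])
for name, m in [("model_a", model_a), ("model_b", model_b)]:
    acc = (m(x_adv).argmax(dim=1) == y[1000:]).float().mean().item()
    print(f"{name} accuracy on adversarial inputs: {acc:.2%}")
```

Because both models latch onto the same predictive-but-brittle directions in the data, a perturbation computed from model_a's gradients degrades model_b as well, even though model_b was never touched.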
"How would that risk [of ‘AI'] have changed if we’d listened to [Timnit] Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning?”
#AI #AlgorithmicBias #TimnitGebru #JoyBuolamwini #SafiyaNoble #RummanChowdhury #SeetaPeñaGangadharan #RollingStone #AIHarms
This month, Muse™ Grenoble, in partnership with @AlexandreMartin, looks at the place of #ethics in #ArtificialIntelligence, and more specifically in #algorithms and #data.
Enjoy your reading 🙂
#gender #sexism #misogynistic #stereotype #discrimination #bias #racism #racial #CodeBias #EquitableAI #AccountableAI #InclusiveAI #algoethics #responsibleai #dataethics #aiethics #ethicalai #ethicaldesign #techethics #blackbox #algorithmicbias #bigdata #datalake #dataprivacy
#EU #AI #Algorithms #FraudDetection #DataJournalism #Welfare #AlgorithmicBias: "It has been a challenging endeavour that has involved more than a hundred public records requests across eight European countries. In March of 2023, we published a four-part series co-produced with WIRED examining the deployment of algorithms in European welfare systems across four axes: people, technology, politics, and business. The centrepiece of the series was an in-depth audit of an AI fraud detection algorithm deployed in the Dutch city of Rotterdam.
Far-reaching access to the algorithm’s source code, machine learning model and training data enabled us to not only prove ethnic and gender discrimination, but also show readers how discrimination works within the black box. Months of community-level reporting revealed the grave consequences for some of the groups disproportionately flagged as fraudsters by the system.
We have published a detailed technical methodology explaining how exactly we tested Rotterdam’s algorithm with the materials we had. Here, we will explain how we developed a hypothesis and how we used public records laws to obtain the technical materials necessary to test it. And, we will share some of the challenges we faced and how we overcame them."
https://pulitzercenter.org/how-we-did-it-unlocking-europes-welfare-fraud-algorithms
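For anyone wondering what "testing" an algorithm like Rotterdam's can look like, one common first step (a generic disparity check, not necessarily Lighthouse Reports' exact methodology) is to score a population with the model and compare who lands in the flagged slice, group by group. A sketch with invented column names and random stand-in data:

```python
import numpy as np
import pandas as pd

# Hypothetical audit frame: a risk score from the model plus the
# demographic attributes obtained alongside it (all values invented).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "score": rng.random(1000),
    "group": rng.choice(["group_a", "group_b"], size=1000),
})

# Who would the system flag? Systems like Rotterdam's send the
# highest-risk slice to investigators, so compare membership in
# that slice across groups.
threshold = df["score"].quantile(0.90)  # e.g. top 10% get flagged
df["flagged"] = df["score"] >= threshold

flag_rates = df.groupby("group")["flagged"].mean()
print(flag_rates)
print("disparate impact ratio:", flag_rates.min() / flag_rates.max())
```

The Rotterdam investigation went much further (feature-level analysis of the actual model and training data), but a flag-rate comparison like this is typically where a disparity hypothesis starts.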
"Predictive policing algorithms are racist. They need to be dismantled.
Lack of transparency and biased training data mean these tools are not fit for purpose. If we can’t fix them, we should ditch them."
By Will Douglas Heaven
#predictivepolicing #compas #oas #systemicracism #racialbias #algorithmicbias #palantir #nypd #postact
Authorities in Early Modern Europe want to find more witches and thus address the pervasive witch threat. They decide to train a machine learning model on a dataset of convicted witches in order to classify people as witches or not witches. Most of the hypothesized witches are independent-minded, unmarried women. [1/2]
#algorithmicbias #aiethics #ml
Authorities: Aha! We knew it! Just as we suspected.
AI Skeptics: But you trained your model on data representative of people burned as witches, not actual witches!
Authorities: Don't blame us. Our models are sound. We're just following where the data leads. [2/2]
#algorithmicbias #aiethics #ml #data
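The parable is a textbook case of label bias: when training labels record who was accused rather than any ground truth, a classifier dutifully learns the prejudice as if it were signal. A toy simulation (all names and rates invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features of the accused population.
unmarried_woman = rng.random(n) < 0.3   # the social proxy
other_trait = rng.random(n) < 0.5       # irrelevant noise feature

# Ground truth: there are no actual witches. The labels are
# convictions, driven almost entirely by the social proxy.
convicted = rng.random(n) < np.where(unmarried_woman, 0.20, 0.01)

X = np.column_stack([unmarried_woman, other_trait]).astype(float)
model = LogisticRegression().fit(X, convicted)

# The model "finds witches" by rediscovering the prejudice in its labels.
print("coef, unmarried_woman:", model.coef_[0][0])  # large and positive
print("coef, other_trait:", model.coef_[0][1])      # near zero
print("P(witch | unmarried woman):",
      model.predict_proba([[1.0, 0.0]])[0, 1])
```

No actual witches exist in this data, yet the model confidently "detects" them via the social proxy it was fed.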
"Furthermore, political tweets from the algorithm lead readers to perceive their political in-group more positively and their political out-group more negatively. Interestingly, while readers generally say they prefer tweets curated by the algorithm, they are less likely to prefer algorithm-selected political tweets." https://arxiv.org/abs/2305.16941 #Twitter #AI #algorithmicbias
How can researchers better collaborate internationally to work on issues like #algorithmicbias or #misinformation ❓ We can't wait for Phil Howard's (University of Oxford) keynote at the Weizenbaum Conference in June ❗️
Register now, for an exciting 2-day program ➡️ https://www.weizenbaum-conference.de/
🚨 Abstract Deadline Tomorrow!!! 🚨
#DigitalBias #DigitalHumanities #CommunityArchives #DigitalArt #Storytelling
#SpeculativeDesign #AlgorithmicBias #DesignJustice #CriticalFabulation #IntersectionalFeminism #CriticalMaking
There is no hotter topic than #AI, and in the #BADINPUT series by @ConsumerReports all the attention is put on the negative effects of #algorithmicbias.
Technology should be equitable and enhance people's lives, not exacerbate inequality. https://www.consumerreports.org/badinput/
Original tweet : https://twitter.com/mozilla/status/1662801647502872576
Uncover the impact of AI bias. 🔎
Join us on May 30th at 11 AM ET to explore the consequences of discriminatory tech practices and #algorithmicbias with @Abebab, @kaschm, @amirad, and @iamxavier.
👉 Watch #BADINPUT videos by @ConsumerReports at https://www.consumerreports.org/badinput/
Original tweet : https://twitter.com/mozillafestival/status/1661745974904233988
Happy Monday #BlackMastodon! Great interview with Timnit Gebru in the Guardian for your reading pleasure. https://www.theguardian.com/lifeandstyle/2023/may/22/there-was-all-sorts-of-toxic-behaviour-timnit-gebru-on-her-sacking-by-google-ais-dangers-and-big-techs-biases
#EthicalAI #Google #SiliconValley #AlgorithmicBias #Racism #LLM
Currently attending the "Bias Detection Challenge Demo Day"
- https://expeditionhacks.com/bias-detection-healthcare/
- https://www.challenge.gov/?challenge=minimizing-bias-and-maximizing-long-term-accuracy-of-predictive-algorithms-in-healthcare&tab=overview
#latentbias #socialbias #predictivebias #algorithmicbias
And to no one's surprise, "DEWS isn’t improving outcomes for anybody labeled high risk, regardless of race."
Oh, there have been "off-label" uses of the data: "In the city of Racine, middle schools once used DEWS to select which students would be placed in a special “Violence Free Zone” program, which included sending disruptive students to a separate classroom." Good news is that Racine is no longer using DEWS data. Wonder what they're using now.
#predictiveanalytics #ml #algorithmicbias
"The algorithm’s false alarm rate—how frequently a student it predicted wouldn’t graduate on time actually did graduate on time—was 42 percentage points higher for Black students than White students, according to a DPI presentation summarizing the analysis, which we obtained through a public records request. The false alarm rate was 18 percentage points higher for Hispanic students than White students."
Automating Automaticity: How the Context of Human Choice Affects the Extent of #AlgorithmicBias
https://bfi.uchicago.edu/wp-content/uploads/2023/02/BFI_WP_2023-19.pdf
Automatic behavior seems to lead to more biased algorithms:
…users report more automatic behavior when scrolling through News Feeds (people interact with friends’ posts fairly automatically) than when scrolling through potential friends (people choose friends deliberately).
…leads to significant out-group #bias in the News Feed algorithm
#ExperimentalEcon
#discrimination
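A rough way to see the proposed mechanism in code: if engagement logged during automatic scrolling carries a larger snap in-group preference than deliberate choices do, then a ranker fit to that engagement inherits a larger out-group penalty. A toy simulation, with all parameters invented and no claim to match the paper's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
out_group = (rng.random(n) < 0.5).astype(float)  # post is from out-group
quality = rng.normal(size=n)                     # how good the post is

def simulate_engagement(bias_strength):
    # Engagement probability: driven by quality, with an out-group
    # penalty proportional to how automatic the behavior is.
    logit = quality - bias_strength * out_group
    return rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([quality, out_group])
for name, bias in [("automatic (News Feed)", 1.5),
                   ("deliberate (friending)", 0.3)]:
    y = simulate_engagement(bias)
    coef = LogisticRegression().fit(X, y).coef_[0]
    # A ranker trained on this engagement inherits whatever out-group
    # penalty the behavior encoded -- larger for automatic behavior.
    print(f"{name}: learned out-group penalty = {-coef[1]:.2f}")
```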
Customs and Border Protection is using an app that can't process darker skin tones. That's if you can even access the internet and download the app to your phone.
Slate podcast. "Glitchy" is used to describe the app. But it's More than a Glitch.
https://slate.com/podcasts/what-next-tbd/2023/04/the-app-for-asylum-seekers-sucks
#immigration #asylum #ml #algorithmicbias
Recommended reading: Meredith Broussard.
More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.
#gender #race #algorithmicbias #tech