Coding Gardener · @cdavies
67 followers · 356 posts · Server mastodon.scot

Seeing more researchers using or proposing to use AI in projects with people who are already being harmed by algorithmic bias, with no consideration of impact, ethics, or questionable outcomes.

Some researchers have zero understanding of how these tools work, but the graphs sure look pretty!

#algorithmicbias #ai

Last updated 1 year ago

casey durfee · @caseydurfee
0 followers · 17 posts · Server masto.ai

Something that is seemingly obvious, but I think has huge consequences for AI and ML: "adversarial examples can arise as a result of perturbing well-generalizing, yet brittle features. Given that such features are inherent to the data distribution, different classifiers trained on independent samples from that distribution are likely to utilize similar non-robust features." arxiv.org/pdf/1905.02175.pdf

#ai #machinelearning #algorithmicbias

Last updated 1 year ago

Andrew Shields · @AndrewShields
102 followers · 1258 posts · Server mas.to

"How would that risk [of ‘AI'] have changed if we’d listened to [Timnit] Gebru? What if we had heard the voices of the women like her who’ve been waving the flag about AI and machine learning?”

rollingstone.com/culture/cultu

#AIHarms #RollingStone #seetapenagangadharan #rummanchowdhury #safiyanoble #joybuolamwini #TimnitGebru #algorithmicbias #ai

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
617 followers · 2477 posts · Server tldr.nettime.org

: "It has been a challenging endeavour that has involved more than a hundred public records requests across eight European countries. In March of 2023, we published a four-part series co-produced with WIRED examining the deployment of algorithms in European welfare systems across four axes: people, technology, politics, and business. The centrepiece of the series was an in-depth audit of an AI fraud detection algorithm deployed in the Dutch city of Rotterdam.

Far-reaching access to the algorithm’s source code, machine learning model and training data enabled us to not only prove ethnic and gender discrimination, but also show readers how discrimination works within the black box. Months of community-level reporting revealed the grave consequences for some of the groups disproportionately flagged as fraudsters by the system.

We have published a detailed technical methodology explaining how exactly we tested Rotterdam’s algorithm with the materials we had. Here, we will explain how we developed a hypothesis and how we used public records laws to obtain the technical materials necessary to test it. And, we will share some of the challenges we faced and how we overcame them."

pulitzercenter.org/how-we-did-
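The linked methodology explains the actual tests. As a rough, hypothetical sketch of the kind of disparity check such an audit starts from (column names and data invented here, not the journalists' actual code):

```python
# Hypothetical sketch of a basic disparity check, in the spirit of the
# audit described above. Column names and values are made up.
import pandas as pd

df = pd.DataFrame({
    "risk_score": [0.91, 0.34, 0.77, 0.12, 0.88, 0.45],
    "flagged":    [1, 0, 1, 0, 1, 0],             # top-ranked cases sent to investigation
    "group":      ["A", "B", "A", "B", "A", "B"],  # e.g. a protected attribute
})

# Mean risk score and flag rate per group; large gaps between groups
# are the starting point for the discrimination testing described above.
print(df.groupby("group")[["risk_score", "flagged"]].mean())
```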

#eu #ai #algorithms #frauddetection #datajournalism #welfare #algorithmicbias

Last updated 1 year ago

David Mortensen · @davidmortensen
644 followers · 1462 posts · Server sigmoid.social

Authorities: Aha! We knew it! Just as we suspected.
AI Skeptics: But you trained your model on data representative of people burned as witches, not actual witches!
Authorities: Don't blame us. Our models are sound. We're just following where the data leads. [2/2]

#algorithmicbias #aiethics #ml #data

Last updated 1 year ago

David Mortensen · @davidmortensen
642 followers · 1438 posts · Server sigmoid.social

Authorities in Early Modern Europe want to find more witches and thus address the pervasive witch threat. They decide to train a machine learning model on a dataset of convicted witches in order to classify people as witches or not witches. Most of the hypothesized witches are independent-minded, unmarried women. [1/2]
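The satire describes a real failure mode, label bias: the training labels record who was convicted, not who is actually a witch, so a model fit to them reproduces the conviction pattern. A toy sketch with entirely made-up numbers:

```python
# Toy illustration of the label-bias problem the thread satirizes: the
# labels encode who was *convicted*, never "actual witch", so a model
# fit to them launders the persecution pattern as prediction.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
unmarried = rng.integers(0, 2, n)  # the attribute authorities already target
# Biased labels: conviction is far more likely for the targeted group.
convicted = (rng.random(n) < 0.02 + 0.10 * unmarried).astype(int)

# "Model": predicted risk = empirical conviction rate given the attribute.
risk = {g: convicted[unmarried == g].mean() for g in (0, 1)}
print(f"predicted 'witch risk' if married: {risk[0]:.3f}, if unmarried: {risk[1]:.3f}")
# The model is "sound" in that it fits the data; the data encode the
# bias, so following where the data leads means following the bias.
```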

#algorithmicbias #aiethics #ml

Last updated 1 year ago

Verwechslungsgefährte · @dichotomiker
183 followers · 2643 posts · Server dresden.network

"Furthermore, political tweets from the algorithm lead readers to perceive their political in-group more positively and their political out-group more negatively. Interestingly, while readers generally say they prefer tweets curated by the algorithm, they are less likely to prefer algorithm-selected political tweets." arxiv.org/abs/2305.16941

#twitter #ai #algorithmicbias

Last updated 1 year ago

Weizenbaum-Institut · @Weizenbaum_Institut
1198 followers · 131 posts · Server social.bund.de

How can researchers better collaborate internationally to work on issues like algorithmic bias or misinformation ❓ We can't wait for Phil Howard's (University of Oxford) keynote at the Weizenbaum Conference in June ❗️
Register now for an exciting two-day program ➡️ weizenbaum-conference.de/

#algorithmicbias #misinformation

Last updated 1 year ago

Mozilla News & Updates · @moznews
804 followers · 1664 posts · Server noc.social

There is no hotter topic than AI, and in the Bad Input series by @ConsumerReports all the attention is put on the negative effects of algorithmic bias.

Technology should be equitable and enhance people's lives, not exacerbate inequality. consumerreports.org/badinput/

Original tweet : twitter.com/mozilla/status/166

#algorithmicbias #badinput #ai

Last updated 1 year ago

Mozilla News & Updates · @moznews
798 followers · 1648 posts · Server noc.social

Uncover the impact of AI bias. 🔎

Join us on May 30th at 11 AM ET to explore the consequences of discriminatory tech practices and algorithmic bias with @Abebab, @kaschm, @amirad, and @iamxavier.

👉 Watch videos by @ConsumerReports at consumerreports.org/badinput/

Original tweet : twitter.com/mozillafestival/st

#badinput #algorithmicbias

Last updated 1 year ago

Wanda Whitney · @bibliotecaria
873 followers · 745 posts · Server blacktwitter.io

And to no one's surprise, "DEWS isn’t improving outcomes for anybody labeled high risk, regardless of race."

Oh, there have been "off-label" uses of the data: "In the city of Racine, middle schools once used DEWS to select which students would be placed in a special “Violence Free Zone” program, which included sending disruptive students to a separate classroom." The good news is that Racine is no longer using DEWS data. Wonder what they're using now.

#predictiveanalytics #ml #algorithmicbias

Last updated 2 years ago

Wanda Whitney · @bibliotecaria
873 followers · 745 posts · Server blacktwitter.io

"The algorithm’s false alarm rate—how frequently a student it predicted wouldn’t graduate on time actually did graduate on time—was 42 percentage points higher for Black students than White students, according to a DPI presentation summarizing the analysis, which we obtained through a public records request. The false alarm rate was 18 percentage points higher for Hispanic students than White students."

#algorithmicbias #ml

Last updated 2 years ago

Automating Automaticity: How the Context of Human Choice Affects the Extent of Algorithmic Bias
bfi.uchicago.edu/wp-content/up
automatic behavior seems to lead to more biased algorithms:
…users report more automatic behavior when scrolling through News Feeds (people interact with friends’ posts fairly automatically) than when scrolling through potential friends (people choose friends deliberately).
…leads to significant out-group bias in the News Feed algorithm

#discrimination #ExperimentalEcon #bias #algorithmicbias

Last updated 2 years ago

Wanda Whitney · @bibliotecaria
835 followers · 674 posts · Server blacktwitter.io

Customs and Border Protection is using an app that can't process darker skin tones. That's if you can even access the internet and download the app to your phone.

A Slate podcast uses "glitchy" to describe the app. But it's More than a Glitch.
slate.com/podcasts/what-next-t

#immigration #asylum #ml #algorithmicbias

Last updated 2 years ago

Wanda Whitney · @bibliotecaria
835 followers · 672 posts · Server blacktwitter.io

Recommended reading: Meredith Broussard.

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.

a.co/d/5QlQFOB

#gender #race #algorithmicbias #tech

Last updated 2 years ago