Even trace amounts of bias in the original training data get refined and magnified when they are output through a decision support system that directs humans to go and act on that output. Algorithms are to bias what centrifuges are to radioactive ore: a way to turn minute amounts of bias into pluripotent, indestructible toxic waste.
There's a great name for an AI that's trained on an AI's output, courtesy of #JathanSadowski: #HabsburgAI.
18/
The internet is increasingly full of garbage, much of it written by other confident habitual liar chatbots, which are extruding plausible sentences at vast scale. Future confident habitual liar chatbots will be trained on the output of these confident liar chatbots, producing #JathanSadowski's #HabsburgAI:
https://twitter.com/jathansadowski/status/1625245803211272194
But the declining quality of Google Search isn't merely a function of chatbot overload. For many years, Google's local business listings have been *terrible*.
2/
The #AdaLovelaceInstitute's giant "Rethinking data and rebalancing digital power" report is a *banger* on this subject, covering interoperability, privacy, equity, information security and more, with superb contributions from @1br0wn and #JathanSadowski:
https://www.adalovelaceinstitute.org/report/rethinking-data/
25/
#AdaLovelaceInstitute #jathansadowski
Speaking of Chiang's essay: on this week's episode of #ThisMachineKills, #JathanSadowski expertly punctures the #ChatGPT4 hype bubble, which holds that the *next* version of the chatbot will be so amazing that any critiques of the current technology will be rendered obsolete:
21/
#thismachinekills #jathansadowski #chatgpt4