🔄 Positive Feedback Loops aren't always 'positive.' They can spiral up or spiral down. Are you caught in a vicious cycle or enjoying a virtuous loop? Know the difference. #FeedbackLoops #SoftwareEngineering
🔄 A Reinforcing Loop can be a virtuous circle or a vicious cycle. Trying to sprint too fast in development? Get ready for a bug apocalypse that slows you down again. #SoftwareQuality #FeedbackLoops
🔄 If everything in your system is pushing in the same direction ('s' labels), be prepared for momentum—but also overshooting your goals. Balance is key. #SoftwareEngineering #FeedbackLoops
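The spiral-up/spiral-down dynamic in the posts above can be sketched in a few lines of toy code (the `run_loop` function and its gain values are invented for illustration, not a model of any real process):

```python
# A reinforcing loop in miniature: the identical structure spirals up
# (virtuous) or damps out, depending only on the loop gain.
def run_loop(state, gain, steps):
    """Feed the state back into itself each step, scaled by `gain`.

    gain > 1: each pass amplifies the last one (virtuous or vicious,
    depending on what `state` measures); gain < 1: the loop damps out.
    """
    history = [state]
    for _ in range(steps):
        state = state * gain
        history.append(state)
    return history

print(run_loop(1.0, 1.5, 5))  # spirals up: 1.0, 1.5, ..., 7.59375
print(run_loop(1.0, 0.5, 5))  # damps out: 1.0, 0.5, ..., 0.03125
```

Same loop, same 's'-labeled links pushing in one direction; only the gain decides whether you get momentum or overshoot.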
"It is tempting to rely on crowdsourcing to validate LLM outputs or to create human gold-standard data for comparison. But what if crowd workers themselves are using LLMs, e.g., in order to increase their productivity, and thus their income, on crowdsourcing platforms?"
https://techcrunch.com/2023/06/14/mechanical-turk-workers-are-using-ai-to-automate-being-human/amp/
#aiethics #FeedbackLoops #llms
A recent paper investigates the #feedbackLoops that arise when generative AI is trained on the output of generative AI. Nice summary here:
https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/
The authors insist the solution is training on human-created data. But this underestimates the problem. From now on, human predictions, too, are polluted by our encounters with artificial predictions.
The potential for self-fulfilling prophecies runs deep. And, as I've argued, they can be hard to catch. https://doi.org/10.1007/s10677-022-10359-9
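A toy illustration of the collapse mechanism the paper describes (the corpus and its numbers are made up; this sketches only the statistical intuition, not the paper's actual experiments):

```python
import random

random.seed(42)

# Toy "training data": a few common words plus a long tail of words
# seen exactly once each (the tail is what collapse eats first).
corpus = ["the"] * 500 + ["model"] * 200 + ["loop"] * 50 + \
         [f"rare{i}" for i in range(250)]

def train_and_generate(data, n):
    """'Train' by fitting the empirical word distribution, then
    'generate' n words by sampling from that distribution."""
    return random.choices(data, k=n)

vocab_sizes = [len(set(corpus))]
for generation in range(10):
    corpus = train_and_generate(corpus, len(corpus))
    vocab_sizes.append(len(set(corpus)))

# Rare words vanish generation by generation; the vocabulary can
# never grow, because each generation only resamples the last one.
print(vocab_sizes)
```

Each generation's distribution is a sample of the previous one, so tail mass is lost and never recovered: the toy analogue of a blurrier and blurrier Web.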
"the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself."
Ted Chiang's New Yorker article from Thursday is the best articulation I've seen so far of the downsides of #feedbackloops threatened by #LLMs.
The lossy compression interpretation of LLMs is spot on.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
My new article on self-fulfilling prophecies and feedback loops, co-authored with Mayli Mertens, was featured today in New Work In Philosophy.
https://newworkinphilosophy.substack.com/p/owen-c-king-unc-chapel-hill-and-mayli
#philosophy #ethics #aiethics #FeedbackLoops
@boris_steipe Thanks for pointing me to the #ChatGPT watermarking plan!
That addresses one part of the challenge of keeping future training data unpolluted by AI-generated text: It allows exclusion by way of diction and punctuation.
But the deeper worry, I think, has to do with the content/meaning of the AI-generated text. If someone rephrased the ChatGPT output before publishing it, then that content would still be out there for future training, yielding #feedbackLoops.
The immediate need is for people to start marking #LLM output. If a paragraph, or even part of it, was generated by an LLM, it should be marked in a machine-readable way. Then it can be excluded from future training sets. Just something simple like this: "Composition assisted by AI"
Obviously not everyone will be honest about their use of AI-generated text, and not everyone training models will abide by the exclusion. Still, it's a start.
#llm #generativeAI #aiethics #FeedbackLoops
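A minimal sketch of what machine-readable marking and exclusion could look like (the `AI_MARKER` string and both helper functions are purely hypothetical; no such standard exists):

```python
# Hypothetical marker for AI-assisted text, embedded as a comment so
# it is machine-readable but invisible to human readers.
AI_MARKER = "<!-- composition-assisted-by: AI -->"

def mark(text):
    """Attach the AI-assistance marker before publishing."""
    return AI_MARKER + "\n" + text

def build_training_set(documents):
    """Keep only documents that carry no AI-assistance marker."""
    return [d for d in documents if AI_MARKER not in d]

docs = ["Human-written paragraph.", mark("LLM-generated paragraph.")]
print(build_training_set(docs))  # only the human-written paragraph survives
```

Of course, this catches only honest markers, and rephrased LLM content slips through unmarked; it addresses the diction problem, not the content problem.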
Our dire need is to uncover the differences. We need research on systematic differences between #LLM output and human-generated language. That can't be just NLP; it can't be just statistical. The question, after all, is how statistically generated language differs from human-generated language.
Thorough research would involve linguistics, philosophy, sociology, literary theory, psychology, etc. No single discipline is well-positioned.
#llm #generativeAI #aiethics #FeedbackLoops
To begin, we do know #LLM output is composed differently from human-generated language.
First, LLMs are no more than #stochasticParrots https://dl.acm.org/doi/10.1145/3442188.3445922
Second, in general, ML interprets humans via the statistical stance, not the intentional stance. https://doi.org/10.1007/s10676-019-09512-3
But, despite these differences, we don't know how exactly human and LLM outputs differ.
#llm #stochasticparrots #generativeAI #aiethics #FeedbackLoops
I'm worrying about #feedbackLoops and #generativeAI. #ChatGPT responses are all over the public web. People are already using it to help write journal articles. https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper These will be sources of new training data for #LLMs.
What happens when LLMs train on the output of LLMs? Well, any tendencies and biases of LLMs will intensify. But we're not even in a position to anticipate the details. So, what to do?
#AIEthics 🧵 1/4
#FOSS Voices: Tim Cochran, Technical Director at ThoughtWorks, outlines a framework for maximizing developer effectiveness, identifying key feedback loops for optimization. https://buff.ly/365EO4t #SoftwareDevelopment #DevOps #teams #performance #FeedbackLoops