Alex Fedorov | 10xSWE · @tdd_fellow
3 followers · 390 posts · Server mas.to

🔄 Positive Feedback Loops aren't always 'positive.' They can spiral up or spiral down. Are you caught in a vicious cycle or enjoying a virtuous loop? Know the difference.

#softwareengineering #FeedbackLoops

Last updated 1 year ago

Alex Fedorov | 10xSWE · @tdd_fellow
3 followers · 371 posts · Server mas.to

🔄 A Reinforcing Loop can be a virtuous circle or a vicious cycle. Trying to sprint too fast in development? Get ready for a bug apocalypse that slows you down again.

#FeedbackLoops #softwarequality

Last updated 1 year ago

Alex Fedorov | 10xSWE · @tdd_fellow
3 followers · 352 posts · Server mas.to

🔄 If everything in your system is pushing in the same direction ('s' labels in a causal loop diagram), be prepared for momentum—but also overshooting your goals. Balance is key.
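The overshoot dynamic can be sketched in a few lines of Python (the stock, gain, and goal values are arbitrary illustrations, not from the post): when every link in the loop pushes the same direction, growth compounds each step and sails past a fixed target.

```python
def simulate(stock=1.0, gain=0.3, goal=50.0, steps=20):
    """Reinforcing loop: the stock compounds by `gain` each step."""
    history = [stock]
    for _ in range(steps):
        stock += gain * stock  # every link pushes the same direction
        history.append(stock)
    # how many steps the system spends past the goal (the overshoot)
    past_goal = sum(1 for x in history if x > goal)
    return history, past_goal

history, past_goal = simulate()
print(f"final stock: {history[-1]:.1f}, steps past goal: {past_goal}")
```

With nothing balancing the loop, the stock does not settle at the goal; it blows straight through it and keeps compounding.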

#FeedbackLoops #softwareengineering

Last updated 1 year ago

Owen King · @OwenK
321 followers · 478 posts · Server fosstodon.org

"It is tempting to rely on crowdsourcing to validate LLM outputs or to create human gold-standard data for comparison. But what if crowd workers themselves are using LLMs, e.g., in order to increase their productivity, and thus their income, on crowdsourcing platforms?"

techcrunch.com/2023/06/14/mech

#aiethics #FeedbackLoops #llms

Last updated 1 year ago

Owen King · @OwenK
321 followers · 477 posts · Server fosstodon.org

A recent paper investigates generative AI trained on the output of generative AI. Nice summary here:
venturebeat.com/ai/the-ai-feed

The authors insist the solution is training on human-created data. But this underestimates the problem. From now on, human predictions are also polluted by our encounters with artificial predictions.

The potential for self-fulfilling prophecies runs deep. And, as I've argued, they can be hard to catch. doi.org/10.1007/s10677-022-103

#FeedbackLoops #aiethics

Last updated 1 year ago

Owen King · @OwenK
273 followers · 267 posts · Server fosstodon.org

"the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself."

Ted Chiang's New Yorker article from Thursday is the best articulation I've seen so far of the downsides threatened by LLMs.

The lossy compression interpretation of LLMs is spot on.

newyorker.com/tech/annals-of-t

#FeedbackLoops #llms

Last updated 2 years ago

Owen King · @OwenK
264 followers · 219 posts · Server fosstodon.org

My new article on self-fulfilling prophecies and feedback loops, co-authored with Mayli Mertens, was featured today in New Work In Philosophy.

newworkinphilosophy.substack.c

#philosophy #ethics #aiethics #FeedbackLoops

Last updated 2 years ago

Owen King · @OwenK
253 followers · 196 posts · Server fosstodon.org

@boris_steipe Thanks for pointing me to the watermarking plan!

That addresses one part of the challenge of keeping future training data unpolluted by AI-generated text: It allows exclusion by way of diction and punctuation.

But the deeper worry, I think, has to do with the content/meaning of the AI-generated text. If someone rephrased the ChatGPT output before publishing it, then that content would still be out there for future training, yielding feedback loops.

#chatgpt #FeedbackLoops

Last updated 2 years ago

Owen King · @OwenK
253 followers · 196 posts · Server fosstodon.org

The immediate need is for people to start marking LLM output. If a paragraph, or even part of it, was generated by an LLM, it should be marked in a machine-readable way. Then it can be excluded from future training sets. Just something simple like this: "Composition assisted by AI"

Obviously not everyone will be honest about their use of AI-generated text, and not everyone training models will abide by the exclusion. Still, it's a start.
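A minimal sketch of what such a machine-readable marker could look like, assuming a hypothetical `data-ai-assisted` HTML attribute (no such standard exists): a training pipeline could then drop marked paragraphs before they enter the corpus.

```python
import re

# "data-ai-assisted" is a made-up attribute name, not an existing
# standard; the point is that any agreed machine-readable marker works.
MARKED = re.compile(r'<p[^>]*\bdata-ai-assisted="true"[^>]*>.*?</p>', re.DOTALL)

def filter_for_training(html: str) -> str:
    """Drop paragraphs explicitly marked as AI-assisted."""
    return MARKED.sub("", html)

page = ('<p>Written by a human.</p>'
        '<p data-ai-assisted="true">Composition assisted by AI.</p>')
cleaned = filter_for_training(page)
print(cleaned)  # only the human-written paragraph survives
```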

4/4

#llm #generativeAI #aiethics #FeedbackLoops

Last updated 2 years ago

Owen King · @OwenK
253 followers · 196 posts · Server fosstodon.org

Our dire need is to uncover the differences. We need research on systematic differences between LLM output and human-generated language. That can't be just NLP; it can't be just statistical. The question, after all, is how statistically generated language differs from human-generated language.

Thorough research would involve linguistics, philosophy, sociology, literary theory, psychology, etc. No single discipline is well-positioned.

3/4

#llm #generativeAI #aiethics #FeedbackLoops

Last updated 2 years ago

Owen King · @OwenK
253 followers · 196 posts · Server fosstodon.org

To begin, we do know LLM output is composed differently from human-generated language.

First, LLMs are no more than stochastic parrots: dl.acm.org/doi/10.1145/3442188

Second, in general, ML interprets humans via the statistical stance, not the intentional stance. doi.org/10.1007/s10676-019-095

But, despite these differences, we don't know how exactly human and LLM outputs differ.

2/4

#llm #stochasticparrots #generativeAI #aiethics #FeedbackLoops

Last updated 2 years ago

Owen King · @OwenK
253 followers · 196 posts · Server fosstodon.org

I'm worrying about ChatGPT and feedback loops. ChatGPT responses are all over the public web. People are already using it to help write journal articles. theverge.com/2023/1/5/23540291 These will be sources of new training data for LLMs.

What happens when LLMs train on the output of LLMs? Well, any tendencies and biases of LLMs will intensify. But we're not even in a position to anticipate the details. So, what to do?
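A toy illustration of how tendencies can intensify, not a claim about any real model: fit a Gaussian to the previous generation's samples, and let each "model" sample slightly conservatively (temperature below 1, a common decoding choice). The spread of the data shrinks generation after generation.

```python
import random
import statistics

random.seed(42)
TEMPERATURE = 0.9  # < 1: each generation samples a bit conservatively

data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # "human" data
spread = [statistics.stdev(data)]
for _ in range(5):  # five generations of models training on model output
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    data = [random.gauss(mu, TEMPERATURE * sigma) for _ in range(1000)]
    spread.append(statistics.stdev(data))

print(" -> ".join(f"{s:.2f}" for s in spread))  # diversity shrinks each step
```

Each generation narrows toward the mode of the last, a crude stand-in for how biases compound when models never see fresh human data.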

🧵 1/4

#FeedbackLoops #generativeAI #chatgpt #llms #aiethics

Last updated 2 years ago

FOSSlife · @FOSSlife
864 followers · 1026 posts · Server mastodon.fosslife.org

Voices: Tim Cochran, Technical Director at ThoughtWorks, outlines a framework for maximizing developer effectiveness, identifying key feedback loops for optimization. buff.ly/365EO4t

#FeedbackLoops #performance #teams #DevOps #softwaredevelopment #foss

Last updated 4 years ago