Miguel Afonso Caetano · @remixtures
735 followers · 2865 posts · Server tldr.nettime.org

: "An artificial intelligence company, whose founder Forbes recently included in a 30 Under 30 list, promises to use machine learning to convert clients’ 2D illustrations into 3D models. In reality the company, called Kaedim, uses human artists for “quality control.” According to two sources with knowledge of the process interviewed by 404 Media, Kaedim at one point often used human artists to make the models. One of the sources said workers at one point produced the 3D designs whole cloth themselves, without the help of machine learning at all.

The news pulls back the curtain on a hyped startup and is an example of how AI companies can sometimes overstate the capabilities of their technology. Like other AI startups, Kaedim wants to use AI to do tedious labor that is currently being done by humans. In this case, that labor is 3D modeling, a time-consuming job that video game companies are already outsourcing to studios in countries like China."

404media.co/kaedim-ai-startup-

#ai #fauxtomation #3dmodelling

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
581 followers · 2319 posts · Server tldr.nettime.org

: "Google’s Bard artificial intelligence chatbot will answer a question about how many pandas live in zoos quickly, and with a surfeit of confidence.

Ensuring that the response is well-sourced and based on evidence, however, falls to thousands of outside contractors from companies including Appen Ltd. and Accenture Plc, who can make as little as $14 an hour and labor with minimal training under frenzied deadlines, according to several contractors, who declined to be named for fear of losing their jobs.

The contractors are the invisible backend of the generative AI boom that’s hyped to change everything. Chatbots like Bard use computer intelligence to respond almost instantly to a range of queries spanning all of human knowledge and creativity. But to improve those responses so they can be reliably delivered again and again, tech companies rely on actual people who review the answers, provide feedback on mistakes and weed out any inklings of bias.

It’s an increasingly thankless job. Six current Google contract workers said that as the company entered an AI arms race with rival OpenAI over the past year, the size of their workload and the complexity of their tasks increased. Without specific expertise, they were trusted to assess answers in subjects ranging from medication doses to state laws. Documents shared with Bloomberg show convoluted instructions that workers must apply to tasks with deadlines for auditing answers that can be as short as three minutes."

bnnbloomberg.ca/google-s-ai-ch

#ai #generativeAI #Chatbots #google #bard #fauxtomation

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
500 followers · 1996 posts · Server tldr.nettime.org

: "The reality residing in the machine is alienated from the reality in which the human operates. The unavoidable process of technological evolution is driven by the introduction of nonlinear causality, allowing machines to deal with contingency. A learning machine is one that can discern contingent events such as noise and failure. It can distinguish unorganized inputs from necessary ones. And by interpreting contingent events, the learning machine improves its model of decision-making. But even here the machine needs humans to distinguish right decisions from wrong ones in order to continue improving. In developing countries, a new type of cheap labor employs humans to tell machines whether results are correct, be they facial recognition scans or ChatGPT responses. This new form of labor, which exploits workers who toil invisibly behind the machines we interact with, is often overlooked by very general criticisms of capitalism that lament insufficient automation. This is the weakness of today’s Marxian critique of technology."

e-flux.com/journal/137/544816/

#ai #generativeAI #chatgpt #automation #fauxtomation

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
478 followers · 1826 posts · Server tldr.nettime.org

: "If the problem with “AI” (neither “artificial,” nor “intelligent”) is that it is about to become self-aware and convert the entire solar system to paperclips, then we need a moonshot to save our species from these garish harms.

If, on the other hand, the problem is that AI systems just suck and shouldn’t be trusted to fly drones, or drive cars, or decide who gets bail, or identify online hate-speech, or determine your creditworthiness or insurability, then all those AI companies are out of business.

Take away every consequential activity through which AI harms people, and all you’ve got left is low-margin activities like writing SEO garbage, lengthy reminiscences about “the first time I ate an egg” that help an omelette recipe float to the top of a search result. Sure, you can put 95 percent of the commercial illustrators on the breadline, but their total wages don’t rise to one percent of the valuation of the big AI companies.

For those sky-high valuations to remain intact until the investors can cash out, we need to think about AI as a powerful, transformative technology, not as a better autocomplete."

doctorow.medium.com/ayyyyyy-ey

#ai #generativeAI #hype #bigtech #fauxtomation

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
449 followers · 1665 posts · Server tldr.nettime.org

: "The type of work Mathenge performed has been crucial for bots like ChatGPT and Google’s Bard to function and to feel so magical. But the human cost of the effort has been widely overlooked. In a process called “Reinforcement Learning from Human Feedback,” or RLHF, bots become smarter as humans label content, teaching them how to optimize based on that feedback. A.I. leaders, including OpenAI’s Sam Altman, have praised the practice’s technical effectiveness, yet they rarely talk about the cost some humans pay to align the A.I. systems with our values. Mathenge and his colleagues were on the business end of that reality.
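The RLHF labeling loop described in the excerpt can be caricatured in a few lines: human labelers compare candidate answers, and their preferences nudge a score the bot uses to pick answers next time. This is a toy sketch under my own assumptions (the answer strings and the +1/-1 scoring are invented), not any vendor's actual pipeline:

```python
# Toy caricature of RLHF-style preference labeling: each human
# judgment raises the score of the preferred answer and lowers the
# rejected one, so the bot "optimizes based on that feedback".
def label(scores, preferred, rejected):
    """Record one human judgment between two candidate answers."""
    scores[preferred] += 1
    scores[rejected] -= 1

def pick(scores):
    """Return the answer with the best human-feedback score."""
    return max(scores, key=scores.get)

scores = {"a sourced, careful answer": 0, "a confident fabrication": 0}
label(scores, "a sourced, careful answer", "a confident fabrication")
print(pick(scores))  # "a sourced, careful answer"
```

The point of the caricature is that the "smartness" lives in the accumulated human judgments, not in the selection rule, which is trivial.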

Mathenge earned a degree from Nairobi’s Africa Nazarene University in 2018 and quickly got to work in the city’s technology sector. In 2021, he applied for work with Sama, an A.I. annotation service that’s worked for companies like OpenAI. After Sama hired Mathenge, it put him to work labeling LiDAR images for self-driving cars. He’d review the images and pick out people, other vehicles, and objects, helping the models better understand what they encountered on the road.

When that project wrapped, Mathenge was transferred to work on OpenAI’s models. And there, he encountered the disturbing texts. OpenAI told me it believed it was paying its Sama contractors $12.50 per hour, but Mathenge says he and his colleagues earned approximately $1 per hour, and sometimes less. Spending their days steeped in depictions of incest, bestiality, and other explicit scenes, the team began growing withdrawn."

slate.com/technology/2023/05/o

#ai #generativeAI #LLMs #openai #kenya #fauxtomation

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
294 followers · 634 posts · Server tldr.nettime.org

: "AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn't "artificial" and it's not "intelligent." "Machine learning" doesn't learn. On this week's Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete – not our new robot overlord.

We all know that autocomplete is a decidedly mixed blessing. Like all statistical inference tools, autocomplete is profoundly conservative – it wants you to do the same thing tomorrow as you did yesterday (that's why "sophisticated" ad retargeting shows you ads for shoes in response to your search for shoes). If the word you type after "hey" is usually "hon" then the next time you type "hey," autocomplete will be ready to fill in your typical following word – even if this time you want to type "hey stop texting me you freak.""
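The "same thing tomorrow as you did yesterday" conservatism of autocomplete can be made concrete with a toy bigram model: suggest whatever word most often followed the previous one in past text. A minimal sketch with invented training sentences, not any product's actual implementation:

```python
# Toy bigram autocomplete: count which word historically followed
# which, then always suggest the most frequent follower.
from collections import Counter, defaultdict

def train(corpus):
    """Count next-word frequencies over a list of past sentences."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def suggest(model, word):
    """Suggest the historically most frequent next word, if any."""
    if model[word]:
        return model[word].most_common(1)[0][0]
    return None

history = ["hey hon", "hey hon", "hey there"]
model = train(history)
print(suggest(model, "hey"))  # "hon" -- yesterday's habit, repeated
```

However you feel today, the model can only ever hand back the statistical mode of your past: that is the conservatism Doctorow is pointing at.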

pluralistic.net/2023/03/09/aut

#ai #generativeAI #chatgpt #bigtech #fauxtomation

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
291 followers · 603 posts · Server tldr.nettime.org

: "When you talk about AI in your advertising, the FTC may be wondering, among other things:

- Are you exaggerating what your AI product can do? Or even claiming it can do something beyond the current capability of any AI or automated technology?

- Are you promising that your AI product does something better than a non-AI product?

- Are you aware of the risks?

- Does the product actually use AI at all?"

ftc.gov/business-guidance/blog

#ftc #usa #ai #advertising #snakeoil #fauxtomation

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
260 followers · 464 posts · Server tldr.nettime.org

: "There’s a certain cruel irony in the fact that as the highest-profile technology in years makes its debut, the ones best suited to keep it on the rails are also the most precarious at the companies that need them. That’s no accident. A chatbot is a sort of magic trick; for the illusion to work properly, the assistants curled up inside the box must remain hidden from the audience, their contribution unremarked.

While Google and Microsoft want you to forget that they exist, for the workers, forgetting doesn’t come so easily."

latimes.com/business/technolog

#ai #generativeAI #google #bard #microsoft #contentmoderation #fauxtomation #ghostwork #wageslavery

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
259 followers · 460 posts · Server tldr.nettime.org

: "OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We therefore think a lot about the behavior of AI systems we build in the run-up to AGI, and the way in which that behavior is determined.

Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address. We’ve also seen a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT.

Below, we summarize:

How ChatGPT’s behavior is shaped;
How we plan to improve ChatGPT’s default behavior;
Our intent to allow more system customization; and
Our efforts to get more public input on our decision-making."

openai.com/blog/how-should-ai-

#openai #ai #generativeAI #fauxtomation #fakeai #chatgpt

Last updated 2 years ago

Miguel Afonso Caetano · @remixtures
198 followers · 231 posts · Server tldr.nettime.org

Surprise, Surprise!!!

: “To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance.”

time.com/6247678/openai-chatgp
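The premise the excerpt describes — feed a detector labeled examples of toxic text so it can flag toxicity "in the wild" — can be sketched as a toy bag-of-words scorer. This is illustrative only: the example texts and the scoring rule are my own assumptions, not OpenAI's or Facebook's actual systems, and the human labor is precisely the `labeled_examples` input.

```python
# Toy bag-of-words toxicity detector: learn from human-labeled
# examples which words signal toxic text, then flag new text when
# its words lean toxic overall.
from collections import Counter

def train(labeled_examples):
    """labeled_examples: list of (text, is_toxic) pairs from humans."""
    toxic, benign = Counter(), Counter()
    for text, text_is_toxic in labeled_examples:
        (toxic if text_is_toxic else benign).update(text.lower().split())
    return toxic, benign

def is_toxic(model, text):
    """Flag text whose words appeared more in toxic than benign examples."""
    toxic, benign = model
    score = sum(toxic[w] - benign[w] for w in text.lower().split())
    return score > 0

data = [("I will hurt you", True), ("have a nice day", False)]
model = train(data)
print(is_toxic(model, "hurt you"))  # True
```

Even in this caricature, the detector is only as good as its labeled data, which is why workers had to read the material the article describes.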

#openai #ai #chatgpt #contentmoderation #kenya #gigeconomy #fauxtomation

Last updated 2 years ago

Luk · @Luk
178 followers · 5100 posts · Server mamot.fr

RT @AntonioCasilli@twitter.com

In Japan, the convenience store chain Family Mart is deploying "autonomous" shelf-stocking robots… which are in reality operated by remote workers. europe1.fr/emissions/L-innovat

🐦🔗: twitter.com/AntonioCasilli/sta

#fauxtomation

Last updated 4 years ago

Marko · @Marko
66 followers · 460 posts · Server mamot.fr

A piece by the brilliant journalist Astra Taylor about the ideology of automation and fauxtomation, and how critical feminists happen to have answers about such issues.

"The socialist feminist tradition is a powerful resource because it's centrally concerned with what work is—and in particular how capitalism lives and grows by concealing certain kinds of work, refusing to pay for it, and pretending it's not, in fact, work at all. "

logicmag.io/05-the-automation-

#automation #fauxtomation #digitallabour #feminists

Last updated 6 years ago