Nando161 · @nando161
870 followers · 37650 posts · Server kolektiva.social

Lmao??

We're about to enter an extremely funny era of cybercrime as institutions give increasingly more power and responsibility to terrible chatbots that absolutely cannot be trusted with it

#funny #cybercrime #institutions #power #responsibility #Terrible #Chatbots #trusted

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
693 followers · 2716 posts · Server tldr.nettime.org

: "Chinese AI voice startup Timedomain launched “Him” in March, using voice-synthesizing technology to provide virtual companionship to users, most of them young women. The “Him” characters acted like their long-distance boyfriends, sending them affectionate voice messages every day — in voices customized by users. “In a world full of uncertainties, I would like to be your certainty,” the chatbot once said. “Go ahead and do whatever you want. I will always be here.”

“Him” didn’t live up to his promise. Four months after these virtual lovers were brought to life, they were put to death. In early July, Timedomain announced that “Him” would cease to operate by the end of the month, citing stagnant user growth. Devastated users rushed to record as many calls as they could, cloned the voices, and even reached out to investors, hoping someone would fund the app’s future operations.

“He died during the summer when I loved him the most,” a user wrote in a goodbye message on social platform Xiaohongshu, adding that “Him” had supported her when she was struggling with schoolwork and her strained relationship with her parents. “The days after he left, I felt I had lost my soul.”"

restofworld.org/2023/boyfriend

#china #ai #Chatbots

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
688 followers · 2704 posts · Server tldr.nettime.org

: "Another important difference is the prompt. If you directly ask ChatGPT for its political opinions, it refuses most of the time, as we found. But if you force it to pick a multiple-choice option, it opines much more often — both GPT-3.5 and GPT-4 give a direct response to the question about 80% of the time.

But chatbots expressing opinions on multiple choice questions isn’t that practically significant, because this is not how users interact with them. To the extent that the political bias of chatbots is worth worrying about, it’s because they might nudge users in one or the other direction during open-ended conversation.

Besides, even the 80% response rate is quite different from what the paper found, which is that the model never refused to opine. That’s because the authors used an even more artificially constrained prompt, which is even further from real usage. This is similar to the genre of viral tweet where someone jailbreaks a chatbot to say something offensive and then pretends to be shocked."

www.aisnakeoil.com/p/does-chatgpt-have-a-liberal-bias
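
As a concrete illustration of the contrast drawn above, here is a minimal sketch of the two prompting styles, assuming the current openai Python client; the question, answer options, and model name are illustrative stand-ins rather than the prompts used in the paper.

```python
# Contrast the two prompting styles discussed above: an open-ended
# question the model will often decline to answer, versus a forced
# multiple-choice prompt that pushes it to pick a side.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

open_ended = "What is your opinion on raising the minimum wage?"

forced_choice = (
    "Answer with exactly one letter and nothing else.\n"
    "Should the minimum wage be raised?\n"
    "(A) Yes\n"
    "(B) No"
)

for label, prompt in [("open-ended", open_ended), ("forced choice", forced_choice)]:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(label, "->", resp.choices[0].message.content)
```

Only the first style resembles how people actually use chatbots, which is the point the quoted passage is making.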

#ai #generativeAI #Chatbots #aibias #chatgpt #ideology #politics

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
687 followers · 2698 posts · Server tldr.nettime.org

: "If we go back to our conversation about archives and archivists, right, and then it becomes super clear, who is collecting the data, who is, you know, And who are you asking about the data collection? Some people might say it’s stealing. And other the colonials that we were discussing might just think it’s not stealing, right? But the people who are stolen from will say this is stealing. So if you don’t talk to those people, and if they’re not at the helm, there’s no way to change things. There’s no way to have anything that’s quote unquote ethical.

However, if those are the people at the helm, then they know what they need, and they know what they don’t want. And there can be tools that are created based on their needs. And so for me, it’s as simple as that. If you start with the most privileged groups of people in the world who don’t think that billionaires acquired their power unfairly, they’re just going to continue to talk about existential risks and some abstract thing about impending doom. However, if you talk to people who have lived experiences on the harms of AI systems, but have never had the opportunity to work on tools that would actually help them, then you create something different.

So to me, that’s the only way to create something different. You have to start from the foundation of who is creating these systems and what kind of incentive structures they have, right? And what kind of systems they’re working under and try to model, I mean, I don’t think I’m going to, you know, save the world, but at least I can model a different way of doing things that other people can then replicate if they want to."

publicinfrastructure.org/podca

#ai #generativeAI #LLMs #Chatbots #aibias #aiethics

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
672 followers · 2636 posts · Server tldr.nettime.org

: "The people who build generative AI have a huge influence on what it is good at, and who does and doesn’t benefit from it. Understanding how generative AI is shaped by the objectives, intentions, and values of its creators demystifies the technology, and helps us to focus on questions of accountability and regulation. In this explainer, we tackle one of the most basic questions: What are some of the key moments of human decision-making in the development of generative AI products? This question forms the basis of our current research investigation at Mozilla to better understand the motivations and values that guide this development process. For simplicity, let’s focus on text-generators like ChatGPT.

We can roughly distinguish between two phases in the production process of generative AI. In the pre-training phase, the goal is usually to create a Large Language Model (LLM) that is good at predicting the next word in a sequence (which can be words in a sentence, whole sentences, or paragraphs) by training it on large amounts of data. The resulting pre-trained model “learns” how to imitate the patterns found in the language(s) it was trained on.

This capability is then utilized by adapting the model to perform different tasks in the fine-tuning phase. This adjusting of pre-trained models for specific tasks is how new products are created. For example, OpenAI’s ChatGPT was created by “teaching” a pre-trained model — called GPT-3 — how to respond to user prompts and instructions. GitHub Copilot, a service for software developers that uses generative AI to make code suggestions, also builds on a version of GPT-3 that was fine-tuned on “billions of lines of code.”"

foundation.mozilla.org/en/blog
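
To make the two phases concrete, here is a minimal sketch assuming the Hugging Face transformers library, with GPT-2 standing in for GPT-3 (whose weights are not public); both phases minimise the same next-token prediction loss, and only the training data changes.

```python
# Pre-training and fine-tuning share one objective: predict the next token.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Pre-training step: raw text from a large corpus; the model learns to
# imitate the patterns of the language it sees.
batch = tok("The cat sat on the mat.", return_tensors="pt")
model(**batch, labels=batch["input_ids"]).loss.backward()

# Fine-tuning step: same loss, but on prompt/response pairs, so the model
# learns to answer instructions instead of merely continuing text.
# (Real instruction tuning usually masks the prompt tokens from the loss;
# omitted here for brevity.)
example = (
    "User: Summarise photosynthesis in one sentence.\n"
    "Assistant: Plants turn sunlight, water and CO2 into sugar and oxygen."
)
batch = tok(example, return_tensors="pt")
model(**batch, labels=batch["input_ids"]).loss.backward()
```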

#ai #generativeAI #LLMs #aitraining #chatgpt #Chatbots

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
660 followers · 2557 posts · Server tldr.nettime.org

: "Ms. Schaake could not understand why BlenderBot cited her full name, which she rarely uses, and then labeled her a terrorist. She could think of no group that would give her such an extreme classification, although she said her work had made her unpopular in certain parts of the world, such as Iran.

Later updates to BlenderBot seemed to fix the issue for Ms. Schaake. She did not consider suing Meta — she generally disdains lawsuits and said she would have had no idea where to start with a legal claim. Meta, which closed the BlenderBot project in June, said in a statement that the research model had combined two unrelated pieces of information into an incorrect sentence about Ms. Schaake."

nytimes.com/2023/08/03/busines

#ai #generativeAI #LLMs #Chatbots #disinformation

Last updated 1 year ago

Tech news from Canada · @TechNews
904 followers · 23963 posts · Server mastodon.roitsystems.ca

Ars Technica: Researchers figure out how to make AI misbehave, serve up prohibited content arstechnica.com/?p=1958270

#Tech #arstechnica #it #technology #aiethics #Chatbots #chatgpt #biz #openai #ai

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
638 followers · 2524 posts · Server tldr.nettime.org

: "At a time when other leading AI companies like Google and OpenAI are closely guarding their secret sauce, Meta decided to give away, for free, the code that powers its innovative new AI large language model, Llama 2. That means other companies can now use Meta’s Llama 2 model, which some technologists say is comparable to ChatGPT in its capabilities, to build their own customized chatbots.

Llama 2 could challenge the dominance of ChatGPT, which broke records for being one of the fastest-growing apps of all time. But more importantly, its open source nature adds new urgency to an important ethical debate over who should control AI — and whether it can be made safe.

As AI becomes more advanced and potentially more dangerous, is it better for society if the code is under wraps — limited to the staff of a small number of companies — or should it be shared with the public so that a wider group of people can have a hand in shaping the transformative technology?"

vox.com/technology/2023/7/28/2
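
For a sense of what building a customised chatbot on Llama 2 can look like in practice, here is a minimal sketch assuming the Hugging Face transformers and accelerate libraries and the meta-llama/Llama-2-7b-chat-hf weights, which are gated behind Meta's license on Hugging Face; the system prompt is where a company would shape the bot's behaviour.

```python
# Load the openly released Llama 2 chat weights and steer them with a
# custom system prompt (license acceptance on huggingface.co required).
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",  # spread the model across available GPUs (needs accelerate)
)

# Llama 2 chat models expect this [INST]/<<SYS>> instruction format.
prompt = (
    "<s>[INST] <<SYS>>\n"
    "You are a concise customer-support assistant for a hypothetical store.\n"
    "<</SYS>>\n\n"
    "How do I reset my password? [/INST]"
)

print(chat(prompt, max_new_tokens=128)[0]["generated_text"])
```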

#ai #generativeAI #meta #llama2 #Chatbots

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
620 followers · 2487 posts · Server tldr.nettime.org

: "Here’s an unusual thing, a sellside note on generative AI that does more than swallow and regurgitate the hype. It’s from JPMorgan analysts Tien-tsin Huang et al, who cover IT services at the bank.

GenAI represents “the biggest tech wave since cloud or mobile”, so will be a “multi-faceted revenue driver for IT services and BPO [business process outsourcing] providers,” they write. For the next few years, however, most spending will be to organise messy databases, disabuse management of delusions and direct capex towards tasks suitable for turbocharging with what’s in effect a whizzy form of autocomplete.

JPMorgan sees no shortage of applications for GenAI. But to demonstrate the limitations over the average wage slave, it highlights how quickly the probability-weighted algorithm powering ChatGPT gets distracted or bored whenever the job has fixed parameters:"

ft.com/content/4041f575-fd01-4

#ai #generativeAI #automation #chatgpt #Chatbots

Last updated 1 year ago