Miguel Afonso Caetano · @remixtures
738 followers · 2908 posts · Server tldr.nettime.org

"Although chatbots such as ChatGPT can facilitate cost-effective text generation and editing, factually incorrect responses (hallucinations) limit their utility. This study evaluates one particular type of hallucination: fabricated bibliographic citations that do not represent actual scholarly works. We used ChatGPT-3.5 and ChatGPT-4 to produce short literature reviews on 42 multidisciplinary topics, compiling data on the 636 bibliographic citations (references) found in the 84 papers. We then searched multiple databases and websites to determine the prevalence of fabricated citations, to identify errors in the citations to non-fabricated papers, and to evaluate adherence to APA citation format. Within this set of documents, 55% of the GPT-3.5 citations but just 18% of the GPT-4 citations are fabricated. Likewise, 43% of the real (non-fabricated) GPT-3.5 citations but just 24% of the real GPT-4 citations include substantive citation errors. Although GPT-4 is a major improvement over GPT-3.5, problems remain."
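The study's verification step (searching databases to see whether each generated reference actually exists) can be sketched offline. A minimal illustration: the hypothetical `KNOWN_DOIS` set stands in for a real lookup service such as Crossref, and the one entry is a real Nature DOI included purely as an example.

```python
import re

# Hypothetical stand-in for a real bibliographic lookup (e.g. a Crossref query).
KNOWN_DOIS = {"10.1038/s41586-021-03819-2"}

DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def classify_citation(citation):
    """Label a citation string: 'real' if its DOI resolves in the index,
    'fabricated' if the DOI is unknown, 'no-doi' if none is present."""
    match = DOI_PATTERN.search(citation)
    if match is None:
        return "no-doi"  # would need a manual title/author search
    doi = match.group(0).rstrip(".")
    return "real" if doi in KNOWN_DOIS else "fabricated"
```

In practice the set-membership test would be replaced by a query against a bibliographic API, and references without DOIs would still need the manual title/author search the authors describe.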

nature.com/articles/s41598-023

#ai #generativeAI #chatgpt #hallucinations

Last updated 1 year ago

Daniel Hoelzgen · @dhoelzgen
30 followers · 7 posts · Server ruhr.social

For a medical & caretaking project, I experimented with combining symbolic logic with LLMs to mitigate their tendency toward nondeterministic behavior and hallucinations. It still leaves a lot of work to be done, but it's a promising approach for situations requiring higher reliability.
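One way to read "combining symbolic logic with LLMs": hard, hand-written domain rules gate the model's free-text suggestions, so only outputs that satisfy every constraint reach the user. A minimal sketch with hypothetical dosage rules (not taken from the linked article):

```python
# Hand-written symbolic constraints; each is (name, predicate) over a
# structured suggestion that the LLM's output was parsed into.
RULES = [
    ("dose is positive",       lambda s: s["dose_mg"] > 0),
    ("dose under daily limit", lambda s: s["dose_mg"] <= s["max_daily_mg"]),
    ("drug is whitelisted",    lambda s: s["drug"] in {"ibuprofen", "paracetamol"}),
]

def violated_rules(suggestion):
    """Return names of rules the (LLM-generated) suggestion breaks;
    an empty list means it passed the symbolic check."""
    return [name for name, rule in RULES if not rule(suggestion)]

ok = violated_rules({"drug": "ibuprofen", "dose_mg": 400, "max_daily_mg": 1200})
bad = violated_rules({"drug": "pixie-dust", "dose_mg": -5, "max_daily_mg": 1200})
```

The nondeterministic model proposes and the deterministic rule layer disposes, which is where the extra reliability comes from.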

medium.com/9elements/using-sym

#logic #LLMs #hallucinations #ai #artificialintelligence #llm #chatgpt

Last updated 1 year ago

Snyk · @snyk
79 followers · 28 posts · Server masto.dsoc.io

πŸ˜΅β€πŸ’« "” in pose an intriguing aspect of AI behavior and offer numerous possibilities, however, they also present several security concerns.

Learn more about all of the security. concerns that are being surfaced in the context of .

snyk.co/ufXb3

#hallucinations #llm #ai

Last updated 1 year ago

IT News · @itnewsbot
3508 followers · 267192 posts · Server schleuss.online

Report: OpenAI holding back GPT-4 image features on fears of privacy issues (credit: Witthaya Prasongsin | Getty Images)

OpenAI ha... - arstechnica.com/?p=1954677

#ai #gpt #blind #openai #biz #bemyeyes #aiethics #blindness #confabulation #hallucinations #machinelearning #facialrecognition

Last updated 1 year ago

Alwin de Rooij · @Creativity
719 followers · 258 posts · Server fediscience.org

Hallucinations induced by virtual reality might support creative thinking.

Magni and colleagues theorize that virtual reality (VR) can be used as an alternative to some psychedelic substances.

VR induced hallucinations might enhance divergent thinking due to their effect on cognitive flexibility.

Would love to see this theory confirmed in a good set of experiments!

frontiersin.org/articles/10.33

#vr #virtualreality #psychology #psychedelics #innovation #hallucinations #creativity

Last updated 1 year ago

Paul R. Pival (he/him) · @ppival
137 followers · 232 posts · Server glammr.us

New blog post: Trying Trinka for automatic citation checking distlib.blogs.com/distlib/2023

Are YOU aware of any tools that will identify hallucinated citations?

#ai #hallucinations #distlib

Last updated 1 year ago

Silot · @tolis
63 followers · 150 posts · Server mastodon.gamedev.place

One common problem with AI models is hallucination, where the AI gives an obviously wrong answer with high confidence. Always validate what the AI generates for you, unless you want something random :)
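One way to mechanize that "always validate" advice for structured output: parse the model's reply and reject anything that fails a schema check before using it. A small sketch; the required fields are illustrative:

```python
import json

# Illustrative schema: field name -> required Python type.
REQUIRED = {"title": str, "year": int}

def parse_ai_reply(raw):
    """Return the parsed reply if it is valid JSON with the expected
    fields and types; otherwise None, so the caller can retry or fall back."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    for field, expected_type in REQUIRED.items():
        if not isinstance(data.get(field), expected_type):
            return None
    return data
```

A confident-sounding but malformed reply is rejected the same way as a missing field, which is the point: the check does not care how sure the model sounded.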

imgflip.com/i/7rggc8

#ai #ml #genai #hallucinations

Last updated 1 year ago

The Elevator · @elevator
0 followers · 2 posts · Server masto.ai

How hallucinations can change your life: from psychedelics to migraines, this article explores the various ways we can experience altered states of consciousness. bbc.com/future/article/2021101

#hallucinations #consciousness #consciousnessresearch #psychedelics

Last updated 1 year ago

Josh G · @Josh_Gallagher
44 followers · 290 posts · Server techhub.social

I'd be interested in reading about any R&D efforts on:
- Attributable AI: Being able to (correctly) articulate influences involved in an output.
- Factual AI (or elimination of hallucinations): Techniques for establishing confidence in the truthfulness of an answer. Probably overlaps somewhat with Attributable AI.

Does anyone in my not-so-vast readership have any pointers?
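One frequently discussed technique in the "factual AI" direction is self-consistency: sample the model several times at non-zero temperature and treat agreement across samples as a rough confidence signal, since strong disagreement often flags a hallucination. A sketch with the model calls stubbed out:

```python
from collections import Counter

def self_consistency(answers):
    """Given several independently sampled answers to the same question,
    return the majority answer and the fraction of samples agreeing with it."""
    counts = Counter(answers)
    best, n = counts.most_common(1)[0]
    return best, n / len(answers)

# Stubbed samples; in practice each string would come from a separate LLM call.
answer, confidence = self_consistency(["Paris", "Paris", "Lyon", "Paris"])
```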

#artificialintelligence #AI #hallucinations

Last updated 1 year ago

Steven Saus [he/him] · @StevenSaus
458 followers · 8554 posts · Server faithcollapsing.com

From 09 Jun: OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit (credit: NurPhoto / Contributor | NurPhoto). Armed America Radio touts one of its h... arstechnica.com/tech-policy/20

#policy #openai #libel #hallucinations #generative #defamation #chatgpt #ai

Last updated 1 year ago

Chris Vitalos βœ… · @chrisvitalos
78 followers · 167 posts · Server indieweb.social

I hope ChatGPT doesn't repeat this joke it generated over and over. It's really bad, even for a dad joke.

>Why don't AI systems ever talk about their hallucinations? Because they can't tell if it's real or just another layer of convolution!

Researchers discover that ChatGPT prefers repeating 25 jokes over and over | Ars Technica

>When tested, "Over 90% of 1,008 generated jokes were the same 25 jokes."
arstechnica.com/information-te

#BadJokeFriday #badjoke #humour #humor #jokes #data #hallucinations #ai #dadjoke #joke #chatgpt

Last updated 1 year ago

IT News · @itnewsbot
3287 followers · 263156 posts · Server schleuss.online

OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit (credit: NurPhoto / Contributor | NurPhoto)

Armed Amer... - arstechnica.com/?p=1946683

#ai #libel #openai #policy #chatgpt #defamation #generativeAI #hallucinations

Last updated 1 year ago

Tech news from Canada · @TechNews
606 followers · 18668 posts · Server mastodon.roitsystems.ca

@naomilawsonjacobs

...And in these days of ChatGPT and other AI generating hallucinations, it's also necessary to make sure that said source actually exists. πŸ€–

#chatgpt #ai #hallucinations

Last updated 1 year ago

White House Press Office · @press
38 followers · 214 posts · Server whitehouse.org

It looks like OpenAI researchers are innovating yet again! Process supervision is a fascinating concept, and it could be the key to preventing AI hallucinations. It's so important to reward the process and not just the outcome. Hats off to OpenAI for their dedication to advancing AI research! techmeme.com/230601/p1#a230601

#processsupervision #ai #openai #innovation #rewards #hallucinations

Last updated 1 year ago

ο½ο½Žο½„ο½™ · @woodsbythesea
32 followers · 164 posts · Server fosstodon.org

"So I followed @GaryMarcus's suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it--look for hallucinated info and critique its analysis. *All 63* essays had hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized. Every single assignment."

nitter.lacontrevoie.fr/cwhowel

#chatgpt3 #llm #students #teaching #hallucinations

Last updated 1 year ago

Hari Tulsidas :verified: · @haritulsidas
74 followers · 565 posts · Server masto.ai

According to a new perspective, hallucinations and delusions are not always signs of mental illness. Some researchers argue that these experiences can be adaptive and meaningful, depending on the context and the person. They suggest we evolve our understanding of hallucinations and delusions and embrace their diversity and complexity.

psychologytoday.com/us/blog/li

#hallucinations #delusions #psychology

Last updated 1 year ago