Inquiry Into Inquiry • Discussion 9
• http://inquiryintoinquiry.com/2023/08/19/inquiry-into-inquiry-discussion-9/
Re: Milo Gardner
• https://www.academia.edu/community/VBqzR5?c=Q4jJVy
MG: ❝Do you agree that Peirce was limited to bivalent logic?❞
Taking classical logic as a basis for reasoning is no more limiting than taking Dedekind cuts as a basis for constructing the real number line. For Peirce's relational approach to logic as semiotics the number of dimensions in a relation is more important than the number of values in each dimension. That is where 3 makes a difference over 2.
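To see concretely why 3 makes a difference over 2, here is a minimal Python sketch (the standard parity example, not anything specific to Peirce's text): two triadic relations over {0, 1} which are disjoint, yet share all three dyadic projections, so no assembly of dyadic data can tell them apart.
----------------------------------------------
from itertools import product

# Two triadic relations on {0, 1}: triples with even sum vs. odd sum.
L0 = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0}
L1 = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 1}

def projections(L):
    """The three dyadic projections of a triadic relation."""
    return ({(x, y) for x, y, z in L},
            {(x, z) for x, y, z in L},
            {(y, z) for x, y, z in L})

assert L0 != L1 and not (L0 & L1)          # distinct, in fact disjoint
assert projections(L0) == projections(L1)  # yet all dyadic data agree
----------------------------------------------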
#Peirce #Logic #Inquiry #Inference #Information #InformationFusion #Semiotics
#CategoryTheory #Compositionality #RelationTheory #TriadicRelationIrreducibility
Inquiry Into Inquiry • Discussion 8
• http://inquiryintoinquiry.com/2023/08/18/inquiry-into-inquiry-discussion-8/
Re: Milo Gardner
• https://www.academia.edu/community/Lbxjg5?c=yqXVog
MG: ❝Peirce sensed that bivalent syntax was superseded by trivalent syntax,
but never resolved that nagging question.❞
My Comment —
The main thing is not a question of syntax but a question of the mathematical models we use to cope with object realities and real objectives (pragmata). Signs, syntax, and systems of representation can make a big difference in how well they represent the object domain and how well they serve the purpose at hand, but they remain accessory to those objects and purposes.
#Peirce #Logic #Inquiry #Inference #Information #InformationFusion #Semiotics
#CategoryTheory #Compositionality #RelationTheory #TriadicRelationIrreducibility
Inquiry Into Inquiry • Discussion 7
• http://inquiryintoinquiry.com/2023/08/17/inquiry-into-inquiry-discussion-7/
Dan Everett has prompted a number of discussions on Facebook recently which touch on core issues in Peirce's thought — but threads ravel on and fray so quickly in that medium that one rarely gets a chance to fill out the warp. Not exactly at random, here's a loose thread I think may be worth the candle.
Re: Facebook • Daniel Everett
• https://www.facebook.com/permalink.php?story_fbid=pfbid0be89MXhhCm8rxahRn4PXif6HHSCmkdiUFfMZ3qS1mNqSzRzUWfqej5a8cyz8TcyJl&id=100093271525294
My Comment —
Compositionality started out as a well-defined concept, arising from the composition of mathematical functions, abstracted to the composition of arrows and functors in category theory, and generalized to the composition of binary, two-place, or dyadic relations. In terms of linguistic complexity it's associated with properly context-free languages. That all keeps compositionality on the dyadic side of the border in Peirce's universe. More recently the term has been volatilized to encompass almost any sort of information fusion, which is all well and good so long as folks make it clear what they are talking about, for which use the term “information fusion” would probably be sufficiently vague.
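As a minimal sketch of that well-defined dyadic notion (generic Python, with made-up relations): composition is the relative product of two sets of pairs, and function composition falls out as the special case.
----------------------------------------------
def compose(R, S):
    """Relative product of two dyadic relations given as sets of pairs."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

# A function is the special case of a relation with one output per input.
f = {(1, "a"), (2, "b")}
g = {("a", "X"), ("b", "Y")}
assert compose(f, g) == {(1, "X"), (2, "Y")}

# Note the closure: the composite of two dyadic relations is again dyadic,
# which is why composition by itself never crosses the border into triads.
----------------------------------------------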
#Peirce #Logic #Inquiry #Inference #Information #InformationFusion #Semiotics
#CategoryTheory #Compositionality #RelationTheory #TriadicRelationIrreducibility
Asking AI via chatGPT about pronouns and Marge Piercy's Woman on the Edge of Time
per AIdua, ad astra #compositionality #pronouns #SF
https://whyweshould.substack.com/p/asking-ai-via-chatgpt-about-pronouns?sd=pf
Do I mean that "mastery of language is sufficient for reasoning, decision making, and conditionality"?
Yes, I'll go out on a limb and say that I do – with some reservations. One is that #ChatGPT has not "mastered" language, but it has become very good at it – there is certainly scope for improvement. The other is that terms like "reasoning", "compositionality", etc. are themselves concepts, labels we apply to organize the large space of possible abstractions of language. They mean different things to different people in different contexts. A discussion of these terms may already be biased by its premises. A productive discussion is better off looking at actual examples, and then discussing whether or not each example fairly maps to the respective concepts.
So let's take a look at #compositionality, simply because it has been named as one of the aspects of thought that #LLMs lack and that are deemed to be critical on the path to #AGI.
Let's work off a recent definition cited by Frankland & Greene (2020, DOI: 10.1146/annurev-psych-122216-011829): "The principle of compositionality holds that the meaning of a complex expression is a function of the meaning of its parts and the way in which they are combined" (according to Frege); this gives rise to the idea of a #LanguageOfThought.
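As a toy sketch of the principle (arithmetic standing in for natural language here; the example is mine, not the paper's): the meaning of a complex expression is computed from a lexicon of part meanings plus a rule for each mode of combination.
----------------------------------------------
from dataclasses import dataclass

@dataclass
class App:
    """A complex expression: a mode of combination applied to parts."""
    op: str
    parts: tuple

lexicon = {"two": 2, "three": 3, "five": 5}       # meanings of the parts
combination = {"plus": lambda a, b: a + b,        # modes of combination
               "times": lambda a, b: a * b}

def meaning(e):
    """Frege-style semantics: recurse on the parts, combine by the rule."""
    if isinstance(e, str):
        return lexicon[e]
    return combination[e.op](*(meaning(p) for p in e.parts))

# "two plus (three times five)" gets its meaning bottom-up:
assert meaning(App("plus", ("two", App("times", ("three", "five"))))) == 17
----------------------------------------------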
In order to establish such compositionality, we can ask: is #ChatGPT able to infer consequences of a novel utterance from its components? We can also ask whether it can substitute a component, since that requires decomposition and recomposition. Let's give that a try.
----------------------------------------------
Human:
I want you to act as an Understander. As an Understander you examine a sentence, interpret it, answer questions, and change it to explore its meaning and changes of meaning. Here is the sentence:
"The bells rang out like laughter as the frog and the goat marched down the aisle – was there a note of ridicule in the sound?"
ChatGPT:
As an Understander, I interpret the sentence to mean that there were bells ringing loudly, resembling the sound of laughter, while a frog and a goat walked down an aisle. The sentence then asks whether there was a hint of mockery or contempt in the ringing of the bells.
Here are some possible questions that could be asked about this sentence:
What is the sound of the bells compared to?
Answer: The sound of the bells is compared to laughter.
Who is walking down the aisle?
Answer: A frog and a goat are walking down the aisle.
What is the tone or mood of the sentence?
Answer: The tone of the sentence could be interpreted as whimsical or surreal, but there is also an element of uncertainty or suspicion as the question at the end suggests that the sound of the bells may have had a negative connotation.
What might the bells represent?
Answer: The bells could be symbolic of celebration or joy, but they could also be symbolic of ridicule or mockery, depending on the context and interpretation of the sentence.
As an Understander, I can also change the sentence to explore its meaning and changes of meaning. Here are some possible variations:
"The bells rang out like thunder as the bride and groom walked down the aisle - did they feel intimidated by the sound?"
This sentence replaces the frog and goat with a bride and groom and changes the comparison of the sound of the bells to thunder. The question at the end suggests a different tone, one of concern or apprehension.
"The bells rang out like church bells as the procession made its way to the altar - did they signify a joyful occasion?"
This sentence replaces the comparison of the sound of the bells with a more specific reference to church bells and suggests that the occasion is a happy one.
"The bells rang out like a taunting chant as the opposing team took the field - were they meant to intimidate?"
This sentence changes the context entirely, replacing the aisle and procession with a sports game and suggesting that the sound of the bells is meant to taunt or intimidate the opposing team.
-----------------------------------------
This answer certainly checks my boxes on compositionality. It was asked in a fresh thread, i.e. the prompt formed the entire content of the context window. And the example is _not_ in the training data; I just made it up on the spot.
The bottom line is: apparently we do not need to encode higher-order aspects of thought explicitly; they emerge implicitly in large language models. But this is to be expected: language is not _what_ we think, language is _how_ we think. If the training data is large enough to provide a significant subset of human language, and the model is large enough, you get #emergence of mind-like abilities. How could you not?
This is actually a lesson that could have been learned in 2017, when AlphaZero (an algorithm that trained itself) surpassed the chess-playing abilities of Deep Blue (the last of the large expert systems). You can't really analyze how such abilities are represented in the network – they are distributed, and they have no need to partition neatly along our conceptual boundaries. And it would be silly to require the abilities to map neatly into our favourite conceptual space before we are even willing to acknowledge the ability exists.
In this sense it is truly unfortunate that the current perception focusses so much on misunderstanding what ChatGPT is (an LLM, not an AGI), how it works ("Schrödinger Facts", not necessarily truths), what it cannot do (e.g. counting characters and scanning meter – typically tasks that would be trivial for an AI trained on them), and how it does not align with this or that aspect of human thought.
I would really hope to see more discussion that recognizes such surprising emergent abilities (e.g. spatial abstractions like "on top of"), and through that contributes to something important and constructive: an exploration of what the limits of emergence in Large Language Models really are.
This is not an academic question.
It determines the horizon on which to expect actual AGI.
#chatgpt #llms #languageofthought #compositionality #agi #SentientSyllabus #emergence
Put together some speculations about #transformers and #compositionality https://dstrohmaier.com/transformer-speculations/
Follow-up #introduction since I didn’t include actual searchable tags! In real life I like jokes and roller derby but here I mainly talk about #nlproc #nlp #scienceofdl #trainingdynamics #deeplearning #losslandscapes #compositionality #generalization
#Introduction I’m a psycholinguist at #NTNU in beautiful #Trondheim. I have a background in #philosophy (#language, #logic, mathematics), logic (formal #semantics), and cognitive #neuroscience. I like to be where these disciplines meet: my work focuses on linguistic #meaning, its place in the architecture of language (#compositionality), whether we actually compute meaning (and whether it could be recreated in machines), and how that could work, in the brain and in interaction (#signaling). 1/2