Benjamin Han · @BenjaminHan
475 followers · 1317 posts · Server sigmoid.social

Cade Metz’s NYTimes article on Doug Lenat, featuring quotes from Ken Forbus and @garymarcus. The connection between the game Traveller and Lenat’s work is interesting and new to me.

Douglas Lenat, Who Tried to Make Computers More Human, Dies at 72 nytimes.com/2023/09/04/technol

#cyc #ai #GOFAI #reasoning #commonsense #knowledge

Last updated 1 year ago

Benjamin Han · @BenjaminHan
472 followers · 1301 posts · Server sigmoid.social

Is math just symbol pushing?

When Computers Write Proofs, What's the Point of Mathematicians? youtu.be/3l1RMiGeTfU?si=sQMFAK

#math #ai #reasoning #mathematics #proofs #generativeAI

Last updated 1 year ago

Benjamin Han · @BenjaminHan
471 followers · 1284 posts · Server sigmoid.social

Doug Lenat, founder of Cyc, passed away earlier this week. From Professor Ken Forbus:

"People in AI often don't give the Cyc project the respect it deserves. Whether or not you agree with an approach, understanding what has happened in different lines of work is important. The Cyc project was the first demonstration that symbolic representations and reasoning could scale to capture significant portions of commonsense…”

linkedin.com/posts/forbus_ai-k

#cyc #ai #KnowledgeGraphs #reasoning #nlp #nlproc

Last updated 1 year ago

gprimola$ :idle: · @giorgiolucas
49 followers · 672 posts · Server techhub.social

I want to share something with you.
My therapist once told me, regarding my relationship with my father:
“Like it or not, you have a relationship with your father. Everybody has a relationship with A father, even if they don’t have or never had a father. Having a father is part of everyone’s psyche, therefore you have a relationship with a father, or your father. The only thing you need to do is define this relationship in the way that works best for you. It doesn’t have to be ideal, it just needs to not jeopardize your life.”

#relationships #relationship #father #dad #daddy #therapy #psychology #emotions #son #thoughts #thinking #reasoning

Last updated 1 year ago

Answers in Reason · @AnswersInReason
31 followers · 186 posts · Server masto.nu

What is the Rationality Debate? by Answers In Reason: https://www.youtube.com/watch?v=80U_aafts1g #rationality

#godlessgranny #bias #highlight #highlights #fallacies #logic #reason #reasoning

Last updated 1 year ago

Benjamin Han · @BenjaminHan
449 followers · 1210 posts · Server sigmoid.social

Large degradations observed from LLMs when tasks are reframed into counterfactuals. Only basic syntax, logic, and music chords hold up decently with counterfactuals.

(see also: lnkd.in/gje_WkR3 )

Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, and Yoon Kim. 2023. Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks. arxiv.org/abs/2307.02477
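To make the reframing concrete, here is a minimal sketch of a counterfactual arithmetic task in the spirit of the paper; the specific base and numbers are my own illustration, not taken from the paper. The idea: the same addition task, posed in an unfamiliar base, separates models that compute from models that recite memorized base-10 facts.

```python
def add_in_base(x: str, y: str, base: int) -> str:
    """Add two non-negative integers written as digit strings in `base`,
    returning the sum as a digit string in the same base."""
    total = int(x, base) + int(y, base)  # interpret digits in the given base
    digits = []
    while True:
        total, r = divmod(total, base)   # peel off digits, least significant first
        digits.append(str(r))
        if total == 0:
            break
    return "".join(reversed(digits))

# In base 10, "27 + 62" is "89"; in base 9 the same digit strings denote
# 25 and 56, whose sum (81) is written "100" in base 9.
```

A model with a genuine addition procedure should answer both variants; one that degrades sharply in base 9 is likely pattern-matching on familiar base-10 examples.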

#LLMs #reasoning #counterfactual #paper #nlp #nlproc #generativeAI

Last updated 1 year ago

José A. Alonso · @Jose_A_Alonso
844 followers · 1753 posts · Server mathstodon.xyz

Backward reasoning in large language models for verification. ~ Weisen Jiang et al. arxiv.org/abs/2308.07758

#reasoning #LLMs

Last updated 1 year ago

PirateLSAT · @PirateLSAT
1 followers · 14 posts · Server esq.social

And that, my friends, is my guide for the very first steps when starting to prepare for the LSAT.

The next step is laying out the 3 sections (logical reasoning, reading, and analytical games) and explaining how to approach each one.

Then we can drill down into the various types of questions, passages, games, etc.

There are so many ways to analyze and understand the LSAT; it really is a fascinating and challenging exam!

Comments welcome!

#steps #starting #lsat #logical #reasoning #reading #analytical #games #passages #fascinating #exam #guide

Last updated 1 year ago

PirateLSAT · @PirateLSAT
1 followers · 12 posts · Server esq.social


Isolate the types of questions that give you consistent trouble and build up special approaches to them.

There are a few types of questions for each section; we'll cover them here on Mastodon.

Logical reasoning in particular has many types, but we need *actionable* information. Defining them is only the first step.

So, the types are grouped by the mental processes they require. There are only a few processes required for the LSAT; the challenge is to switch processes on cue :)

#starting #questions #section #mastodon #logical #reasoning #processes #mental #lsat

Last updated 1 year ago

PirateLSAT · @PirateLSAT
1 followers · 4 posts · Server esq.social

The best process for starting is usually logical reasoning first, then reading or games, according to which is harder. Don't put off the harder section to the end or you will stress. It's ok to get the easiest section out of the way first, but that just pushes the heavy lifting back.
Some may try to study all sections simultaneously, but that doesn't work for most students. Going for the hardest section first is also dangerous; it will lead to a lot of frustration.

#lsat #logical #reasoning #reading #games #harder #starting #Heavy #lifting #students #frustration

Last updated 1 year ago

José A. Alonso · @Jose_A_Alonso
841 followers · 1726 posts · Server mathstodon.xyz

GPT-4 can't reason. ~ Konstantine Arkoudas. preprints.org/manuscript/20230

#reasoning #ai #LLMs #gpt4

Last updated 1 year ago

José A. Alonso · @Jose_A_Alonso
833 followers · 1702 posts · Server mathstodon.xyz

GPT-3 aces tests of reasoning by analogy (Undergrads get beaten on questions like those that helped get them into college). ~ John Timmer. arstechnica.com/science/2023/0

#reasoning #gpt

Last updated 1 year ago

JBRoss · @jbross
21 followers · 500 posts · Server mstdn.party

"There is giant untapped potential in disagreement, especially if the disagreement is between two or more thoughtful people." — Ray Dalio

#raydalio #quote #quotes #potential #future #reasoning #earnest #thoughtful #commonality

Last updated 1 year ago

steve dustcircle ⍻ · @dustcircle
294 followers · 9481 posts · Server masto.ai

6 Proofs for God's Existence???
youtube.com/watch?v=8HoNJVNbcS

When we consider the most profound question of life, “Does God exist?” we should follow the evidence wherever it leads. In this video, Kyle Butt presents six "evidences" proving God’s existence, from the complexity and order of our universe to the morality, free will, and reasoning in humanity. And a former Christian responds.

#god #evidence #kylebutt #universe #morality #freewill #reasoning #humanity #christian

Last updated 1 year ago

Benjamin Han · @BenjaminHan
395 followers · 1067 posts · Server sigmoid.social

10/end

[4] linkedin.com/posts/benjaminhan

[5] Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab, and Bernhard Schölkopf. 2023. Can Large Language Models Infer Causation from Correlation? arxiv.org/abs/2306.05836

#paper #nlp #nlproc #codegeneration #causation #causalreasoning #reasoning #research

Last updated 1 year ago

Benjamin Han · @BenjaminHan
395 followers · 1066 posts · Server sigmoid.social

9/

[2] Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, and Shay B. Cohen. 2023. The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python. arxiv.org/abs/2305.15507

[3] Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. 2023. Causal Reasoning and Large Language Models: Opening a New Frontier for Causality. arxiv.org/abs/2305.00050

#paper #nlp #nlproc #codegeneration #causation #causalreasoning #reasoning #research

Last updated 1 year ago

Benjamin Han · @BenjaminHan
395 followers · 1065 posts · Server sigmoid.social

8/

[1] Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, and Muhan Zhang. 2023. Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners. arxiv.org/abs/2305.14825

#paper #nlp #nlproc #codegeneration #causation #causalreasoning #reasoning #research

Last updated 1 year ago

Benjamin Han · @BenjaminHan
395 followers · 1065 posts · Server sigmoid.social

7/

The results? Both GPT-4 and Alpaca perform worse than BART fine-tuned on MNLI, and not much better than the uniform random baseline (screenshot).

#gpt4 #alpaca #paper #nlp #nlproc #codegeneration #causation #causalreasoning #reasoning #research

Last updated 1 year ago

Benjamin Han · @BenjaminHan
395 followers · 1062 posts · Server sigmoid.social

5/

This shows the semantic priors learned from these function names have totally dominated, and the models don’t really understand what they are doing.

How about LLMs on causal reasoning? There have been reports of extremely impressive performance from GPT-3.5 and GPT-4, but these models also lack consistency in performance and may even have cheated by memorizing the tests [3], as discussed in a previous post [4].

#causalreasoning #gpt #paper #nlp #nlproc #causation #reasoning #research

Last updated 1 year ago

Benjamin Han · @BenjaminHan
395 followers · 1061 posts · Server sigmoid.social

4/

The same tendency is borne out by another paper testing code-generating LLMs when function names are *swapped* in the input [2] (screenshot 1). They found not only that almost all models failed completely, but also that most of them exhibit an “inverse scaling” effect: the larger a model is, the worse it gets (screenshot 2).
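For a concrete sense of what such a swap looks like, here is my own minimal sketch in the spirit of [2] (the paper's exact prompts differ): two builtins are rebound to each other, and a model is asked to predict the behavior of code under the new bindings.

```python
# Rebind two builtins to each other's behavior. From here on, calling
# "print" actually computes a length, and calling "len" actually prints.
len, print = print, len

# A model leaning on the names' usual semantics (its semantic prior)
# will predict the pre-swap behavior instead of tracking the rebinding.
n = print("hello")  # returns 5 (this is really the original len)
len(n)              # prints 5  (this is really the original print)
```

The swap is trivial for a reader who tracks bindings rather than names, which is what makes the models' near-total failure, and the worsening with scale, so striking.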

#paper #nlp #nlproc #codegeneration #causation #causalreasoning #reasoning #research

Last updated 1 year ago