Many mental calories later, this is progressing.
It's all Cliche and Gimmick ;)
#TTRPG #GameDesign #Fluff #FictionFirst #Tapestry #NarrativeThread #WholeCloth #Ingredients #Inference
Causal Parrots: Large Language Models May Talk Causality But Are Not Causal
Matej Zečević, Moritz Willig, Devendra Singh Dhami, Kristian Kersting
Action editor: Frederic Sala.
'Model-based Causal Discovery for Zero-Inflated Count Data', by Junsouk Choi, Yang Ni.
http://jmlr.org/papers/v24/21-1476.html
#causal #genomics #inference
Inquiry Into Inquiry • Discussion 9
• http://inquiryintoinquiry.com/2023/08/19/inquiry-into-inquiry-discussion-9/
Re: Milo Gardner
• https://www.academia.edu/community/VBqzR5?c=Q4jJVy
MG: ❝Do you agree that Peirce was limited to bivalent logic?❞
Taking classical logic as a basis for reasoning is no more limiting than taking Dedekind cuts as a basis for constructing the real number line. For Peirce's relational approach to logic as semiotics the number of dimensions in a relation is more important than the number of values in each dimension. That is where 3 makes a difference over 2.
#Peirce #Logic #Inquiry #Inference #Information #InformationFusion #Semiotics
#CategoryTheory #Compositionality #RelationTheory #TriadicRelationIrreducibility
Inquiry Into Inquiry • Discussion 8
• http://inquiryintoinquiry.com/2023/08/18/inquiry-into-inquiry-discussion-8/
Re: Milo Gardner
• https://www.academia.edu/community/Lbxjg5?c=yqXVog
MG: ❝Peirce sensed that bivalent syntax was superceded by trivalent syntax,
but never resolved that nagging question.❞
My Comment —
The main thing is not a question of syntax but a question of the mathematical models we use to cope with object realities and real objectives (pragmata). Signs, syntax, and systems of representation can make a big difference in how well they represent the object domain and how well they serve the purpose at hand but they remain accessory to those objects and purposes.
#Peirce #Logic #Inquiry #Inference #Information #InformationFusion #Semiotics
#CategoryTheory #Compositionality #RelationTheory #TriadicRelationIrreducibility
Inquiry Into Inquiry • Discussion 7
• http://inquiryintoinquiry.com/2023/08/17/inquiry-into-inquiry-discussion-7/
Dan Everett has prompted a number of discussions on Facebook recently which touch on core issues in Peirce's thought — but threads ravel on and fray so quickly in that medium one rarely gets a chance to fill out the warp. Not exactly at random, here's a loose thread I think may be worth the candle.
Re: Facebook • Daniel Everett
• https://www.facebook.com/permalink.php?story_fbid=pfbid0be89MXhhCm8rxahRn4PXif6HHSCmkdiUFfMZ3qS1mNqSzRzUWfqej5a8cyz8TcyJl&id=100093271525294
My Comment —
Compositionality started out as a well-defined concept, arising from the composition of mathematical functions, abstracted to the composition of arrows and functors in category theory, and generalized to the composition of binary, two-place, or dyadic relations. In terms of linguistic complexity it's associated with properly context-free languages. That all keeps compositionality on the dyadic side of the border in Peirce's universe. More lately the term has been volatilized to encompass almost any sort of information fusion, which is all well and good so long as folks make it clear what they are talking about, for which use the term “information fusion” would probably be sufficiently vague.
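The dyadic sense of compositionality described above can be made concrete with a small sketch. This is purely illustrative (the relation names and data are made up): a dyadic relation is a set of ordered pairs, and composing two such relations chains them through a shared middle term.

```python
# Composition of dyadic (binary) relations, the well-defined sense
# of "compositionality" discussed above. A relation is a set of
# ordered pairs; R composed with S relates x to z whenever some
# middle term y links them.

def compose(R, S):
    """Relational composition: {(x, z) | (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

# Illustrative example: "parent" composed with itself gives "grandparent".
parent = {("alice", "bob"), ("bob", "carol")}
print(compose(parent, parent))  # {('alice', 'carol')}
```

Composition of functions is the special case where each relation pairs every input with exactly one output, which is why the concept generalizes so cleanly from functions to arrows to dyadic relations.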
#Peirce #Logic #Inquiry #Inference #Information #InformationFusion #Semiotics
#CategoryTheory #Compositionality #RelationTheory #TriadicRelationIrreducibility
Thrilled to announce the Regular Expression Inference Challenge (REIC), with Mojtaba Valizadeh, Ignacio Iacobacci, Martin Berger.
REI is a supervised machine learning (#ML) and program synthesis task, and poses the problem of finding minimal regular expressions from examples: Given two finite sets of strings P and N and a cost function cost(⋅), the task is to generate an expression r that accepts all strings in P and rejects all strings in N, while no other such expression r' exists with cost(r')<cost(r).
Turns out, this sort of inference seems to be really hard for current DL (#llms ) approaches. Prompting StarChat-beta -- a SOTA large LM for code with 15.5B parameters -- yields extremely low accuracy.
Even a fully supervised 300M-parameter model, which we call ReGPT, produces precise and minimal expressions only around 14% of the time.
Check out our preprint on arXiv: https://arxiv.org/abs/2308.07899
The challenge is available on CodaLab: https://codalab.lisn.upsaclay.fr/competitions/15096
We formally define the problem, and provide training and validation data, as well as starter code for all our baselines.
We invite researchers anywhere to participate in tackling our challenge.
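The validity side of the task definition above is easy to check mechanically. Here is a minimal sketch: it verifies that a candidate expression accepts every string in P and rejects every string in N, with a toy length-based cost standing in for the challenge's actual operator-weighted cost function (the real cost and the example data here are assumptions, not taken from the paper).

```python
import re

def rei_valid(r, P, N):
    """Check whether regex r accepts every string in P (full match)
    and rejects every string in N, per the REI task statement."""
    pat = re.compile(r)
    return (all(pat.fullmatch(s) for s in P)
            and not any(pat.fullmatch(s) for s in N))

def cost(r):
    """Toy stand-in cost: expression length. The challenge's real
    cost function weights individual regex operators."""
    return len(r)

# Hypothetical example instance.
P = {"ab", "aab", "aaab"}
N = {"b", "ba", ""}
candidate = "a+b"
print(rei_valid(candidate, P, N), cost(candidate))  # True 3
```

Minimality is the hard part: proving that no valid r' with cost(r') < cost(r) exists requires search over the expression space, which is what makes the task difficult for learned models.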
#machinelearning #inference #challenge #AI #ML #llm #llms #huawei
Survey of Definition and Determination • 2
• https://inquiryintoinquiry.com/2023/04/06/survey-of-definition-and-determination-2/
In the early 1990s, “in the middle of life's journey” as the saying goes, I returned to grad school in a systems engineering program with the idea of taking a more systems-theoretic approach to my development of Peircean themes, from signs and scientific inquiry to logic and information theory.
Two of the first questions calling for fresh examination were the closely related concepts of definition and determination, not only as Peirce used them in his logic and semiotics but as researchers in areas as diverse as computer science, cybernetics, physics, and systems sciences were finding themselves forced to reconsider the concepts in later years. That led me to collect a sample of texts where Peirce and a few other writers discuss the issues of definition and determination. There are copies of those selections at the following sites.
Collection Of Source Materials
• https://oeis.org/wiki/User:Jon_Awbrey/EXCERPTS
Excerpts on Definition
• https://oeis.org/wiki/User:Jon_Awbrey/EXCERPTS#Definition
Excerpts on Determination
• https://oeis.org/wiki/User:Jon_Awbrey/EXCERPTS#Determination
What follows is a Survey of blog and wiki posts on Definition and Determination, with a focus on the part they play in Peirce's interlinked theories of signs, information, and inquiry. In classical logical traditions the concepts of definition and determination are closely related and their bond acquires all the more force when we view the overarching concept of constraint from an information-theoretic point of view, as Peirce did beginning in the 1860s.
#Peirce #Logic #Definition #Determination #DifferentialLogic
#Inference #Information #Inquiry #Semiotics #SignRelations
#AI #Cybernetics #IntelligentSystems #InquiryDrivenSystems
My air-cooled DIY Dual #RTX3090 #NVLink #AI workstation. You can never have enough #VRAM with these exciting LLMs! #LLM #LLaMA #Falcon #finetuning #lora #gptq #inference #cuda #pytorch
Meta designed the “Meta Training and Inference Accelerator” (MTIA), which meets the demands of deep learning recommender models at inference time. The accelerator is powered by a grid of processing elements, where each element contains two RISC-V cores (one of which has a vector extension): https://www.nextplatform.com/2023/05/18/meta-platforms-crafts-homegrown-ai-inference-chip-ai-training-next/
#meta #mtia #DLRM #inference #ai #hpc
Break this up into 2,000 character chunks and prompt ChatGPT to “rewrite using language appropriate for a 5th grade reading level”: https://a16z.com/2023/04/27/navigating-the-high-cost-of-ai-compute/
#compute #TCO #GenerativeAI #COGS #product #productmanagement #LLMs #AIgovernance #infrastructure #training #inference #parameters #GPT4 #GPT
'Inference for a Large Directed Acyclic Graph with Unspecified Interventions', by Chunlin Li, Xiaotong Shen, Wei Pan.
http://jmlr.org/papers/v24/21-0855.html
#inference #nodewise #ancestors
Jacobian-based Causal Discovery with Nonlinear ICA
Patrik Reizinger, Yash Sharma, Matthias Bethge, Bernhard Schölkopf, Ferenc Huszár, Wieland Brendel
"How Reckless Cops Relying On Questionable Facial Recognition Tech Can Destroy Lives"
There are stories never revealed.
https://www.techdirt.com/2023/04/07/how-reckless-cops-relying-on-questionable-facial-recognition-tech-can-destroy-lives/ #facialrecognition #inference
The results of the MLPerf Inference v3.0 and Mobile v3.0 benchmark suite have been released:
https://mlcommons.org/en/news/mlperf-inference-1q2023/
https://www.forbes.com/sites/karlfreund/2023/04/05/nvidia-performance-trounces-all-competitors-who-have-the-guts-to-submit-to-mlperf-inference-30/
#MLPerf #inference #nvidia #qualcomm #deci #ai
Nvidia highlights the quantization techniques VS-Quant and OCTAV for high-performance AI inference: https://www.nextplatform.com/2023/03/31/a-peek-into-the-future-of-ai-inference-at-nvidia/
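The core idea behind per-vector scaled quantization (the approach VS-Quant is built on) can be sketched briefly. This is a hedged illustration, not Nvidia's implementation: the vector length, bit width, and rounding scheme here are assumptions chosen for clarity.

```python
import numpy as np

def quantize_per_vector(x, vec_len=16, bits=4):
    """Per-vector scaled quantization in the spirit of VS-Quant:
    each short vector of `vec_len` elements gets its own scale
    factor, so an outlier in one vector does not degrade the
    precision of values elsewhere in the tensor."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for a signed 4-bit range
    v = x.reshape(-1, vec_len)
    scales = np.abs(v).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0             # avoid divide-by-zero for all-zero vectors
    q = np.clip(np.round(v / scales), -qmax - 1, qmax)
    return q.astype(np.int8), scales

def dequantize(q, scales):
    """Recover an approximation of the original values."""
    return (q * scales).reshape(-1)

# Illustrative round trip on random data.
rng = np.random.default_rng(0)
x = rng.normal(size=64).astype(np.float32)
q, scales = quantize_per_vector(x)
max_err = np.abs(dequantize(q, scales) - x).max()
```

The design point is the granularity of the scale factors: per-tensor scales waste range on outliers, while per-vector scales keep the quantization step small within each vector at the cost of storing one scale per group.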
"Police Relied on Hidden Technology and Put the Wrong Person in Jail" https://www.nytimes.com/2023/03/31/technology/facial-recognition-false-arrests.html #facialrecognition #baddata #inference #lawenforcement