Representations and Computations in Transformers that Support Generalization on Structured Tasks
Yuxuan Li, James McClelland
Action editor: Stefan Lee.
#attention #learns #representations
Norm-count Hypothesis: On the Relationship Between Norm and Object Count in Visual Representations
Design of the topology for contrastive visual-textual alignment
#softmax #embedding #representations
Contrastive Attraction and Contrastive Repulsion for Representation Learning
Huangjie Zheng, Xu Chen, Jiangchao Yao et al.
Action editor: Yanwei Fu.
#softmax #representations #ImageNet
Self-Supervised Graph Representation Learning for Neuronal Morphologies
Marissa A. Weis, Laura Pede, Timo Lüddecke, Alexander S. Ecker
Action editor: Robert Legenstein.
#neurons #Graphs #representations
DORA: Exploring Outlier Representations in Deep Neural Networks
Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
Action editor: Antonio Vergari.
#outliers #outlier #representations
ContraSim – Analyzing Neural Representations Based on Contrastive Learning
#similarity #representations #benchmark
#AI-generated people cannot #grow or #change. They remain static #representations with a single point of view.
If these were real #nurses or even #models, they could decide they oppose the message and say as much in public. But as it is, they have no voices of their own.
#growth #ai #grow #change #representations #nurses #models
Latest papers: Rosa Cao & Jared Warren argue against the common slogan that mental #representations "stand in for" the things they represent @philosophy @philosophyofmind https://doi.org/10.1080/09515089.2023.2207594
A Measure of the Complexity of Neural Representations based on Partial Information Decomposition
David Alexander Ehrlich, Andreas Christian Schneider, Viola Priesemann et al.
Action editor: Jean Barbier.
#complexity #neurons #representations
Invariant Feature Coding using Tensor Product Representation
#representations #tensor #classifier
Inducing Meaningful Units from Character Sequences with Dynamic Capacity Slot Attention
#attention #representations #characters
Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient
Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
#spiking #representations #Recognition
Extreme Masking for Learning Instance and Distributed Visual Representations
Zhirong Wu, Zihang Lai, Xiao Sun, Stephen Lin
#attention #masking #representations
#Computerphile - #Glitch #Tokens In #LargeLanguageModels
#RobMiles talks about '#GlitchTokens', those mysterious words that result in gibberish when entered into some large #LanguageModels.
https://www.youtube.com/watch?v=WO2X3oZEJOA&ab_channel=Computerphile
#GPT #ChatGPT #Language #Interpretability #OpenAI #AI #ArtificialIntelligence #Representation #Representations
#representations #representation #artificialintelligence #ai #openai #interpretability #language #chatgpt #gpt #languagemodels #glitchtokens #robmiles #largelanguagemodels #tokens #glitch #computerphile
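A minimal sketch of the underlying idea (my assumption, not a claim from the video itself): glitch tokens such as " SolidGoldMagikarp" exist as single entries in the GPT-2/GPT-3 BPE vocabulary but were rarely seen during training, so the model never learned useful representations for them. The snippet below uses the tiktoken package to check whether a string maps to a single vocabulary id; the example strings are illustrative only.

# Check whether a string is a single entry in the GPT-2 BPE vocabulary.
# Glitch tokens are such single-entry strings that were under-represented
# in training data; prompting a model with them can produce gibberish.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

for text in [" SolidGoldMagikarp", " an ordinary phrase"]:
    ids = enc.encode(text)
    kind = "single token" if len(ids) == 1 else f"{len(ids)} tokens"
    print(f"{text!r:>25} -> {ids} ({kind})")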