Benjamin Han · @BenjaminHan
477 followers · 1326 posts · Server sigmoid.social

2/ One particular recent paper in the discussion proposed an ICL approach named PromptNER [3], and reported SOTA results among its ICL peers on well-known datasets such as CoNLL 2003 and GENIA. Comparing its results with the SOTA *fine-tuning* (FT) solutions, however, FT still outperforms on CoNLL by 10+ points (94.6 vs. 83.48; screenshot 1) [4], and on GENIA by 20+ points (80.8 vs. 58.44; screenshot 2) [5].
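For readers unfamiliar with the approach, here is a rough sketch of what a definition-plus-annotated-examples ICL prompt in the spirit of PromptNER might look like; the entity definitions, few-shot example, and call_llm() are made up for illustration and are not taken from the paper.

```python
# Sketch of a PromptNER-style in-context-learning prompt for NER.
# Definitions, few-shot examples, and call_llm() are illustrative only.

DEFINITIONS = """An entity is a PERSON (named people), ORG (companies,
institutions), or LOC (countries, cities, other locations)."""

FEW_SHOT = """Sentence: Barack Obama visited Berlin last week.
Answer:
1. Barack Obama | True | PERSON (a named person)
2. Berlin | True | LOC (a city)

Sentence: The committee met on Tuesday.
Answer:
1. The committee | False | not a named entity
"""

def build_prompt(sentence: str) -> str:
    return f"{DEFINITIONS}\n\n{FEW_SHOT}\nSentence: {sentence}\nAnswer:\n"

if __name__ == "__main__":
    prompt = build_prompt("Tim Cook announced new products at Apple Park.")
    print(prompt)
    # answer = call_llm(prompt)  # hypothetical completion-API call
```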

#nlproc #knowledgegraph #paper

Last updated 1 year ago

Harald Sack · @lysander07
748 followers · 461 posts · Server sigmoid.social

NeMiG is a bilingual (en/de) news dataset on the topic of migration, which can be used, among others, to conduct controlled experiments on polarization by news recommender systems. arxiv.org/abs/2309.00550 Joint work in the ReNewRS project w/
@dwsunima @fizise
@kitkarlsruhe
via HeikoPaulheim@twitter

#dataset #migration #polarization #recommendersystems #renewrs #knowledgegraph #recsys

Last updated 1 year ago

Harald Sack · @lysander07
748 followers · 461 posts · Server sigmoid.social

Of course I know RDF2vec ;-)
However, Heiko Paulheim's RDF2vec website has grown nicely and has become a rather valuable resource for this embedding method, including implementations, models and services, variations and extensions, as well as more than 150 references in scientific papers!
RDF2vec website: rdf2vec.org/
original paper: madoc.bib.uni-mannheim.de/4130
Petar Ristoski's PhD thesis (with RDF2vec): ub-madoc.bib.uni-mannheim.de/4
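For readers new to the method, here is a minimal sketch of the core idea behind RDF2vec (random walks over an RDF graph fed into word2vec), using rdflib and gensim; the input file and hyperparameters are placeholders, and the actual method and its variants (see the website) go well beyond this.

```python
# Core idea of RDF2vec: random walks over an RDF graph, treated as
# "sentences" and fed into word2vec. Input file and hyperparameters
# below are placeholders.
import random
from collections import defaultdict

from rdflib import Graph, URIRef
from gensim.models import Word2Vec

g = Graph()
g.parse("my_graph.ttl", format="turtle")  # placeholder input file

# Adjacency list: subject -> list of (predicate, object) pairs.
adj = defaultdict(list)
for s, p, o in g:
    if isinstance(o, URIRef):
        adj[s].append((p, o))

def random_walk(start, depth=4):
    walk, node = [str(start)], start
    for _ in range(depth):
        if not adj[node]:
            break
        p, o = random.choice(adj[node])
        walk.extend([str(p), str(o)])
        node = o
    return walk

entities = list(adj.keys())
walks = [random_walk(e) for e in entities for _ in range(10)]

# Train skip-gram word2vec on the walks; entity vectors are the embeddings.
model = Word2Vec(walks, vector_size=100, window=5, sg=1, min_count=1)
```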

#knowledgegraph #kge #deeplearning #embeddings

Last updated 1 year ago

Harald Sack · @lysander07
741 followers · 454 posts · Server sigmoid.social

Nice visualizations in the SemOpenAlex Explorer, enabling the exploration of 249M works, 135M authors, 109K institutions, and 65K topics from SemOpenAlex.
Explorer website: semopenalex.org/
SemOpenAlex ontology: semopenalex.org/resource/?uri=
paper: arxiv.org/pdf/2308.03671.pdf
RDF dump of 26B triples: semopenalex.s3.amazonaws.com/b
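If you prefer to query the data programmatically, here is a small sketch using SPARQLWrapper; it assumes the public SPARQL endpoint lives at semopenalex.org/sparql (check the website for the authoritative address), and the query is deliberately generic so it does not depend on specific SemOpenAlex predicates.

```python
# Generic SPARQL query against the (assumed) SemOpenAlex endpoint,
# counting instances per class. The endpoint URL is an assumption; see
# semopenalex.org for the authoritative address. Note that aggregates
# over a graph this size can be slow; use the RDF dump for heavy analysis.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://semopenalex.org/sparql")  # assumed endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?class (COUNT(?s) AS ?n)
    WHERE { ?s a ?class }
    GROUP BY ?class
    ORDER BY DESC(?n)
    LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["class"]["value"], row["n"]["value"])
```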

#knowledgegraph #semanticweb #exploratorysearch #SemanticSearch #rdf

Last updated 1 year ago

Harald Sack · @lysander07
741 followers · 454 posts · Server sigmoid.social

Hang in there, my fellow researchers and practitioners, soon (in a few years) we will reach the "Slope of Enlightenment" ;-)
The new Gartner hype cycle for AI positions knowledge graphs right in the middle of the "Trough of Disillusionment" ... while placing generative AI and foundation models at the peak of the hype
gartner.com/en/articles/what-s

#knowledgegraph #generativeAI #foundationmodels #LLMs #semanticweb #ai #hypecycle #artificialintelligence

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1259 posts · Server sigmoid.social

7/ REFERENCES

[1] Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2021. Fast Model Editing at Scale. arxiv.org/abs/2110.11309

[2] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2021. Knowledge Neurons in Pretrained Transformers. arxiv.org/abs/2104.08696

#papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1259 posts · Server sigmoid.social

6/ It’d be interesting to see how “complete” various model editing methods can be (screenshot), and how to achieve a better tradeoff between completeness and efficiency.

#papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1259 posts · Server sigmoid.social

5/ Another related recent work is on how to properly evaluate model editing [5]. Instead of just assessing whether an individual fact has been successfully injected, or whether predictions for other subjects have stayed unchanged, the *consequences* of these updates, aka “ripple effects”, should also be evaluated (screenshot).
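To make the distinction concrete, here is a toy sketch of the evaluation idea (not the paper's benchmark): after injecting one counterfactual fact, we probe not only the edited fact and unrelated facts, but also facts logically entailed by the edit.

```python
# Toy illustration of "ripple effect" evaluation for model editing.
# A dict stands in for the model's factual predictions; the facts and
# probes are made up for illustration.

model_beliefs = {
    ("France", "capital"): "Paris",
    ("Paris", "country"): "France",
    ("Lyon", "country"): "France",
}

# Counterfactual edit: capital of France -> Marseille.
model_beliefs[("France", "capital")] = "Marseille"

probes = {
    "edited fact":   [(("France", "capital"), "Marseille")],
    # Consequence of the edit that a naive editor typically misses:
    "ripple effect": [(("Marseille", "country"), "France")],
    # Unrelated fact that must NOT change (specificity):
    "unrelated":     [(("Lyon", "country"), "France")],
}

for category, items in probes.items():
    correct = sum(model_beliefs.get(q) == gold for q, gold in items)
    print(f"{category}: {correct}/{len(items)}")
```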

#papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1256 posts · Server sigmoid.social

4/ They then propose PMET (Precise Model Editing in a Transformer), which effectively moves the weight optimization upstream. While it optimizes the hidden states of both MHSA and FFN, it only updates the FFN weights, to preserve specificity (screenshot 1). The result is much more stable performance as the number of edits increases (screenshot 2), and better overall results (screenshot 3).

#papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1256 posts · Server sigmoid.social

3/ By computing the similarities of the hidden states of MHSA and FFN before and after each layer, they observe that the FFN stabilizes much earlier than the MHSA, and conclude that MHSA encodes more general knowledge-extraction patterns while FFN captures more of the factual content of knowledge (screenshot).
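A rough way to reproduce this kind of analysis on a small model is with forward hooks: capture each layer's MHSA and FFN outputs and compare adjacent layers by cosine similarity. GPT-2 is used below purely for convenience (the paper works with larger models), and this is a sketch of the analysis style, not the paper's exact procedure.

```python
# Sketch of the layer-wise analysis: capture per-layer MHSA and FFN
# outputs with forward hooks, then compare adjacent layers by cosine
# similarity. GPT-2 is used for convenience only.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

attn_out, mlp_out = [], []
for block in model.h:
    # GPT-2's attention module returns a tuple; keep the hidden states.
    block.attn.register_forward_hook(lambda m, i, o: attn_out.append(o[0]))
    block.mlp.register_forward_hook(lambda m, i, o: mlp_out.append(o))

with torch.no_grad():
    model(**tok("The capital of France is Paris.", return_tensors="pt"))

def adjacent_similarity(states):
    # Mean cosine similarity (over tokens) between layers l and l+1.
    return [torch.nn.functional.cosine_similarity(a, b, dim=-1).mean().item()
            for a, b in zip(states, states[1:])]

print("MHSA:", adjacent_similarity(attn_out))
print("FFN: ", adjacent_similarity(mlp_out))
```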

#papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1256 posts · Server sigmoid.social

2/ Building on ROME/MEMIT, the authors of a more recent work [4] hypothesize that optimizing transformer-layer hidden states to update knowledge may be too blunt an instrument, as these states simultaneously contain the effects of Multi-head Self-Attention (MHSA), the feed-forward network (FFN), and residual connections.

#papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
460 followers · 1256 posts · Server sigmoid.social

1/ Can we "edit" LLMs to update incorrect/outdated facts without costly retraining? Recent works such as training auxiliary models to predict weight changes in the main model (MEND) [1], locating "knowledge neurons" [2], using causal intervention to identify feed-forward network (FFN) weights to edit in ROME [3], and scaling up editing operations to thousands of associations in MEMIT [4] have proven it's doable, even practical.
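As a toy illustration of the flavor of these locate-and-edit methods (not ROME's actual algorithm, which picks the layer via causal tracing and uses a key covariance estimated from a corpus), here is the basic rank-one update that rewrites a single key-value association stored in an FFN weight matrix:

```python
# Toy rank-one edit of an FFN weight matrix W: make W map a chosen key
# k* to a new value v* while changing W as little as possible. This is
# the flavor of ROME/MEMIT-style edits, not the actual algorithms.
import numpy as np

rng = np.random.default_rng(0)
d_k, d_v = 64, 64
W = rng.normal(size=(d_v, d_k))      # stand-in for an FFN projection
k_star = rng.normal(size=d_k)        # key representing the edited subject
v_star = rng.normal(size=d_v)        # value encoding the new fact

# Rank-one update so that W_new @ k_star == v_star exactly.
delta = np.outer(v_star - W @ k_star, k_star) / (k_star @ k_star)
W_new = W + delta

print(np.allclose(W_new @ k_star, v_star))        # True: fact is "edited"
print(np.linalg.norm(delta) / np.linalg.norm(W))  # edit is relatively small
```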

#LLMs #papers #nlp #nlproc #knowledge #knowledgegraph

Last updated 1 year ago

Benjamin Han · @BenjaminHan
449 followers · 1210 posts · Server sigmoid.social

Using contrastive loss and binned BARTScore as input improves faithfulness of knowledge-to-text generation.

Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali, and Yuan-Fang Li. 2023. Generating Faithful Text From a Knowledge Graph with Noisy Reference Text. arxiv.org/abs/2308.06488
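For intuition only, here is a generic margin-based contrastive loss of the kind used to score a faithful text above a noisy or hallucinated one for the same graph input; this is a schematic sketch, not the paper's exact formulation (which also conditions generation on binned BARTScore signals).

```python
# Schematic contrastive (margin ranking) loss: score the faithful text
# higher than the hallucinated one for the same KG input. Not the
# paper's exact formulation; data below are toy numbers.
import torch
import torch.nn.functional as F

def contrastive_loss(pos_scores, neg_scores, margin=1.0):
    # pos_scores: model scores (e.g. sequence log-likelihoods) for
    # faithful texts; neg_scores: scores for noisy/hallucinated texts.
    target = torch.ones_like(pos_scores)
    return F.margin_ranking_loss(pos_scores, neg_scores, target, margin=margin)

pos = torch.tensor([-1.2, -0.8])   # toy log-likelihoods of faithful texts
neg = torch.tensor([-1.0, -2.5])   # toy log-likelihoods of noisy texts
print(contrastive_loss(pos, neg))
```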

#knowledgegraph #paper #nlp #nlproc #generativeAI

Last updated 1 year ago

Benjamin Han · @BenjaminHan
449 followers · 1210 posts · Server sigmoid.social

Generating positive AND negative walks through relations subclassOf and superClassOf improves protein-protein interaction and gene-disease association classification tasks.

Rita T. Sousa, Sara Silva, Heiko Paulheim, and Catia Pesquita. 2023. Biomedical Knowledge Graph Embeddings with Negative Statements. arXiv [cs.AI]. arxiv.org/abs/2308.03447
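Roughly, the idea is to let the walk corpus reflect negated statements as well, so an entity's embedding also encodes what does not hold for it. A hypothetical sketch of such a corpus follows (the token scheme and toy data are illustrative, not the paper's encoding):

```python
# Illustrative only: build walk "sentences" that include both positive
# and negated statements, so entity embeddings also reflect what does
# NOT hold. Token scheme and data are made up; see the paper for the
# real encoding.
from gensim.models import Word2Vec

positive = [("ProteinA", "hasFunction", "GO:0005515"),
            ("ProteinA", "subClassOf", "Kinase")]
negative = [("ProteinA", "hasFunction", "GO:0003700")]  # negated statement

walks = []
for s, p, o in positive:
    walks.append([s, p, o])
for s, p, o in negative:
    walks.append([s, f"NOT_{p}", o])   # mark negated edges with a prefix

model = Word2Vec(walks, vector_size=50, window=2, sg=1, min_count=1)
print(model.wv["ProteinA"][:5])
```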

#ontological #paper #knowledgegraph #nlp #nlproc #biomedical

Last updated 1 year ago

Harald Sack · @lysander07
679 followers · 417 posts · Server sigmoid.social

"InteractOA: Showcasing the representation of knowledge from scientific literature in Wikidata" by Muhammad Elhossary and Konrad Förstner.
InteractOA is a frontend interface displaying visualizations of prokaryotic regulatory small RNA interaction networks, with PubMed Central article citations for evidence, based on Wikidata.
paper: semantic-web-journal.net/syste
InteractOA demo: interactoa.toolforge.org/
GitHub: github.com/foerstner-lab/Inter
@ZBMED

#pubmed #wikidata #knowledgegraph #semanticweb #demo

Last updated 1 year ago

Tane Piper · @tanepiper
1153 followers · 4143 posts · Server tane.codes