EMNLP 2023 in Singapore!
RT @casszzx@twitter.com
Maybe see you again next year, at Sentosa island, Singapore #emnlp #emnlp2023 @HuiyinXue@twitter.com @nikaletras@twitter.com @SheffieldNLP@twitter.com
Minor complaint for #EMNLP... the website autocorrects some queries instead of doing an exact string match, making it impossible to look up some names and paper titles.
Maybe too late for this edition, but it would be great if the #EMNLP program included ACs for each session and poster numbers for each paper.
Congratulations Jenny Kunz, Martin Jirenius, Oskar Holmström and Marco Kuhlmann on winning the best paper award at #BlackboxNLP 2022! #NLProc #EMNLP2022 #EMNLP #XAI
RT @mark_riedl@twitter.com
Reminder: I'm giving an invited talk at the #EMNLP workshop on Novel Ideas in Learning-to-Learn through Interaction: https://www.cs.mcgill.ca/~pparth2/nilli_workshop_2022/
Talk title: Dungeons and DQNs
08 December 2022
16:45 Abu Dhabi
7:45am Eastern US
🐦🔗: https://twitter.com/mark_riedl/status/1600670945919832069
Another very cool EMNLP paper showing that grammar induction helps discover shortcut features/spurious correlations: http://arxiv.org/abs/2210.11560. It's nice to know that parsing is still useful in this era of end-to-end training.
Cool EMNLP 2022 paper on a negation-focused QA dataset by @lasha_nlp: http://arxiv.org/abs/2211.00295. Evaluations on the dataset show that current large models still perform far worse than humans. We need more datasets that similarly target specific linguistic capabilities.
The rest of the world is at #EMNLP and somehow I am home preparing four brand-new lectures on diverse subjects on which I am not an expert.
I guess I would get to learn something either way.
➤ Invited talk at the #EMNLP Novel Ideas in Learning-to-Learn through Interaction workshop (https://www.cs.mcgill.ca/~pparth2/nilli_workshop_2022/):
4) Talk titled "Dungeons and DQNs"
➤ Findings paper at the #EMNLP BlackboxNLP Workshop (https://blackboxnlp.github.io/):
3) Calibrating Trust of Multi-Hop Question Answering Systems with Decompositional Probes
https://arxiv.org/abs/2204.07693
With Kaige Xie and Sarah Wiegreffe
This paper looks at a new XAI technique that helps people determine when a question-answering system might be giving the wrong answer, even when they don't know the answer themselves
#EMNLP workshops!
➤ Two Findings papers at the Generation, Evaluation & Metrics (GEM) Workshop (https://gem-benchmark.com/workshop):
1) Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning https://arxiv.org/abs/2112.08596
With Xiangyu Peng, Sarah Wiegreffe, Siyan Li
2) Guiding Neural Story Generation with Reader Models https://arxiv.org/abs/2105.01311
With Xiangyu Peng, Kaige Xie, Amal Alabdulkarim
These papers look at story generators that model the reader's understanding of the story
At #conll #EMNLP, talk to me about:
ColD Fusion & https://ibm.github.io/model-recycling/
The BabyLM shared task
https://www.label-sleuth.org/
Enhancing decoders with syntax
And work I advised (talk to them too)
Estimating #NeuralMT quality with source only
Controlling structure at the neuron level
Details:
Started my journey to Abu Dhabi for #emnlp2022! Really excited to learn about the latest advancements in machine translation, speech tech, dialogue systems, and more. I've got my notepad ready and I'm working hard to mark up the 300-page conference handbook. Haha