
Meetings Archive

Date | Presenter | Topic
2025/03/19 | Kenshiro Tanaka | Paper reading: Presentation of 3 papers from NLP2025
2024/12/11 | Naoya Inoue | Paper reading: The Super Weight in Large Language Models https://arxiv.org/abs/2411.07191
2024/11/27 | Tien Dang Huu | Progress report: Unlearning
2024/11/20 | Naoya Inoue | Paper reading: Bürger et al. Truth is Universal: Robust Detection of Lies in LLMs https://arxiv.org/abs/2407.12831
2024/07/03 | Chau Nguyen | Paper reading: Langedijk et al. DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers. NAACL2024.
2024/06/26 | Naoya Inoue | Paper reading: Huh et al. The Platonic Representation Hypothesis. ICML2024.
2024/06/19 | Mariko Kato | Paper reading: Bansal et al. Revisiting Model Stitching to Compare Neural Representations. NeurIPS2021.
2024/06/12 | Kenshiro Tanaka | Progress report: Metacognitive reasoning
2024/05/29 | Tien Dang Huu | Progress report: Machine Unlearning
2024/05/22 | Naoya Inoue | Paper discussion: Hernandez et al. Linearity of Relation Decoding in Transformer Language Models. ICLR2024.
2024/05/15 | Yoshihiro Sakai | Paper discussion: Kossen et al. In-Context Learning Learns Label Relationships but Is Not Conventional Learning. ICLR2024.
2024/05/08 | Yufeng Zhao | Paper reading: Du et al. Understanding Emergent Abilities of Language Models from the Loss Perspective. arXiv2024.
2024/04/24 | Tien Dang Huu | Paper reading: Li et al. The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning. arXiv2024. + Progress report
2024/04/17 | Chau Nguyen | Paper reading: Ladhak et al. When Do Pre-Training Biases Propagate to Downstream Tasks? A Case Study in Text Summarization. EACL2023.
2024/04/10 | Kenshiro Tanaka | Paper reading: Li et al. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. NeurIPS2023.
2024/04/03 | Naoya Inoue | Paper reading: Dai et al. Knowledge Neurons in Pretrained Transformers. ACL2022.


©RebelsNLU at JAIST
