
Lab Meeting


Lab members work on a variety of research projects. The Lab Meeting is a weekly opportunity for members to share their work with the rest of the lab and receive feedback.

Logistics

  1. Lab-wide announcements (15 min.)
    • Everyone’s submission plans
    • Brief recap of conference deadlines
    • Discussion of lab management
  2. Paper skimming (weekly)
    • Three people present their favorite papers (15 min. × 3)
      • Talk: 5 min. / Q&A: 10 min.
  3. Poster presentations (monthly)
    • 4-5 people present their work as posters
  4. Spotlight presentation(s) (on demand)
    • 1-2 people give an oral presentation of relatively complete work

Meetings Log


2026

Date
Type
Presenters
Topic
02/26/2026
Paper Skimming
SHI, Yuting
Hinata Tezuka
1. [Buzeta et al.] Seeing to Generalize: How Visual Data Corrects Binding Shortcuts / 2. [Lindsey] Emergent Introspective Awareness in Large Language Models
02/19/2026
Spotlight Oral
Naohiro Kaide
Shohei Kitajima
NLP2026 Poster Presentation/Feedback (JAPANESE SESSION)
02/12/2026
Paper Skimming
Mizuki Koyamagawa
1. [Zhong] DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher / 2. [Shi et al.] Vision Function Layer in Multimodal LLMs
02/05/2026
Spotlight Oral
Peter_Obule
Akira Ishii
Masters Defense Practice Presentation
01/29/2026
Poster
Kim Pothong
Cognitive Alignment in Question Generation
01/22/2026
Paper Skimming
Koga Kobayashi
Hinata Tezuka
1. [Zhang et al.] From Reasoning to Answer: Empirical, Attention-Based and Mechanistic Insights into Distilled DeepSeek R1 Models / 2. [Han et al.] Token-Budget-Aware LLM Reasoning / 3. [He et al.] Towards Global-level Mechanistic Interpretability: A Perspective of Modular Circuits
01/15/2026
Paper Skimming
Tien Dang Huu
JIN, Tao
1. [Wei et al.] AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism / 2. A note on model immunization

2025

Date
Type
Presenters
Topic
12/25/2025
Poster
Peter_Obule
Shohei Kitajima
1. Hallucination Control Using Representation Engineering with Uncertainty Aware Steering / 2. Comparative Analysis of Methods for Inserting Unknown Knowledge into LLMs
12/18/2025
Paper Skimming
kenken
SHI, Yuting
Akira Ishii
1. [Podolak et al.] Read Your Own Mind: Reasoning Helps Surface Self-Confidence Signals in LLMs / 2. [Assouel et al.] Visual Symbolic Mechanisms: Emergent Symbol Processing in Vision Language Models / 3. [Tan et al.] Cascading Large Language Models for Salient Event Graph Generation
12/11/2025
Paper Skimming
houjing
TRAN, Thu Thi Anh
1. [Basile et al.] Head Pursuit: Probing Attention Specialization in Multimodal Transformers / 2. [Wang et al.] Rehearsal-free Continual Language Learning via Efficient Parameter Isolation
12/04/2025
Spotlight Oral
Naoya Inoue
RRI (Responsible Research and Innovation) Seminar
11/27/2025
Poster
NGUYEN Phuong Minh
Mariko Kato
1. Attention-aware Intervention / 2. [Kato et al.] Affinity and Diversity: A Unified Metric for Demonstration Selection via Internal Representations
11/20/2025
Paper Skimming
Raka
Shohei Kitajima
Mizuki Koyamagawa
1. [Zhang et al.] TAdaRAG: Task Adaptive Retrieval-Augmented Generation via On-the-Fly Knowledge Graph Construction / 2. [Cheng et al.] xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token / 3. [Zhang et al.] Catastrophic Failure of LLM Unlearning via Quantization
11/13/2025
Paper Skimming
Hakaze Cho
Tien Dang Huu
Peter_Obule
1. [Anonymous] Language Models are Injective and Hence Invertible / 2. [Chen et al.] Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs / 3. [Zhao et al.] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering
11/06/2025
Paper Skimming
houjing
Koga Kobayashi
Kim Pothong
1. [Jian et al.] Vision Transformers Don't Need Trained Registers / 2. [Yue et al.] Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? / 3. [Baes et al.] LSC-Eval: A General Framework to Evaluate Methods for Assessing Dimensions of Lexical Semantic Change Using LLM-Generated Synthetic Data
10/30/2025
Poster
TRAN, Thu Thi Anh
On Subspace Orthogonality in Continual Learning of LLMs with LoRA
10/23/2025
Paper Skimming
SHI, Yuting
Raka
1. [Li et al.] Lost in Embeddings: Information Loss in Vision-Language Models / 2. [Nguyen et al.] Enhancing Retrieval Augmented Generation with Hierarchical Text Segmentation Chunking
10/16/2025
Paper Skimming
Shohei Kitajima
Mizuki Koyamagawa
Hinata Tezuka
1. [Yamada et al.] Dynamic Injection of Entity Knowledge into Dense Retrievers / 2. [Marks et al.] Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models / 3. [Barbero et al.] Why do LLMs attend to the first token?
10/10/2025
Kick-off
Naoya Inoue
Kick-off meeting
09/03/2025
Spotlight Oral
Akira Ishii
Mariko Kato
Mid-Term Presentation Practice
09/02/2025
Spotlight Oral
Hinata Tezuka
Peter_Obule
Mid-Term Presentation Practice
08/08/2025
Spotlight Oral
Tien Dang Huu
MS Defense Practice
07/25/2025
Paper Skimming
Akira Ishii
houjing
1. [Kang et al.] See What You Are Told: Visual Attention Sink in Large Multimodal Models / 2. [Mu et al.] A Causal Approach for Counterfactual Reasoning in Narratives
07/18/2025
Paper Skimming
Kim Pothong
NGUYEN Phuong Minh
SHI, Yuting
1. [Liu et al.] DoRA: Weight-Decomposed Low-Rank Adaptation / 2. [Nikankin et al.] Same Task, Different Circuits: Disentangling Modality-Specific Mechanisms in VLMs / 3. [Xu et al.] INSTRUCTSCORE: Towards Explainable Text Generation Evaluation with Automatic Feedback
07/11/2025
Poster
houjing
Hinata Tezuka
1. Counting First or Detection First? An Investigation on Counting Objects in LVLMs / 2. The Transfer Neurons Hypothesis: An Underlying Mechanism for Language Subspace Transitions in Multilingual LLMs
07/04/2025
Paper Skimming
kenken
Peter_Obule
Hinata Tezuka
1. [Chuang et al.] DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models / 2. [Kallini et al.] Mission: Impossible Language Models / 3. [Yu et al.] Neuron-Level Knowledge Attribution in Large Language Models
06/27/2025
Paper Skimming
Mariko Kato
Hakaze Cho
Naoya Inoue
1. [Hu et al.] Understanding In-context Learning of Addition via Activation Subspaces / 2. [Modell et al.] The Origins of Representation Manifolds in Large Language Models / 3. [Ortu et al.] Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals
06/20/2025
Paper Skimming
Akira Ishii
TRAN, Thu Thi Anh
Koga Kobayashi
1. [Lyu et al.] FACTTRACK: Time-Aware World State Tracking in Story Outlines / 2. [Huang et al.] Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal / 3. [Busbridge et al.] Distillation Scaling Laws
06/13/2025
Poster
Lukas Hofbauer
Tien Dang Huu
1. On Effects of Steering Latent Representation for Large Language Model Unlearning / 2. Step-wise Decomposition Improves Calibration for Answering Multi-Hop Questions
05/30/2025
Paper Skimming
Peter_Obule
SHI, Yuting
1. [Feng et al.] LEGEND: Leveraging Representation Engineering to Annotate Safety Margin for Preference Datasets / 2. [Tjandrasuwita et al.] Understanding the Emergence of Multimodal Representation Alignment
05/23/2025
Paper Skimming
Mariko Kato
Hinata Tezuka
1. [Elhelo et al.] Inferring Functionality of Attention Heads from their Parameters / 2. [Kaplan et al.] From Tokens to Words: On the Inner Lexicon of LLMs
05/16/2025
Paper Skimming
kenken
houjing
Hakaze Cho
1. [Jiang et al.] Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations / 2. [Stolfo et al.] Confidence Regulation in LLMs / 3. [Jin et al.] Massive Values in Self-Attention Modules are the Key to Contextual Knowledge Understanding
05/09/2025
Poster
Hakaze Cho
Kim Pothong
SHI, Yuting
1. Mechanism Interpretability on ICL / 2. FOCUS Benchmark / 3. Understanding Compositional Visual Reasoning
04/25/2025
Paper Skimming
NGUYEN Phuong Minh
Kim Pothong
Lukas Hofbauer
1. [Ye et al.] SATLM: Satisfiability-Aided Language Models Using Declarative Prompting / 2. [Tanneru et al.] Quantifying Uncertainty in Natural Language Explanations of Large Language Models / 3. [Celikyilmaz et al.] Evaluation of Text Generation: A Survey
04/18/2025
Paper Skimming
TRAN, Thu Thi Anh
Hinata Tezuka
Peter_Obule
1. [Bi et al.] Iterative Refinement of Project-Level Code Generation / 2. [Schug et al.] Attention as a HyperNetwork / 3. [Li et al.] Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
04/11/2025
Paper Skimming
Naoya Inoue
Tien Dang Huu
Akira Ishii
1. [Stolfo et al.] Confidence Regulation Neurons in LMs / 2. [Tien Dang Huu] Unlearning in LLMs: An Introduction / 3. [Hatzel et al.] Story Embeddings: Narrative-Focused Representations of Fictional Stories
04/04/2025
Kick-off
Naoya Inoue
Kick-off Meeting
03/03/2025
Kim Pothong
Hinata Tezuka
Mariko Kato
NLP2025 Practice Presentation
02/20/2025
houjing
NLP2025 Practice Presentation
02/13/2025
Hakaze Cho
Revisiting In-context Learning Inference Circuit in Large Language Models (ICLR 2025)
02/06/2025
houjing
Final defense practice
01/30/2025
Shotaro Kitamura
Quick walkthrough: running Llama on Kagayaki
01/23/2025
kenken
Brainstorming on Logo Design
01/16/2025
Naoya Inoue
General Discussions

2024

Date
Type
Presenters
Topic
12/11/2024
SHI, Yuting
Probing Implicit Multi-Step Reasoning in Visual-Language Models
11/27/2024
Yan_zhenzhu
Research Proposal: Traffic Sign Benchmarking
11/20/2024
Hakaze Cho
Light Hearted Mechanistic Interpretability
11/13/2024
JIN, Tao
UI Operation Agent
10/30/2024
Kim Pothong
Leveraging Socratic Feedback: Using Counter-Arguments to Argument Revision
10/23/2024
TRAN, Thu Thi Anh
(Master’s Thesis Recap) Towards x86 Instruction Set Emulation in Java via Project-based Text-to-Code Generation using Reinforcement Learning
10/16/2024
SAKAI Yoshihiro
Speech Title: Improve your Presentation
10/08/2024
Hakaze Cho
Kick-off Meeting
09/24/2024
Mariko Kato
Image Encodings are Informative Tokens for Frozen Text Transformers
09/03/2024
houjing
Practice for Mid-term Presentation
08/27/2024
houjing
Mariko Kato
Hakaze Cho
YANS Practice Talk
08/06/2024
Irfan Robbani
kenken
Practice Talk for MS Defense
07/16/2024
Irfan Robbani
Flee the Flaw: Annotating the Underlying Logic of Fallacious Arguments Through Templates and Slot-filling
07/09/2024
Naoya Inoue
Special Session: 2nd Kick-off!
07/02/2024
houjing
Probing Visual Prompts in Multimodal Large Language Models (MLLMs)
06/25/2024
Kim Pothong
Hakaze Cho
NL260 Practice Talk x 2
06/18/2024
kenken
Meta Cognitive Knowledge in LLMs
05/28/2024
Shotaro Kitamura
Towards Making Japanese Implicit Multi-hop Question Benchmarks
05/21/2024
Yan_zhenzhu
A Survey of Vision Language Model in Autonomous Driving
05/14/2024
SHI, Yuting
How to Make Explanation Behind Visual Reasoning Response of LVLMs
05/07/2024
Kim Pothong
FA1A2 - A Dataset of Fallacious/Non-Fallacious Arguments for LLM Probing
04/22/2024
JIN, Tao
A Survey of the Latest GUI Agent for Cross Application Task and Progress Report
04/16/2024
Hakaze Cho
Word Embedding “Geometry”
04/10/2024
Tien Dang Huu
Machine Unlearning: Setting, Benchmark, Evaluation
04/04/2024
Naoya Inoue
Kick-off Meeting
03/04/2024
SHI, Yuting
GAO_Bowen
Irfan Robbani
NLP2024 poster & presentation practice #2
02/29/2024
Hakaze Cho
Naoya Inoue
NLP2024 poster & presentation practice #1
02/22/2024
Irfan Robbani
kenken
Practice talk for mid-term presentation
02/21/2024
Shotaro Kitamura
Practice talk for mid-term presentation
02/08/2024
SHI, Yuting
GAO_Bowen
Practice talk for MS thesis defense
02/01/2024
Naoya Inoue
Self-Awareness Improves Reliability of LM as KB
01/25/2024
Kim Pothong
Fallacies Detection Exploitation Toward Large Language Model
01/18/2024
kenken
Can Data Augmentation with Predicate Logic Extend the Predicate Logic Inference of LLM?
01/11/2024
Tien Dang Huu
Machine Unlearning: Similarity-based methods

2023

Date
Type
Presenters
Topic
12/21/2023
SAKAI Yoshihiro
In-Context Learning
12/12/2023
Akira Ishii
Identify Important Event in Narrative
12/05/2023
houjing
Various Probing on VLMs
11/28/2023
Irfan Robbani
Why My Argument has Fallacy
11/21/2023
Naoya Inoue
(Only Announcement)
11/14/2023
GAO_Bowen
Entailment Tree Construction: Step-by-Step Approach Using Large Language Models
11/07/2023
Shotaro Kitamura
Towards Making Better Implicit Multi-hop QA Benchmark
10/31/2023
SHI, Yuting
Non-end-to-end Evaluation Approaches for Visual Reasoning
10/17/2023
Hakaze Cho
A Comprehensive (maybe) Survey / Intro about ICL Theory
10/10/2023
Naoya Inoue
Kick-off Meeting
10/02/2023
JIN, Tao
The Rise and Potential of Large Language Model Based Agents: A Survey
09/05/2023
SHI, Yuting
GAO_Bowen
Practice Talk for Mid-term Presentation
08/07/2023
houjing
From traditional VL to MLLMs
07/24/2023
(Only Announcement)
07/18/2023
kenken
Controller of LLM for Multi-step Reasoning
07/03/2023
Irfan Robbani
Like Father, Like Son: Large Language Models Empower Smaller Language Models for Zero Shot Fallacy Detection
06/26/2023
Naoya Inoue
Let’s Watch Together: “How to Read/Write an International Conference Paper by Prof. Graham Neubig@CMU”
06/19/2023
SHI, Yuting
Towards Inductive Reasoning from Visual Information
06/14/2023
Shotaro Kitamura
Research Progress Report: Towards generating Japanese multi-hop QA Datasets
05/29/2023
(Only Announcement)
05/22/2023
GAO_Bowen
[Paper Reading] Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
05/15/2023
JIN, Tao
Self-Instruct: Aligning Language Models with Self-Generated Instructions
05/08/2023
Hoai Linh Luu
Improving Robustness of NLP Models by Explanation-based Feedback
04/12/2023
Naoya Inoue
Kick-off Meeting

©RebelsNLU at JAIST
