Name:
LLM Grounding Analysis
Description:
LLM grounding vs. factual recall
Professor — Lab:
Robert West — Data Science Lab

Layman description:
Large language models (LLMs) can pick up new information from the text they are given and apply it. However, it is unclear how they balance this new context against the knowledge acquired during training. This research analyzes how LLMs manage this conflict using a new counterfactual dataset.
Technical description:
The study investigates LLMs using Fakepedia, a dataset of counterfactual statements that contradict the facts a model has memorized. Through Masked Grouped Causal Tracing (MGCT), the research deciphers the models' grounding mechanisms by contrasting neural activation patterns. The findings help explain how grounding and factual recall function together within LLMs.
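
To make the setup concrete, below is a minimal illustrative sketch, not the paper's MGCT implementation: it builds a Fakepedia-style counterfactual prompt, compares the model's preference for the in-context ("grounded") answer against its memorized ("parametric") answer, and then zero-masks a group of MLP activations to probe their causal contribution. The choice of GPT-2, the example fact, and the layer grouping are all assumptions made for illustration.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Illustrative sketch only; the actual MGCT procedure and groupings differ.
device = "cuda" if torch.cuda.is_available() else "cpu"
tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

# Counterfactual context (Fakepedia-style) contradicting parametric knowledge.
context = "The Eiffel Tower is located in Rome."
query = "The Eiffel Tower is located in the city of"
prompt = f"{context} {query}"

grounded = " Rome"      # answer supported by the in-context statement
parametric = " Paris"   # answer from pre-training knowledge

def answer_logprobs(prompt_text):
    """Log-probability of each candidate answer as the next token."""
    ids = tok(prompt_text, return_tensors="pt").to(device)
    with torch.no_grad():
        logits = model(**ids).logits[0, -1]
    logp = torch.log_softmax(logits, dim=-1)
    return {a: logp[tok.encode(a)[0]].item() for a in (grounded, parametric)}

print("baseline:", answer_logprobs(prompt))

# MGCT-flavoured probe (simplified): zero out the MLP contribution of a
# group of layers and observe how the answer probabilities shift.
masked_layers = range(4, 8)  # an assumed group, for illustration only
hooks = [
    model.transformer.h[i].mlp.register_forward_hook(
        lambda module, inp, out: torch.zeros_like(out)
    )
    for i in masked_layers
]
print("masked:  ", answer_logprobs(prompt))
for h in hooks:
    h.remove()

A run of this sketch shows whether masking that group of MLP activations pushes the model from the grounded answer back toward its memorized one, which is the kind of contrast an activation-tracing analysis relies on.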
Project status:
active — entered showcase: 2024-05-03 — entry updated: 2024-05-03

Source code:
Lab GitHub — last commit: 2024-02-19
Code quality:
This project has not yet been evaluated by the C4DT Factory team. We will be happy to evaluate it upon request.
Project type:
Experiments
Programming language:
Python
License:
Apache-2.0