
EP70 - Instruction-tuning Aligns LLMs to the Human Brain


Download the paper - Read the paper on Hugging Face

Charlie: Welcome to Paper Brief, Episode 70. I’m Charlie, here today with Clio, an AI expert. We’re delving into how instruction-tuning aligns LLMs to the human brain. So Clio, how exactly does instruction-tuning bring LLMs closer to human thinking?

Clio: Well, instruction-tuning fine-tunes large language models, or LLMs, on task-specific instructions, which boosts their response quality and makes them handle natural-language queries in a more human-like way.

Charlie: Interesting, what does the research say about the effect of this tuning specifically on brain activity resemblance?

Clio: The paper shows that instruction-tuned LLMs have representations that more closely match patterns of human brain activity. Concretely, they report a 6.2% average increase in what's termed brain alignment, that is, how well a model's internal representations line up with recordings of human brains processing language.
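
To make "brain alignment" concrete, here is a minimal sketch of the kind of linear-predictivity measure commonly used for this: fit a ridge regression from a model's hidden states to recorded brain responses and score held-out predictions with Pearson correlation. The function and variable names below are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_alignment(llm_features, brain_responses, alpha=1.0, n_splits=5):
    """Cross-validated linear predictivity of brain responses from LLM features.

    llm_features:    (n_stimuli, n_hidden) hidden states for each stimulus
    brain_responses: (n_stimuli, n_voxels) neural responses to the same stimuli
    Returns the mean held-out Pearson correlation across voxels and folds.
    """
    fold_scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(llm_features):
        reg = Ridge(alpha=alpha).fit(llm_features[train_idx], brain_responses[train_idx])
        preds = reg.predict(llm_features[test_idx])
        # Score each voxel by correlating predicted and observed responses.
        voxel_r = [pearsonr(preds[:, v], brain_responses[test_idx, v])[0]
                   for v in range(brain_responses.shape[1])]
        fold_scores.append(np.nanmean(voxel_r))
    return float(np.mean(fold_scores))
```

On a measure like this, an instruction-tuned model "aligns better" simply if its hidden states predict the neural recordings more accurately than the base model's do.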

Charlie: Does this mean bigger models are better at mimicking human brain processes?

Clio: Precisely! In fact, both a model's size and its performance on tasks requiring world knowledge are strongly correlated with brain alignment. The larger the LLM and the better its grasp of world knowledge, the more its internal representations resemble human brain activity.

Charlie: What about actual human behavior? Does instruction-tuning help models behave more like us?

Clio: This is where it gets tricky. Instruction-tuning doesn't produce the same gains in behavioral alignment: human reading behavior, like where we slow down or struggle with particular words, isn't predicted any better by the tuned models.
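
Behavioral alignment, by contrast, is usually assessed by checking whether a model's word-by-word difficulty, its surprisal, tracks human reading behavior such as per-word reading times. A rough sketch, assuming a Hugging Face causal LM and a hypothetical reading_times array from an eye-tracking or self-paced-reading corpus:

```python
import numpy as np
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

def token_surprisals(text, model_name="gpt2"):
    """Per-token surprisal (-log2 p) under a causal LM; a stand-in for the
    behavioral measure, not the paper's exact pipeline."""
    tok = AutoTokenizer.from_pretrained(model_name)
    lm = AutoModelForCausalLM.from_pretrained(model_name).eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(lm(ids).logits[0, :-1], dim=-1)
    # Log-probability the model assigns to each next token in the text.
    next_lp = log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
    return (-next_lp / np.log(2)).tolist()

# Behavioral alignment sketch: correlate surprisal with human reading times.
# surprisals = token_surprisals("The old man the boats.")
# rho, _ = spearmanr(surprisals, reading_times)  # reading_times: hypothetical data
```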

Charlie: Do these alignments impact the future of AI and neuroscience?

Clio: Definitely. This link between LLM representations and human brain activity could guide the development of more capable AI. And from a neuroscience perspective, it helps us understand how artificial systems can come to mirror aspects of human cognition.

Charlie: Fascinating insights, Clio. That wraps up today’s episode. Thanks for tuning in, and we’ll catch you next time on Paper Brief.

Clio: Thanks, Charlie. It was a pleasure discussing these cutting-edge topics with you. See you next time!