Alexander Huth
A lot of our current work revolves around how large language models can be used to study language processing in the brain. We've published a number of papers on this. We've also released a pretty great fMRI dataset.
In 2023 my lab published a paper demonstrating that the meaning of language (or visual stimuli, or just thoughts) can be read out from fMRI data as text. This was pretty exciting! The work was led by my then-PhD student Jerry Tang, who also made a video about this project. (It was also covered on CNN and the NYT.)
In 2016 I wrote a neat paper about how the meaning of language is represented in brain activity. I showed how models based on semantic properties of words can do surprisingly well at predicting fMRI responses to naturally spoken, narrative stories. Then I analyzed those models to determine which kinds of semantic properties are represented in which brain areas, creating detailed maps of semantic representation across the human cortex. It also looks like these maps are really consistent across subjects. To produce a group atlas from these data I developed a generative model of cortical maps. I also made a fancy web-based 3D viewer for this dataset.
Back in 2012 I wrote a paper about the cortical representation of visual semantic categories. I showed that pretty much all of the higher visual cortex is semantically selective, and argued that this representation is better understood as gradients of selectivity across the cortex than as distinct areas. I also made a video that explains the paper, and there's a nice FAQ on our lab website. I also made a nifty online viewer for that dataset.
Before moving (back) to UC Berkeley, I was an assistant / associate professor of Neuroscience and Computer Science at the University of Texas at Austin from 2017-2025.
I did my PhD and postdoc in Dr. Jack Gallant's laboratory through the Helen Wills Neuroscience Institute at UC Berkeley. Before that, I got both bachelor's and master's degrees in computation and neural systems (CNS) at Caltech.
During my master's I worked in Dr. Christof Koch's laboratory, where I studied visual saliency and decision making using eye-tracking and psychophysics. While I was an undergrad at Caltech I also did a SURF in the Koch lab, where I learned how to do fMRI from Dr. Melissa Saenz while studying how visual cortex reorganizes in the congenitally blind.
If you want to contact me, you can use email, which is a pretty popular method. I also tweet sometimes, but I'm not very good at it. Some of the code I write ends up on GitHub.