Example recordings using one Neuropixels probe, spanning three regions of the basal ganglia (striatum, GPe, and GPi). Each row shows the voltage on one of 384 recording channels. Channels are densely packed (hence the ‘pixels’ part of the name), and nearby channels thus show correlated voltages. The spikes of a given neuron have a distinct spatio-temporal ‘signature.’ For example, the magnified view in the inset reveals one GPe neuron (near the top) that spiked three times and another (near the bottom) that spiked four times.
(From left to right)
Hector Cho, Francisco Sacadura, Andrew Zimnik, Mark Churchland, Saurabh Vyas, Tala Fakhoury
Fig 1. Eight probes entering primary motor cortex. This custom system allows many probes to target a small region (~4 mm diameter).
The Grossman Center has always had a tripartite goal: to promote the development and adoption of techniques for recording many neurons at once, to promote the development and adoption of analysis techniques suitable for the resulting large-scale neural recordings, and to leverage these technical advances to make scientific progress in understanding how brains compute and produce intelligent behavior. As noted last year, the recent development of primate-specific Neuropixels probes (an endeavor that spanned many groups across multiple countries) has provided an excellent opportunity to advance all three goals. Many of our joint efforts over the last year were, in one way or another, linked to opportunities afforded by this technological advance.
As Neuropixels probes become widely adopted (see here for an example in rodents), an obvious goal is to increase the number of probes that can be used simultaneously, both to record from many areas at once and to record very large numbers of neurons (on the order of many thousands) from a single area. We have been expanding the limits of this approach in primates, as illustrated in Figure 1. A present limitation is the bulky nature of existing micro-drive systems, which places physical limits on how closely the probes can be clustered. Another limitation is the need for stereotax-free ways of targeting the probes to the brain areas of interest, some of which are small and/or deep. Our current solutions to these problems are custom, time-consuming, and not easily shared with the field as a whole. Thus, a current goal is a standardized solution that will allow any experimenter to target multiple probes to their areas of interest. This goal is being pursued with Tanya Tabachnik, the Director of Advanced Instrumentation at the Zuckerman Institute.
As the number of simultaneously recordable neurons increases (from a few neurons three decades ago, to ~150 two decades ago, to ~1000 now, and possibly ~3000 within a year or two), various operations that used to be manual must become automated. For example, experimenters commonly achieved recording stability by watching the waveform of one or more neurons on an oscilloscope or computer screen while manually adjusting the electrode depth in ~10 micron increments to maintain waveform stability. Waveforms were then “sorted” by shape and size into groups corresponding to individual neurons. Sorting was either fully manual or manually supervised and curated. Both of these goals—stability and accurate sorting—become difficult once the number of recorded neurons is large, and they become impossible at the scales we are presently pursuing. The Paninski laboratory has been at the forefront of automated stabilization and spike-sorting techniques that will allow the field to fully leverage the ability to record from very large numbers of neurons.
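The sorting step can be illustrated with a toy example. The sketch below is a deliberate simplification, not the Paninski laboratory's pipeline: all sizes, templates, and parameters are hypothetical. It clusters synthetic spike waveforms by projecting them onto their top two principal components and running a minimal k-means, which is the classic starting point that modern automated sorters build on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical units with distinct waveform shapes (templates).
t = np.linspace(0, 2 * np.pi, 32)
templates = np.stack([np.sin(t) * np.exp(-t / 3),                 # unit A
                      -1.5 * np.sin(2 * t) * np.exp(-t / 2)])     # unit B

# Simulate 200 noisy spikes drawn from the two units.
labels_true = rng.integers(0, 2, size=200)
spikes = templates[labels_true] + 0.1 * rng.standard_normal((200, 32))

# Reduce dimensionality: project each waveform onto the top two
# principal components (via SVD of the mean-centered data).
X = spikes - spikes.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
features = X @ Vt[:2].T

# Minimal k-means (k=2) on the 2-D features.
centers = features[rng.choice(len(features), 2, replace=False)]
for _ in range(20):
    assign = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
    new_centers = []
    for k in range(2):
        pts = features[assign == k]
        new_centers.append(pts.mean(axis=0) if len(pts) else centers[k])
    centers = np.stack(new_centers)

# The recovered clusters should match the true units (up to a label swap).
agreement = max((assign == labels_true).mean(), (assign != labels_true).mean())
print(f"cluster/unit agreement: {agreement:.2f}")
```

In this low-noise regime the two units form well-separated blobs in PC space and the clustering recovers them cleanly; real recordings add drift, overlapping spikes, and thousands of units, which is precisely why the automated methods described above are needed.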
An additional challenge is the need for analyses that allow experimenters to make scientific sense of very large-scale recordings. This has always been a central goal of the Grossman Center, and the development of analysis tools that yield new scientific insights has always been a strength of the Center. One recent effort, led by Joshua Glaser (a postdoc with Professors Paninski and Cunningham, and now faculty at the University of Chicago), is particularly worth noting. For about a decade, it has been appreciated that a given neural population (e.g., motor cortex, or dorsolateral prefrontal cortex) can perform multiple distinct computations, and does so using distinct neural dimensions. Effectively, the principal components of neural activity are completely different across computations. The principal components that capture activity related to one computation (e.g., movement preparation) thus ignore activity related to a different computation (e.g., movement execution). This is incredibly useful to the experimenter, as it makes it possible to isolate distinct computations using only linear algebra. This discovery was made by a handful of groups (with Elsayed et al. 2016, a Grossman Center study, being the canonical early example) and has become an increasingly important analysis tool across our field. An obvious challenge is finding the relevant neural dimensions; this challenge is particularly acute in the many cases where one may be uncertain what the underlying computations are, or even how many there might be. In a collaboration involving all our laboratories, Joshua has developed SCA (Sparse Component Analysis), a novel PCA-like approach that incorporates a sparsity constraint and does a remarkable job of parsing neural activity into interpretable dimensions without the need for user supervision.
SCA has now been used across multiple datasets spanning tasks, brain areas and species, and has proved remarkably adept at identifying dimensions that parse different computations or sub-computations.
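The core idea can be sketched in a few lines. The toy below is our simplification of the concept, not the published SCA algorithm: all sizes, names, and the penalty weight are illustrative. It fits population activity X with a model X ≈ ZW, penalizing the L1 norm of the latent time courses Z while constraining the rows of W to be orthonormal, by alternating a soft-thresholding step for Z with an orthogonal-Procrustes step for W. Because the two simulated "computations" occupy non-overlapping epochs, the sparsity penalty favors the rotation in which each latent captures one computation, whereas plain PCA is free to mix them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population activity: two latent "computations" with
# non-overlapping time courses (e.g., preparation then execution),
# mixed into 30 hypothetical neurons.
T, N, K = 200, 30, 2
Z_true = np.zeros((T, K))
Z_true[20:80, 0] = np.sin(np.linspace(0, np.pi, 60))     # latent 1: early epoch
Z_true[120:180, 1] = np.sin(np.linspace(0, np.pi, 60))   # latent 2: late epoch
W_true = np.linalg.qr(rng.standard_normal((N, K)))[0].T  # orthonormal mixing
X = Z_true @ W_true + 0.03 * rng.standard_normal((T, N))

# Minimize ||X - Z W||^2 + lam * ||Z||_1 with orthonormal rows of W,
# alternating (1) soft-thresholding for Z, (2) Procrustes for W.
lam = 0.2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:K]                                        # initialize from PCA
for _ in range(200):
    A = X @ W.T
    Z = np.sign(A) * np.maximum(np.abs(A) - lam / 2, 0.0)   # prox of L1
    U, _, Vt2 = np.linalg.svd(Z.T @ X, full_matrices=False)
    W = U @ Vt2                                   # best orthonormal W given Z

sparsity = (Z == 0).mean()
rel_err = np.linalg.norm(X - Z @ W) / np.linalg.norm(X)
# Fraction of each recovered latent's energy falling in the early epoch:
early = (Z[:100] ** 2).sum(axis=0) / ((Z ** 2).sum(axis=0) + 1e-12)
print(f"sparsity={sparsity:.2f}  rel_err={rel_err:.2f}  early-epoch energy={early}")
```

The soft-threshold update is exact here because orthonormal rows of W make the reconstruction term separable in the entries of Z; the Procrustes update is the standard closed-form solution for the best orthonormal map given Z. The real method, applied to data rather than toys, recovers interpretable dimensions at scale.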
Moving forward, an obvious goal is to leverage the advances described above to perform systems neuroscience in a new way, one that does not depend on recording many repetitions of the same behavior (or same decision, or same thought process) across many days. We are approaching a scenario where the number of simultaneously recorded neurons gives us a view of activity with sufficient signal-to-noise that there is little need (other than as a sanity check) to repeat the same cognitive events twice. This will allow us to study intelligent behavior in the way one always wished to: as animals find novel solutions to problems and figure out what to do next in a world where the same internal and external events rarely repeat themselves.
[Photo: the Grossman Center team]
[Figure 1]
[Figure 2]
[Figure 3]
Mark M. Churchland
Liam Paninski
LETTER FROM THE DIRECTORS
Grossman Center for the Statistics of Mind
We Move Science Forward
©2023 The Grossman Center