Sparseness:
The simplest, first. I originally found the sparseness calculation in Poo and Isaacson, 2009, where they calculated the sparseness of odor responses in piriform cortex. Sparseness simply measures how much of a stimulus space a neuron responds to, between 0 and 1 (1 being highly sparse). The equation is quite simple:
The sparseness equation:

$$S = \frac{1 - a}{1 - 1/n}, \qquad a = \frac{\left(\sum_{i=1}^{n} r_i / n\right)^2}{\sum_{i=1}^{n} r_i^2 / n}$$

where a is the measure from Rolls and Tovee, 1995, and r_i is the firing rate of the neuron to stimulus i, from the set of n stimuli; the normalization maps S onto 0 (dense) to 1 (maximally sparse).
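To make this concrete, here's a minimal Python sketch of the calculation, assuming the Vinje and Gallant normalization so that 1 means maximally sparse; the `rates` array is a hypothetical set of per-stimulus firing rates:

```python
# A minimal sketch of the sparseness calculation (assumptions: Vinje & Gallant
# normalization; `rates` holds one mean firing rate per stimulus).
import numpy as np

def sparseness(rates):
    """0 = dense (equal response to all stimuli); 1 = responds to one stimulus."""
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    a = (rates.sum() / n) ** 2 / (np.square(rates).sum() / n)
    return (1.0 - a) / (1.0 - 1.0 / n)

print(sparseness([10, 10, 10, 10]))  # 0.0: fires equally to everything
print(sparseness([40, 0, 0, 0]))     # 1.0: fires to a single stimulus
```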
A more interesting example comes from vision, from Vinje and Gallant. They recorded from visual cortex while presenting movies; the movies varied in size, with the smallest only stimulating the classical receptive field (CRF), while the largest movies covered an area 4x the CRF. For the small movies, the firing rate was moderate, and increased following some frames (below, left). For the large area movies, however, the firing rate was near zero except for specific frames.
Stimulation of larger areas increases sparseness of V1 neurons. Left: Recordings from macaque V1 during playback of a movie covering the CRF or an area 4x the CRF. The firing rate was higher during CRF stimulation, and sparser for 4xCRF stimulation. Right: Sparseness calculated for populations of neurons at different stimulus sizes relative to the CRF. As the stimulus area increased, so did the sparseness. From Vinje and Gallant, 2000.
Sparseness seems like a quick and dirty measure, but one of limited application to olfaction or taste. First, chemosensory stimulus spaces are generally small enough that you can simply quantify the percent of stimuli a neuron responds to. Second, olfactory responses are highly phasic (or tonic-phasic, as one guy at my poster at ISOT insisted), which makes firing rate a less useful metric. To adapt sparseness to olfaction, you would probably need to substitute a different measure of responsiveness, such as "difference in spikes from control breath."
ROC analysis:
ROC analysis tells you to what degree, and under what conditions, two responses are discriminable. I first came across it in Cury and Uchida, who have a nice supplemental figure showing how it's useful in olfactory bulb (OB) coding.
They recorded from the OB of rats while presenting various odors, and wanted to know: 1. whether a neuron fired differently during odor presentation than during pre-odor breaths; 2. when the firing was most different; and 3. what time window allowed the best discrimination. To do this, they compared the firing rates between control and odor breaths during defined windows of the breath (red and blue areas in panels A/B below). For each trial, they counted the number of spikes in the epoch (panel C), and then plotted the distributions of spike counts across trials (panel E).
Copied from Cury and Uchida:
"(A) PETH of an example M/T cell, aligned by the first odor inhalation onset. Black, odor; gray, blank. The red (1) and blue (2) shaded areas indicate two example analysisepochs. This neuron and these epochs are used in all subsequent panels.
(C) Raster plot of spike trains over multiple trials, aligned by the first odor inhalation onset. Odor trials (above) are separated from blank trials (below) by the horizontal black line.
(D) Response reliability (area under the ROC curve, auROC), calculated for varying epochs (bin size: 5 ms to 160 ms.; bin center: t = 0 to t = 160 ms), as in Figure 2D. auROC values are indicated using the color scale at right, with red and blue signifying increased and decreased spike counts, respectively. The black circles indicate the three example windows plotted in (B). Selection of the optimal epoch was restricted to occur within the 0 to 160 ms response window, the bounds of which are indicated by the diagonal dotted lines.
(E) Distribution of single trial spike counts for both odor (black) and blank (gray) for the same three example epochs.
(F) The corresponding ROC curves for the same three example epochs, comparing the hit rate to the false alarm rate as a discrimination threshold is slid across the distributions. The resulting auROC value is listed to the right."
Then they asked: if you had to classify a trial as "blank" or "odor" based on the number of spikes in that trial, how successful would various thresholds be? For example, in the top part of panel E, a threshold of 0.5 spikes would correctly classify most odor trials (a "hit"), while misclassifying only a few blank trials as odor ("false alarms"). By contrast, for the bottom part of panel E, no threshold could discriminate blank from odor.
For each threshold, you get a proportion of hits vs false alarms, and can plot this as the ROC curve (panel F). If two populations are easy to discriminate, you will get curves like those shown in blue or red; if the populations are hard to discriminate, you get a curve like that shown in black. You can collapse the ROC curve into a single measure of discriminability by integrating the area under the curve (auROC); values near 0 or 1 show strong discrimination for that epoch.
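To get a feel for the mechanics, here's a minimal Python sketch of the threshold-sliding procedure; the per-trial spike counts for odor and blank trials are made-up numbers, not data from the paper:

```python
# Slide a threshold across the two spike-count distributions, tallying hits
# (odor trials above threshold) and false alarms (blank trials above it),
# then integrate the resulting ROC curve to get the auROC.
import numpy as np

def roc_curve(odor_counts, blank_counts):
    thresholds = np.unique(np.concatenate([odor_counts, blank_counts]))
    hits = [(odor_counts > t).mean() for t in thresholds]
    fas = [(blank_counts > t).mean() for t in thresholds]
    # Prepend (1, 1): a threshold below every count calls all trials "odor".
    return np.array([1.0] + fas), np.array([1.0] + hits)

def auroc(odor_counts, blank_counts):
    fas, hits = roc_curve(odor_counts, blank_counts)
    return np.trapz(hits[::-1], fas[::-1])  # integrate with FA rate ascending

odor = np.array([3, 4, 2, 5, 4, 3])   # hypothetical spike counts per trial
blank = np.array([0, 1, 0, 2, 1, 0])
print(auroc(odor, blank))  # ~0.99: easy to discriminate
```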
So that's how you get the auROC for a specific epoch. You can then repeat the procedure for different epoch lengths and onsets (panel D). This panel shows that there are two good epochs for discriminating odor from blank, at ~60ms and ~140ms, and that the optimal epoch size is 60-80ms. In general, they found that excitatory responses could be found throughout the sniff, with epoch sizes of ~40-60ms; in contrast, inhibitory responses tended to occur between 80-120ms, and have epoch sizes of ~60-80ms.
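Sweeping the epochs is then just two nested loops. This sketch is my own guess at how you'd organize the data (lists of per-trial spike-time arrays; 5ms steps for the grid), not Cury and Uchida's actual code:

```python
# Recompute the auROC for every combination of bin center and bin size,
# as in panel D (bin size 5-160 ms, bin center 0-160 ms).
import numpy as np

def auroc(a, b):
    # auROC = P(odor count > blank count) + 0.5 * P(tie), over all trial pairs
    a, b = np.asarray(a), np.asarray(b)
    return (a[:, None] > b[None, :]).mean() + 0.5 * (a[:, None] == b[None, :]).mean()

def count_spikes(trials, start, stop):
    # trials: list of spike-time arrays (in seconds), one per trial
    return np.array([np.sum((t >= start) & (t < stop)) for t in trials])

def epoch_sweep(odor_trials, blank_trials,
                centers=np.arange(0.0, 0.165, 0.005),
                sizes=np.arange(0.005, 0.165, 0.005)):
    grid = np.full((len(sizes), len(centers)), np.nan)
    for i, size in enumerate(sizes):
        for j, center in enumerate(centers):
            odor = count_spikes(odor_trials, center - size / 2, center + size / 2)
            blank = count_spikes(blank_trials, center - size / 2, center + size / 2)
            grid[i, j] = auroc(odor, blank)
    return grid  # values near 0 or 1 mark discriminative epochs
```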
ROC analysis is kinda funky, so I recommend playing with this demonstration to get a better feel for ROC curves.
Entropy / information / bits!
In any case, the entropy of a firing rate can intuitively be understood as the unpredictability of the firing rate, and by inference how much information the firing rate carries. The relationship between unpredictability and information is easy to understand: a neuron that fires at exactly 10Hz in response to all stimuli conveys no information; in contrast, a neuron that fires at 10Hz to one stimulus, but not to others, carries some information; more subtly, a neuron that fires on average at 1Hz, but with bursts and silent periods depending on the stimulus, also carries information with each spike.
So how do you calculate the entropy of a spike train? (The best review of this I found was from Bhumbra and Dyball.) First, you bin the spike train into a histogram where each bin can contain at most one spike. This histogram has three features: the number of bins, the bin size, and the number of spikes it contains. The probability of a given spike train is defined by how often that pattern occurs out of all possible spike trains. If a neuron uses all possible spike trains with equal probability, the entropy is simply log2(# of possible spike trains). Since counting the possible spike trains involves factorials, which are unwieldy for large numbers of bins, this can be approximated (skipping all the algebra here) as log2(e/(M*dt)) bits per spike, where M is the mean firing rate and dt is the bin size.
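Here's a quick numerical check of that approximation in Python; the scenario (a 10Hz neuron, 1s of data, 1ms bins) and the function names are mine:

```python
# Exact vs. approximate entropy of a spike train that uses all possible
# patterns: log2(# of trains) vs. N_spikes * log2(e / (M * dt)).
import math

def entropy_exact(n_bins, n_spikes):
    # log2 of the number of distinct trains with n_spikes spikes in n_bins bins
    return math.log2(math.comb(n_bins, n_spikes))

def entropy_per_spike_approx(rate_hz, dt_s):
    # Stirling approximation: ~log2(e / (M * dt)) bits per spike
    return math.log2(math.e / (rate_hz * dt_s))

# 10 Hz neuron, 1 s of data, 1 ms bins -> 1000 bins, ~10 spikes
print(entropy_exact(1000, 10))                   # ~77.8 bits total
print(10 * entropy_per_spike_approx(10, 0.001))  # ~80.9 bits: close, and far
                                                 # easier for long recordings
```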
Of course, neurons don't use all possible spike trains, only a subset of them. In that case, you weight each spike train's information by the probability of that train, and sum:
$$H = -\sum_i p_i \log_2 p_i$$

The entropy of a set of spike trains: the probability p_i of spike train i, times the log of that probability, summed over all observed trains. From the Wikipedia entry for entropy.
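In code, a minimal "plug-in" estimate of this sum might look like the sketch below: chop binarized trials into fixed-length words, estimate each word's probability from its frequency, and sum. The word length and the simulated spike trains are arbitrary choices for illustration; with limited trials this estimate is biased, which is part of why the long recordings discussed next matter.

```python
# Direct (plug-in) entropy estimate over spike-train "words".
import numpy as np
from collections import Counter

def spike_train_entropy(binary_trains, word_len=8):
    """binary_trains: trials x bins array of 0s and 1s (one spike max per bin)."""
    words = Counter()
    for trial in binary_trains:
        for start in range(0, len(trial) - word_len + 1, word_len):
            words[tuple(trial[start:start + word_len])] += 1
    total = sum(words.values())
    probs = np.array([count / total for count in words.values()])
    return -np.sum(probs * np.log2(probs))  # bits per word

rng = np.random.default_rng(0)
trains = (rng.random((100, 64)) < 0.1).astype(int)  # ~10% spike chance per bin
print(spike_train_entropy(trains))  # entropy of the observed word distribution
```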
To get a good measurement of entropy, you need to record from a neuron for a long time to build up a large set of possible spike trains, preferably including many stimuli and trial repeats. This makes entropy ill-suited for olfaction, where the unit of measure is one breath (~320ms), and odor presentation can take ten seconds for odor loading and clearance. Vision, however, does not have these problems: you can play a movie to a neuron, yielding new stimuli at 30 frames per second, and repeat this hundreds of times.
One of the earliest, most cited entropy papers in vision is Nirenberg et al, 2001. They recorded from many neurons in mouse retina while presenting movies 300 times. Some sets of cells had correlated firing, which they quantified as the fraction of correlated spikes above chance, the excess correlated fraction (ECF). To see whether these correlated spikes were informative, they calculated the entropy of the neuron pairs' spike trains both including the correlated spikes and excluding them (or rather, treating the neurons as independent). They found that even for neuron pairs with high ECF, the correlated spikes contained <10% of the total information.
Retinal ganglion cells act independently. They calculated the information in neuron pairs' spike trains both including and excluding correlated spikes (y-axis), and compared this to the pairs' correlation (ECF). While correlated firing in higher-ECF pairs did carry some information, most information was independent of correlated firing. From Nirenberg et al, 2001.
wrt information, the situation in olfaction reminded me of birdsong. There, the behavioral unit is a 'motif' (a discrete building block of a song). I saw a talk by one of these authors a while ago:
Jeanne et al 2011
http://www.jneurosci.org/content/31/7/2595.long
They presented another (much cooler) information-based analysis when I saw them, but I don't see the paper anywhere so I guess it's not published yet. I guess if you're interested in non-vision uses of information theory, watch them.
Birdsong probably has it slightly easier, as songs are longer (>2s), and there are fewer songs you can present, so you can increase trial number. The Jeanne paper seems to use entropy (and mutual information, which will probably be in the next compendium once I can understand it) as one of many ways to characterize a response, rather than using entropy as a centerpiece.
When I think "obscure analyses in songbirds," I think Theunissen, and lo and behold he's got one paper using mutual information (not entropy):
http://onlinelibrary.wiley.com/doi/10.1002/dneu.20783/abstract