"But no matter. I had my little revenge in due time. A man from Pasadena told me one day that Mrs. Maximovich née Zborovski had died in childbirth around 1945; the couple had somehow got over to California and had been used there, for an excellent salary, in a year-long experiment conducted by a distinguished American ethnologist. The experiment dealt with human and racial reactions to a diet of bananas and dates in a constant position on all fours. My informant, a doctor, swore he had seen with his own eyes obese Valechka and her colonel, by then gray-haired and also quite corpulent, diligently crawling about the well-swept floors of a brightly lit set of rooms (fruit in one, water in another, mats in a third and so on) in the company of several other hired quadrupeds, selected from indigent and helpless groups. I tried to find the results of these tests in the Review of Anthropology; but they appear not to have been published yet. These scientific products take of course some time to fructuate. I hope they will be illustrated with photographs when they do get printed, although it is not very likely that a prison library will harbor such erudite works."-Lolita, Chapter 6
Pages
▼
Saturday, July 30, 2011
I Don't Think the Ethics Committee Would Approve That
Monday, July 25, 2011
Walk Along the Paper Trail: Satiating Trail Mix
In the last trek we took, I wrote about two seminal papers that describe taste coding in gustatory cortex. Today I'm going to cover a follow-up paper that describes satiety's effects on gustatory coding.
Let me fill you up
The neural systems involved in satiety, food intake, and body weight are incredibly intricate. First, there are endocrine responses: after food intake, leptin and insulin are released, blood sugar increases, and endocannabinoid levels drop. Then, as you get hungry again, leptin, insulin, and blood glucose levels drop while endocannabinoid levels rise.
These endocrine molecules bind to a variety of receptors on the tongue, and throughout the brain, including the hypothalamus. On the tongue, leptin receptors are expressed on sweet taste bud cells, and leptin binding causes a decrease in firing, presumably decreasing perceived sweetness. In contrast, cannabinoid receptors on taste bud cells can sensitize the taste response, increasing perceived sweetness.
The other well described pathway for endocrine signaling is via the hypothalamus. I'll probably delve into this later, but in brief, the hypothalamus has multiple cell populations which express endocrine receptors, and can directly regulate feeding behaviour. For example, Aponte and Sternson recently showed that if you stimulate AGRP-expressing neurons in the hypothalamus, you can induce feeding behaviour; while stimulation of POMC-expressing neurons over 24h reduces food intake.
Given all these ways that satiety can be regulated, what happens in gustatory cortex?
CNS coding of taste
To see how satiety influences the CNS, you first need to establish a metric for satiety. de Araujo and colleagues chronically implanted rats with electrodes, and then gave them limited access to a sucrose solution. The sucrose was behind a gate that would only stay open for five seconds after a lick. Two seconds after closing, the gate would reopen, allowing the rat to lick again. They then measured the inter-trial-interval (ITI) to quantify how motivated the rats were to get some sugar (panel A).
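To make the metric concrete, here's a minimal Python sketch of the ITI calculation (the timestamps and the 5 s cutoff are made up for illustration; the paper derived its hunger/satiety phases from the ITI distribution itself):

```python
import numpy as np

# Hypothetical timestamps (s): gate reopenings and the first lick after each.
trial_starts = np.array([0.0, 7.2, 14.5, 22.1, 40.3, 75.8, 130.2, 210.9])
first_licks = trial_starts + np.array([0.2, 0.3, 0.4, 1.1, 3.5, 8.2, 15.0, 30.1])

# ITI = latency from gate reopening to the next lick; long ITIs = low motivation.
itis = first_licks - trial_starts

# Flag "sated" trials with an arbitrary 5 s cutoff (illustration only).
sated = itis > 5.0
for iti, s in zip(itis, sated):
    print(f"ITI = {iti:5.1f} s  sated: {s}")
```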
I previously covered a mini-review on mouse licking, which showed that mice lick in bouts with a regular frequency. Here they also analyzed the microstructure of the licking bouts during hunger and satiety, and found the rats licked at ~6.5 Hz in both phases.
Once they categorized well-defined hunger and satiety phases, they then characterized the neural response in four brain areas: orbitofrontal cortex (OFC), gustatory cortex (INS; insular cortex), the lateral hypothalamus (LH), and the amygdala (AM). As you may remember from last time, 30-40% of neurons in gustatory cortex encode taste information; I'm not well read on taste responses in other areas, but I think the percentage is similar. Since they are applying a single tastant, sucrose, one might expect the percentage to be lower here.
Of the 625 neurons recorded in all the areas, they found 101 (16%) were licking related, while 152 (24%) responded to taste delivery. 179 (29%) neurons had firing changes related to satiety, but none of these were licking-related cells (panel F). Of these, most (104/179) responded simply by changing their baseline firing rate (panels D & E). For the other 75, their responses to sucrose were altered, with both increases and decreases in firing (panels A-C). Satiety-related responses were found in all four brain areas measured, but some had more responsive neurons than others (LH > AM > INS > OFC).
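The paper's statistics are more involved, but as a sketch of what a "satiety-related baseline change" means operationally, here's a simple permutation test on simulated firing rates (entirely my construction, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated baseline firing rates (Hz) in 1 s bins for one neuron.
hungry = rng.poisson(8, size=200).astype(float)   # hungry epoch
sated = rng.poisson(5, size=200).astype(float)    # sated epoch: lower baseline

observed = hungry.mean() - sated.mean()

# Permutation test: shuffle epoch labels, rebuild the rate difference.
pooled = np.concatenate([hungry, sated])
n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    rng.shuffle(pooled)
    null[i] = pooled[:200].mean() - pooled[200:].mean()

p = (np.abs(null) >= abs(observed)).mean()
print(f"baseline rate change = {observed:.2f} Hz, p = {p:.4f}")
```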
A few caveats before the results. First, I wish they had considered temporal dynamics, since taste responses could have complex temporal profiles, including periods of inhibition and excitation in the same response. Second, I wish they had used more than one tastant, given that there are leptin receptors on sweet taste bud cells. Using other tastants would help differentiate between endocrine effects on the tongue versus effects via the hypothalamus or other areas. Finally, I would have liked to have seen a more sophisticated population analysis, even just looking at cross-correlation within ensembles.
Given those caveats, there are a lot of interesting results. First, there is the most common modulation, changes in baseline firing rate. This could influence coding in a few different ways, changing the gain or dynamic range of firing, or changing the potential for synchrony between neurons. That there are more modulations of baseline firing than taste responses seems significant.
Second, the changes in firing rates of these neurons are quite long-lasting (see panels C & D in the last figure), and can continue beyond the end of satiated behaviour. Satiety is a long-term process, changing gradually over hours, so it makes sense that the neurons' behaviour would also be long term. Perhaps mice's perception of hunger and satiety is faster, given their metabolism.
Finally, it was interesting to see there were satiety changes in all areas (it would have been nice to see a table with a complete breakdown). The changes in hypothalamus make sense given its involvement in feeding, and orbitofrontal cortex makes sense given its executive function. But there were also changes in the amygdala and gustatory cortex. This could be due to common processes influencing all areas (e.g. reduction in taste receptor sensitivity), or could simply be indicative of how complicated body weight and food intake regulation is.
de Araujo IE, Gutierrez R, Oliveira-Maia AJ, Pereira A Jr, Nicolelis MA, & Simon SA (2006). Neural ensemble coding of satiety states. Neuron, 51 (4), 483-94 PMID: 16908413
Saturday, July 23, 2011
Paper Trail Day Trip: Mouse Lick Throughs
Last week at lab meeting, I found myself having the absurd discussion of how to best train a mouse to lick a water spout. Mice don't have many ways to talk to us dumb humans, and head-fixed mice have even fewer. The best way we have is licking.
After lab meeting, I did some Google Scholaring, and found a great little article that quantified mouse lick rates. In two mouse strains! Comparative mouse licking ethology! It doesn't merit a full blog post, but I wanted to highlight the main findings for all you other mouse lickers out there.
They used a lickometer (pardon the technical jargon) to measure how often C57 and DBA/2J mice licked. They found mice licked in bouts of 1-20s, about once a minute.
Mouse licking. A. Mice lick in bursts. B. Individual bursts have regular licking. C. Inter-lick interval during bursts. From Boughter et al. 2007. |
As you can see in the inter-lick interval (ILI) histogram above (panel C), licking is highly regular, but mice occasionally miss a lick, either due to lickometer error or actual pauses in licking, which produces a second peak at double the ILI; there is a much smaller third harmonic. During the bouts, C57 mice lick at 8 Hz, while DBA mice lick at 10 Hz.
C57BL/6J mice have more licking bouts than DBA/2J mice, but lick more slowly during them. From Boughter et al. 2007. |
Since C57 mice lick at a slower rate than DBA mice, they compensate by having more licking bursts (compare black and white bars). The average lick volume for both strains was 1.2 µL.
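For any fellow mouse lickers who want to run this on their own data, here's a minimal sketch of the ILI/bout analysis (simulated timestamps; the 1 s bout-break threshold is my assumption, not the paper's):

```python
import numpy as np

# Hypothetical lick timestamps (s) from a lickometer.
licks = np.sort(np.concatenate([
    np.arange(0, 3, 0.125),      # an 8 Hz bout (C57-like)
    np.arange(60, 62, 0.125),    # another bout a minute later
]))

ilis = np.diff(licks)

# Within-bout lick rate: invert the median of the short intervals.
within = ilis[ilis < 0.5]
print(f"lick rate ~ {1 / np.median(within):.1f} Hz")

# Segment bouts: a new bout starts wherever the ILI exceeds 1 s.
bout_starts = np.flatnonzero(ilis > 1.0) + 1
bouts = np.split(licks, bout_starts)
for b in bouts:
    print(f"bout: {b[0]:.1f}-{b[-1]:.1f} s, {len(b)} licks")
```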
I love that papers like this exist, because they're kinda handy, and kinda absurd. Another paper along these lines is, "Distribution of serotonin immunoreactivity in the main olfactory bulb of the Mongolian gerbil." Just don't tell Sarah Palin.
Boughter J Jr, Baird JP, Bryant J, St John S, & Heck D (2007). C57BL/6J and DBA/2J mice vary in lick rate and ingestive microstructure. Genes, Brain and Behavior, 6, 619-627.
Thursday, July 21, 2011
Spines Only Grow Once
Continuing my series of publishing extra data from my graduate work (probably only 1-2 more posts left), today I'm going to talk about my favourite experiment I ever did.
Spine size is a proxy for memory
The standard cellular model for learning and memory is that memories are stored in the synapses between neurons, and that learning changes the strength of these synapses. While this model makes sense, no one has actually been able to measure the strength of a synapse while an animal learns. However, you can use spine size as a proxy for synaptic strength: among other things, bigger spines have more AMPA receptors and stronger synaptic currents. If you image spines while animals are learning you can see all sorts of changes in spine number, and watch spines be created and destroyed.
Since spine size is correlated with synaptic strength, and synaptic strengths change, Haruo Kasai theorized that a spine's size actually represents the stimulation history of a synapse. That is, a small spine represents a synapse that is either newly formed, or has been depotentiated; and a large spine represents a synapse that has been repeatedly potentiated. There are lots of cool questions you can ask if this is true, like whether there are a discrete number of spine sizes, or if spine size is graded; and whether there is a continuous, random process of spine shrinkage that allows us to form new memories.
Double uncaging, OMG
If a spine's size can represent its stimulus history, this implies that a spine/synapse can repeatedly change size. Some people have tried to test this using a minimal stimulation technique, but because they could not identify the synapse they were recording, the results are not 100% conclusive. Another group used glutamate uncaging and found that synaptic strength changed in a step-wise fashion, but did not stimulate twice.
Two-photon glutamate uncaging allows you to address this question. You can measure a spine's size (i.e. synaptic strength), and stimulate it with glutamate to cause an increase in spine size.
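Spine "size" in these experiments means fluorescence. As a sketch of the kind of quantification involved, here's a toy version on a synthetic image; the ROIs are hypothetical, and real analyses also normalize to the dendritic shaft and correct for drift:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-photon image: noisy background plus a bright spine head.
img = rng.poisson(10, size=(64, 64)).astype(float)
img[30:36, 30:36] += 100.0  # the spine

def spine_brightness(image, roi, bg):
    """Background-subtracted integrated fluorescence in a spine ROI."""
    r0, r1, c0, c1 = roi
    b0, b1, b2, b3 = bg
    background = image[b0:b1, b2:b3].mean()
    return (image[r0:r1, c0:c1] - background).sum()

before = spine_brightness(img, (28, 38, 28, 38), (0, 10, 0, 10))
img[30:36, 30:36] *= 1.8    # mimic uncaging-induced growth
after = spine_brightness(img, (28, 38, 28, 38), (0, 10, 0, 10))
print(f"spine size change: {100 * (after / before - 1):.0f}%")
```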
When I stimulated a spine a second time, 15 minutes after the first stimulation, the second stimulation produced no obvious additional structural plasticity. (I've posted the data on FigShare.)
So that's at 15 minutes; what about longer time intervals? I repeated the experiment with a 60 minute interval, and saw the same basic result: following the second stimulation, there was no obvious increase in structural plasticity, for both the transient and sustained phases (panel C).
So how do I interpret these results? First, I think this shows that structural plasticity has a refractory period; that is, once a synapse changes strength, it is stable, and cannot be changed again for a while. How long this refractory period lasts is a great question, and could be a limiting factor in memory formation. I tried stimulating twice with an interval of 24 hours, but the slices got contaminated by bacteria.
A second interpretation is that structural plasticity is saturable. That is, the capacity for change has a limit at any given time point. Note that this does not really address the question of whether spine sizes are distributed over a continuous space, or discrete sizes.
What I really love about this experiment is how simple it is, and how many different directions you can go from here. And it was all enabled by new technology.
Monday, July 18, 2011
We Are All Obsolete
I've tried to keep this blog focused on academic science, but I've got an idea pinging around my head. I know it's not original (for one, my cousin mentioned it in a car ten years ago), but here it is: we are all obsolete. Every "successful" person today - whether they are a musician, scientist, programmer, or athlete - is going to be surpassed in our lifetime.
Historically, this is obvious. In athletics, records constantly fall. Average IQ scores go up generation after generation. Groundbreaking papers become trivially reproducible.
Like dying, obsolescence happens slowly, so we can ignore it in our daily life. But it catches up to all of us.
There are a lot of factors pushing us towards obsolescence. We all lose intelligence as we age. There is the glacial force of natural selection (if that still works in an age of medicine and social safety nets). Today I'm going to focus on two factors that feed into this: the skyrocketing of the effective population, and the punctuated improvement of education.
Effective Population
I've had a fortunate education. I started at a Montessori elementary school, where play time motivated me to work hard. I went to a private high school that let me go to college a year early. I made up my own major, computational neuroscience, at Case Western. At Duke I was the first graduate student in a lab that eventually became a Howard Hughes lab. And now I'm in Geneva with free rein to do taste research in an effectively olfactory lab.
I got lucky. Lucky I was effectively an only child; that my parents were educated and valued education; that I generally had good teachers along the way. I'd guess only one in twenty Americans were as lucky as I was, but that could just be hubris.
When you think about the demographics of the world, it's easy to say that the US has 300 million people with the same opportunities, but that's not really the case. Some people estimate 25% of US children are in poverty. Take the complement of that number, and only about 225 million people in the US have ample opportunities; this is the US's effective population.
I got the idea of effective population from Information Processing. The idea is most applicable to a country like China, which has 1.3 billion people, but only about 300 million of them are able to compete in the global marketplace. That is, only 300 million have the nutrition, education, and financial stability to go to college, get educated, and try to create something in this world. The rest suffer from malnutrition, families that need the income, or simply from a lack of teachers necessary to educate a billion people.
If China's effective population is only 300 million, what about the rest of the world? I already estimated that the US's effective population is around 225 million. Rather than type this out, I'll just estimate the effective population of the world (gotta love a table that intimates billions of people don't exist; all of these numbers are pulled out of my ass. For example, how do I estimate Europe, which combines the well-developed West, and the still developing East?):
Country/Continent   | Population  | Effective percentage | Effective population
USA                 | 300 million | 75%                  | 225 million
China               | 1.3 billion | 25%                  | 300 million
India               | 1.2 billion | 10%                  | 120 million
Latin America       | 600 million | 25%                  | 150 million
Africa              | 1.4 billion | 10%                  | 140 million
Europe              | 700 million | 60%                  | 420 million
Asia ex-China/India | 1.5 billion | 10%                  | 150 million
In total, about 1.5 billion, give or take a few hundred million. And this number is always going up.
A few pundits have made waves recently pointing this out (Hot, Flat, and Crowded; The Post-American World). As a scientist I find this both scary and exhilarating: the competition is going to get MUCH tougher, and hopefully the achievements will as well. But unfortunately, it means my effective place in the world will go down.
Educational Improvement
Some systems are so vast and hard to measure accurately that it's easy for anyone to have an opinion on how they should be run: health care; taxation; and for this post, education. Everyone has an idea of how the education system should be run: school funding should be ample, teachers should be held accountable, parents should read to their children, and the students themselves need to be measured (but we shouldn't teach to the test). We should teach people how to work together in groups, but not ignore basic skills. The subjects should include the three R's, but also newer things like psychology and computer programming. In the end, most people imprint on their own education, and have ideas about what did and did not work in theirs.
I have no idea how to improve education. But I do know the way we educate people now is vestigial and will be improved upon.
Right now, most education treats students like cogs in a factory (I generally sneer at those RSA videos as middlebrow, but holy cow that drawing struck a chord). We group people in classes because that's all we could do a century ago, if we wanted to educate as many people as possible. We continue doing so due to the inertia of institutions. And educational opportunities are almost non-existent for lower-class people, both in the US and around the world.
As I said, I don't know how to improve education, but there are lots of people trying different things. For example a multitude of individualized education programs are sprouting up (I am biased in favour of this, having started in Montessori school). There's the School of One in Brooklyn (more press here). The Gates Foundation is trying a lot of different models, supporting guys like Salman Khan. If one of these works, we can copy the model and disseminate it.
At the top of the post, I mentioned I had a good education, and that only one in twenty Americans might have had access to something like it. But the world is getting wealthier all the time, increasing the number of people who will get educated. And the education they're going to get continues to improve. It's easy to imagine that thirty years from now, when I'm sixty, there will be a whole new generation of scientists, from around the world, who have a better education than me. I'm going to have to pit my ideas against theirs, hoping my experience can compensate. And eventually, I'm not even going to be proven wrong; I'm not going to be able to compete at all.
Monday, July 11, 2011
A Walk Along the Paper Trail: Katzogenesis
While the last few walks have covered taste receptors, I'm more interested in the central representation of taste. When you taste something, the information is relayed from the taste receptors by three cranial nerves to the brainstem (NST; nucleus of the solitary tract), then to the thalamus (VPMpc), and from there to gustatory cortex (GC). The NST also projects to the amygdala and lateral hypothalamus, sending reward and food intake information.
There aren't a lot of labs that study taste coding in GC, but one of the best labs is Donald Katz's at Brandeis. He's done some interesting work on ensemble representations of taste, but today I will cover his neuroscience origin story. As a post-doc he worked with Sid Simon at Duke, and put out two nice papers about taste coding in GC, which I will review here.
GC neurons respond in three phases
In the first paper, they simply recorded from GC (and oral SC) neurons in awake rats while they licked four basic tastants: sweet, sour, salty, and bitter. When they calculated the taste responses over the 2.5s following a lick, 14% of the neurons responded to at least one taste, which was in line with previous reports. However, if they instead binned the responses into 500ms chunks, they found that the number of responsive neurons increased to 33%. The dynamics of the responses were different for each neuron, with some changing their "preferred tastants" in each time bin.
The responses of two neurons in 500 ms bins. Neuron 1 has a similar, scaled response to each tastant. Neuron 2 has different dynamics for each tastant. From Katz et al. 2001 |
For example in the figure above, neurons 1 and 2 have different preferred tastants in different bins (counting inhibition as "preferred" since it's still information). And when they further smoothed their analysis by using a moving average of the response, 40% of the neurons responded to tastants.
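The re-binning is the crux of the paper, so here's a sketch of it on simulated spike times (the bin-versus-baseline t-test is my stand-in for their statistics):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated spike times (s) on 20 trials, aligned to tastant delivery at t=0.
# This neuron fires above baseline only in the 0.5-1.0 s window.
trials = [np.sort(np.concatenate([
    rng.uniform(-1.0, 2.5, size=rng.poisson(8)),      # background
    rng.uniform(0.5, 1.0, size=rng.poisson(6)),       # taste response
])) for _ in range(20)]

edges = np.arange(0, 2.5 + 0.5, 0.5)                   # five 500 ms bins
counts = np.array([np.histogram(t, bins=edges)[0] for t in trials])
# Baseline count per trial, scaled from the 1 s pre-delivery window to 500 ms.
baseline = np.array([np.sum((t >= -1.0) & (t < 0)) * 0.5 for t in trials])

# Test each bin against baseline; averaging over the whole 2.5 s response
# would dilute the 0.5-1.0 s bin's signal, which is the paper's point.
for i in range(len(edges) - 1):
    t_stat, p = stats.ttest_rel(counts[:, i], baseline)
    print(f"{edges[i]:.1f}-{edges[i+1]:.1f} s: p = {p:.3f}")
```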
Given the complex dynamics of the response, they next asked if different information was represented at different times. To do this they identified "modulations" in the continuous response, either inhibitory or excitatory. Then they made a histogram of the modulation onset times, and found they were bimodally distributed (panel A).
A. GC neuron modulations are bimodally distributed, <0.5 s and 1-1.5 s. B. A closer analysis of the early responses shows another division around ~250 ms. From Katz et al. 2001. |
Some modulations started early, with an onset time of <0.5s, while others started later, >1s. They then looked even closer at the early onset times, and plotted just those within the first 0.8s (right panel above). Here they found that a set of onset times seemed to cluster <0.25s, while the rest were distributed >0.25s. Thus there were three modulation windows: 0-250ms, 0.25-1s, and >1s.
They hypothesized that the early and late onset times were due to orofacial movements like licking, or facial gestures made in response to palatable (or not) tastes. If this were true, since licking occurs at 5-10 Hz, one would expect the early and late responses to carry information in the 5-10 Hz range. To test this, they performed an FFT on the responses, and looked at the power spectra of the early, middle, and late responses. Indeed, the early and late responses had more power in the 5-10 Hz range. Given this they concluded that GC neurons encode different information at different times of the response.
GC neurons encode three types of information over different time scales: early, somatosensory licking info; middle, taste info; and finally palatability. From Katz et al. 2001. |
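Here's a sketch of that spectral check on synthetic rate traces (a plain periodogram; the paper's actual spectral method may differ):

```python
import numpy as np

fs = 100                                  # PSTH sampling rate (Hz)
t = np.arange(0, 2.5, 1 / fs)
rng = np.random.default_rng(3)

# An early response locked to 7 Hz licking vs. a smooth chemosensory response.
licking = 5 + 4 * np.sin(2 * np.pi * 7 * t) + rng.normal(0, 1, t.size)
chemo = 5 + 4 * np.exp(-((t - 1.0) ** 2) / 0.1) + rng.normal(0, 1, t.size)

def band_power(x, lo=5, hi=10):
    """Fraction of (non-DC) spectral power in the lo-hi Hz band."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return power[(freqs >= lo) & (freqs <= hi)].sum() / power.sum()

print(f"licking-locked: {band_power(licking):.2f}, chemosensory: {band_power(chemo):.2f}")
```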
This is probably hindsight bias, but I can't believe it took so long for GC scientists to look at the temporal dynamics of taste coding. In the discussion section, they cite research in the olfactory bulb, motor cortex, and visual cortex that all investigated temporal dynamics years earlier (frankly, the mid-90s seems late). And using temporal dynamics completely changes the picture of GC: the number of taste-responsive neurons jumped from 14% to 40%! GC went from being a tangentially taste-related cortex to being obviously taste specific. All the papers since then have confirmed that indeed, 40% of neurons are taste responsive.
CC: GC neurons
To follow up this work on single neuron representations, Katz, Simon, and Nicolelis next turned to the population response. This was done by simply calculating the cross-correlation (CC) between the firing rates of pairs of neurons (the paper includes more sophisticated analyses like linear discriminant analysis, but the CC result is cleanest). They recorded 237 pairs of neurons, sometimes in both hemispheres, in 12 rats. Of the 237 pairs, 85 had changes in CC due to specific tastants.
Panel B shows a loss of correlation between the two neurons, perhaps due to inhibition. Panel B also shows a pair that was responsive to two tastants; of the 85 taste-specific pairs, 50 showed significant CC for more than one tastant. Panel C shows a short-timescale interaction of a few ms, overlaid on top of a broader CC of hundreds of milliseconds. These short-timescale interactions occurred 17% of the time. Finally, they showed that neurons in different hemispheres could have CC.
In the discussion they consider a few different sources for the CC, including common sources, coupled latency, or orofacial behaviours, but discard them due to the analyses I did not present. They mention that you could get changes in CC between neurons with different PSTHs (e.g. panel A), which shows this is due to CCs in single trials. Overall, they concluded that these CCs showed there was a population representation of GC information.
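As a sketch of the basic CC computation (simulated rates, with neuron B lagging neuron A by 50 ms, like the example in the next paragraph; not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two neurons sharing a common drive: spike counts in 10 ms bins, one trial.
drive = rng.poisson(2, size=1000)
a = drive + rng.poisson(1, size=1000)
b = np.roll(drive, 5) + rng.poisson(1, size=1000)   # B lags A by 5 bins = 50 ms

def cross_corr(x, y, max_lag=20):
    """Normalized cross-correlation of two rate vectors at integer-bin lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.mean(x * np.roll(y, -l)) for l in lags])

lags, cc = cross_corr(a, b)
print(f"peak CC at lag {lags[np.argmax(cc)] * 10} ms")
```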
I'm curious about the identity of these pairs of neurons. The recordings were performed blind to the cortical layer being recorded, and whether the neurons were pyramidal or interneurons. This information would be hugely useful. For example, if neuron B lagged neuron A by 50ms, it would mean entirely different things if neuron A was in layer 4 and neuron B was in layer 2/3, or vice versa. In the latter case (neuron A in layer 4), you could simply chalk the delay up to normal circuit function; if neuron A was in layer 2/3 though, this would imply some more complicated feedback processing. Similarly, when you see a reduction in CC, one might guess that the pair includes an interneuron. To get at this information, we're going to need more sophisticated tools to record from identified cortical neurons.
In any case, those are the two papers from Donald Katz's postdoc in the Simon lab at Duke. He's revisited the theme of population coding many times (I'd recommend Jones et al. 2007 for a Hidden Markov Model version of the story). Until next time.
Katz DB, Simon SA, & Nicolelis MA (2001). Dynamic and multimodal responses of gustatory cortical neurons in awake rats. The Journal of neuroscience : the official journal of the Society for Neuroscience, 21 (12), 4478-89 PMID: 11404435
Katz DB, Simon SA, & Nicolelis MA (2002). Taste-specific neuronal ensembles in the gustatory cortex of awake rats. The Journal of neuroscience : the official journal of the Society for Neuroscience, 22 (5), 1850-7 PMID: 11880514
Saturday, July 9, 2011
The Mechanisms Underlying Grantsmanship are Not Fully Understood
I was editing one of the lab's papers today, and came across the classic grant/paper sentence, "The mechanisms underlying ... are not fully understood." Do you ever see that sentence outside of science? So I went to Google Scholar and searched for "mechanisms underlying" and "not fully understood" to find the first usage of it.
If you search for each term individually, you will find hundreds of references dating to the nineteenth century. They were both common scientific phrases, but it took time for them to be combined.
If you search for the two phrases combined, the earliest link is to a book review from 1920, but skimming the document, I could not find either phrase.
The next reference comes from a 1950 paper, "The Significance of the 'One-Minute' (Prompt Direct Reacting) Bilirubin in Serum," although they use each fragment in different sentences:
"The mechanisms underlying the renal excretion of bilirubin are still obscure." (I like that twist, I'm going to steal it.)
"The factors governing the speed of diazotization of bilirubin in serum are not fully understood."It was not until 1962 that the full power of the phrase was unlocked almost simultaneously by two papers, "Physiology of acclimation to low temperature in poikilotherms:"
The degree of compensa- tion is different in different groups of animals (2, 3) and the mechanisms underlying this compensation are not fully understood.and "The inflammatory response to a foreign body within transplantable tumors."
This response seemingly lies in the stroma and, although mechanisms underlying the inflammatory reaction in normal tissues are not fully understood...The science world would never be the same.
Thursday, July 7, 2011
Do Whatcha Wanna*
While some PIs eventually learn to take pride in their "grantsmanship," I doubt anyone is happy with the grant system. Nominal scientists spend their time trying to raise money rather than doing actual science. We award grants based on people's paper trail, and then go tell them to teach, train, proselytize, and, oh yeah, publish.
I don't have a well-thought-out solution to the problem, but I do have a half-baked one: treat scientists like start-up companies. My idea comes from two strands.
Cause it makes you smile if it sounds dope
When I read The Double Helix, the biggest surprise to me was that Watson and Crick discovered the structure of DNA as a side project. They both were working on other projects - I can't remember what, but I think it had to do with invertebrates - and would sneak off together to try and piece together the crystallography data. And Watson kept having to appease his advisor that his main project was indeed moving along, and apply for fellowships.
The lessons I took from this (and this is simplistic) are that people work best on things they're interested in, and trying to make them work on a specific project is counterproductive. Maybe this is just my experience, but I know many people who toil away on mediocre projects when they yearn to do something else.** Yet, when we apply for grants, we make people write up specific projects that by definition may not yield interesting results. So what do we do if we stop writing grant proposals?
Scientists and startups
One of my favourite essayists is Paul Graham. He's an angel investor (venture capitalist) who biannually runs a startup bootcamp to identify and train tech entrepreneurs. When he decides whether to invest in a company, he almost ignores their business plan, because nascent companies constantly change plans. What he focuses on are the founders, and he looks for specific traits: determination, flexibility, imagination, naughtiness, and friendship. Founding companies is extremely demanding, with a high failure rate.
Entrepreneurs and scientists share a lot of similarities. They're both trying to do something new, which means exploring a lot of idea space, and modifying the plan as results come in. They both have to overcome failure, whether it's experiments not working, or users not signing up. The rewards are asymmetric, with the best projects doing orders of magnitude better than the average. The best scientists and founders are not necessarily those that are the smartest, but the best hackers and hustlers. And both groups waste a lot of time trying to raise money.
Ten years ago, venture capitalists evaluated startup companies the same way we evaluate grants today: they'd ask for a business plan, and then fund based on that. But they've realized another model has better yield: ignore the business plan, and fund the founder. My proposal is that scientists do the same.
Rather than have people spend weeks writing a fellowship, filled with scientific justification and wedged-in hypotheses, let's run a scientist boot camp. Take a month, send people off to Woods Hole (or wherever), and have them slap together a project. See who stays up late. See who tries something spectacular, fails, then whittles it down to something manageable. See who hacks together a solution to a problem. And fund them, for whatever they want to do, proposal unseen.
In the end, I don't think the cost is that high: some flights, a month's pay for the students, and a couple supervisors. You'd save a lot of grant reviewers' time. You'd build camaraderie between the students that may last as they venture back to their home institutes. And you might end up funding successful scientists rather than people with good credentials.
(Having slept on this, I realize I am downplaying the logistics here. For example, working with mouse models would be difficult in a one-month course. But there are pretty common, useful mouse lines like Thy1-GFP/ChR2, and you could even try a BYOM (bring your own mouse) system if the mouse quarantines were modified for the unique situation.)
*In honor of Treme.
** I realize even mediocre projects need resolution. Sometimes it's better, though, to just pull the plug.
Monday, July 4, 2011
A Walk Along the Paper Trail: A Cannabinoid Trail Mix
It's grant writing time here at the Paper Trail, which means reading lots of papers to cite in the background section of the grant. I'm going to cover my favourite paper that I've discovered, which shows that endocannabinoids can directly modulate taste receptors.
More than meets the tongue
Flavour is a tricky perception. It's obviously dominated by how things taste, but also influenced by olfaction, and internal states like hunger. Recordings from rat gustatory cortex show that other sensory modalities are represented as well, like sensorimotor information from the tongue, or temperature.
While some of these modalities are directly encoded in cortex, others are represented indirectly, through hormones and neuromodulators. The most famous of these is leptin. Leptin is released by fat cells (adipose tissue), and is bound by leptin receptors in the hypothalamus and sweet taste bud cells (TBCs). Leptin is an anorexigenic mediator, which means it suppresses appetite. What's really cool is that leptin doesn't just act centrally: if you record from TBCs in mice, you'll find that leptin decreases the firing of sweet TBCs.
The Munchies
In contrast to leptin, endocannabinoids are orexigenic mediators (appetite stimulants) that were known to work through CB1 receptors in the hypothalamus and forebrain. In the paper I'm covering today, Yoshida and colleagues showed that endocannabinoids (henceforward ECBs) can act orexigenically directly on sweet receptors themselves.
They started by recording from the taste nerve innervating the anterior tongue. In wild type mice, the taste nerve responded to a variety of tastants, including NaCl, sucrose, and quinine (panel A, below). To see the effects of ECBs, they injected the endocannabinoid 2-AG i.p., again recorded from the taste nerve, and found that 2-AG increased the response to sweet tastants (panels A, B). They also tested the dose-dependence of 2-AG, and found it saturated at approximately 1 mg/kg body weight.
Next they repeated the experiment in CB1 knockout mice, and found that the knockout mice had normal responses to all tastants. However, when they injected the ECBs, there was no increase in the sweet response. This shows that ECBs can enhance the sweet response, and that CB1 receptors are essential for that modulation.
Endocannabinoids increase the sweet response; CB1 -/- mice show no increase. From Yoshida et al. 2010. |
To see the behavioural effects of ECBs, they measured how often the mice licked a liquid source. To make the task more interesting, they mixed quinine (a bitter tastant) with sucrose at different concentrations. At all the concentrations tested, the lick rate increased in mice injected with the ECBs. CB1 knockout mice, however, had no difference in lick rate.
Next, to verify that ECBs work directly on TBCs, they isolated TBCs and recorded from them directly. They used a transgenic line that expressed GFP in sweet cells, under the promoter for T1r3 (expressed in umami cells as well). Using a glass microelectrode, they recorded extracellular action potentials from the isolated TBCs in response to sweet tastants (see below). Then they bath applied 2-AG and found that the firing rate increased in response to sweet tastants.
2-AG enhances the TBC response to sweet tastants. From Yoshida et al. 2010. |
They tested the response over a variety of concentrations, and found the EC50 for 2-AG was 0.1 µg/mL. They also verified that the ECBs worked through the CB1 receptor by applying antagonists against CB1 and CB2. Only the CB1 antagonists were able to block the ECB enhancement. In the final figure of the paper, they performed RT-PCR and immunostaining to verify CB1 is present in sweet TBCs.
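Extracting an EC50 is a standard curve fit; here's a sketch with synthetic dose-response data (the 0.1 µg/mL value is baked into the fake data, and the Hill coefficient is arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Hill equation: fractional response at concentration c."""
    return top * c**n / (ec50**n + c**n)

# Synthetic dose-response data centered on EC50 ~ 0.1 ug/mL.
conc = np.array([0.001, 0.01, 0.03, 0.1, 0.3, 1.0, 10.0])   # ug/mL
resp = hill(conc, top=1.0, ec50=0.1, n=1.5)
resp += np.random.default_rng(5).normal(0, 0.03, conc.size)  # measurement noise

(top, ec50, n), _ = curve_fit(hill, conc, resp, p0=[1.0, 0.1, 1.0])
print(f"fitted EC50 = {ec50:.3f} ug/mL (Hill coefficient {n:.2f})")
```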
It's amazing to me how often the brain modulates the same signal at multiple levels. There are endocannabinoid receptors on the tongue, and in the hypothalamus and forebrain. And it occurs across modulators as well, as leptin receptors are expressed in all these places.
While I jokingly titled the review "the munchies," the body produces endocannabinoids endogenously, and their levels inversely correlate with leptin levels in the blood. And while the effects of endocannabinoids are obvious on the tongue, I don't think it is quite as clear in the brain. It would be interesting to record from gustatory cortex while animals were under the influence of endocannabinoids to see how the representation changes. You could sell it to the NIH under the drug addiction program.
Yoshida R, Ohkuri T, Jyotaki M, Yasuo T, Horio N, Yasumatsu K, Sanematsu K, Shigemura N, Yamamoto T, Margolskee RF, & Ninomiya Y (2010). Endocannabinoids selectively enhance sweet taste. Proceedings of the National Academy of Sciences of the United States of America, 107 (2), 935-9 PMID: 20080779