In our most recent lab meeting, I presented a recent paper from Science, CKAMP44: A Brain-Specific Protein Attenuating Short-Term Synaptic Plasticity in the Dentate Gyrus. This was a great, relatively straightforward paper that: 1.) performed a proteomics screen to identify a novel AMPA receptor-associating protein, CKAMP44; 2.) generated a CKAMP44 antibody; 3.) performed immunostaining and northern blots to confirm the protein is expressed specifically in the brain; 4.) performed westerns to show that CKAMP44 does indeed associate with AMPA receptors in the brain; 5.) transfected oocytes with CKAMP44 and measured its modulation of AMPA receptor currents; 6.) generated a CKAMP44 knockout mouse; and 7.) used the KO mouse to show how CKAMP44 modulates synaptic currents in slices. For the details, I recommend reading the paper itself.
Normally, one would think reading such an interesting paper would be a delight. I, however, was annoyed. This paper represented years of work by the nine authors. I suspect that they initially identified the protein 3-4 years ago in the proteomics screen, and confirmed its importance using northern blots and antibodies shortly thereafter. Yet I had to wait until now to hear about it. People in the field may have known of CKAMP44's existence from conferences, but the information had not disseminated through the community until the paper was published.
Isn't that insane? That in the age of the internet and instant communication, we as scientists are still waiting months and years to hear about others' research? Shouldn't we have a better system now?
I have many issues with the current publishing and review system, but the one this paper best illustrates is how journal publishing works. Much of my thinking here was inspired by Clay Shirky's recent book Here Comes Everybody, about how the internet is changing our modes of communication and work. In one chapter, Shirky describes how our model of news is changing. Before the internet, journalists would search out interesting stories (as well as be supplied them by publicists or interested parties), filter out the chaff, and publish the newsworthy items; simply put, they filtered, then published. This was necessary because the costs of gathering and transmitting information were high. For example, if you wanted court information, you had to actually travel to the courthouse, rather than calling, or looking the information up online.
Now, however, the news model is radically different. With the internet, everyone has a voice (at least in theory), and can broadcast to their friends what they think is important. Many news stories now are broken on blogs, and then linked to by other blogs, until they are finally picked up by the major news outlets. In this model, then, everything is published, and then filtered by users to identify what is important and should be read.
So what does this have to do with science? The journal publishing system is stuck in the filter-then-publish mode, with editors and reviewers gatekeeping information. Their job (again, theoretically) is to verify that scientific findings are true and of interest. And to exceed their thresholds for publication, authors need to perform controls and do exciting experiments.
The problem, however, is that they don't and can't perform those duties. It is practically impossible for a reviewer to verify that any given piece of work is true, whether the errors come from falsification or from sloppiness. Journals are littered with papers that were retracted or, more commonly, never reproduced. And significance is completely arbitrary, determined not by journal editors but after the fact by citations. I can name many papers in prestigious journals that I consider insignificant, and Journal of Neuroscience papers that have been cited hundreds of times (e.g. Rich Mooney's 2000 J Neuroscience paper).
And the cost of this antiquated system is time. It takes time for scientists to perform all the experiments beyond the initial, interesting ones; it takes time for authors to put together "stories" (an issue for another time), write the paper, and make pretty figures; it takes time for editors to decide whether to review it, and time for reviewers to pass judgment; and then it takes more time to actually publish it (although this has lessened with internet publishing). Sum all these times together and you get year-long delays between when people do interesting experiments and when the scientific community finds out about them.
Unfortunately, despite my dislike of the current publishing system, I have no simple alternative. Whatever the new system entails, however, I hope it includes faster publication, so we can learn about new findings sooner.
Wednesday, March 3, 2010
Can I Cook It?
I ate dinner last night at Cuban Revolution, a local restaurant (trust me, this will eventually be about science). They put a heavy emphasis on presentation: when you walk in the restaurant, you are greeted by dim lighting and a string of blue lights on pillars; the menu itself is disorganized, with text in five fonts and five colors; they insist each and every one of their sandwiches is award winning. This emphasis on presentation only lowered my expectations of the food. I ordered ropa vieja, which I believe is Cuban for Pad Thai, and was both relieved and disappointed by how adequate it was. Thinking about why, I realized that I probably could have cooked something similar myself.
I am a mediocre cook. I started by boiling pasta, but have moved up to complicated techniques like pan frying. My specialties include lasagna, various marinades, and kibbeh. When I go to restaurants, my first criterion is that the food has to be better than what I can make myself. As I've improved my cooking skill, the quality of restaurants I am willing to pay for has increased. TGI Friday's or cheap Chinese stir fry no longer cut it.
I read science papers in much the same way I evaluate food: could I have done it myself? Could I have performed the experiments? Or, if the experiments are simple, do I know enough math for the analysis? For papers with simple but tedious methodology, I end up empathizing with the authors over how many experiments they had to perform. For technique papers, I am impressed that the authors got their technique to work, even if they don't answer an interesting biological question.
Like my criterion for restaurants, my expectations for papers have shifted as I have learned more techniques, and how difficult (or easy) they are. When I was young, my eyes glazed over every western blot, and I struggled to keep IPs and antibodies straight when interpreting blots. A few years ago, however, I tried Western blotting myself. It was, of course, a disaster: my bands all smiled, my total protein levels were inconsistent, and my activators never activated. I realized that every biochemist must troubleshoot all of these minor details to get consistent results. Now, my eyes still glaze over when I see figure upon unending figure of western blots, but it is the glaze of respect. I would compare these papers to lasagna, where you don't feel guilty ordering it because you know how time consuming and tricky it is to make.
On the other hand, there are technical papers which I respect for how much math is involved. For example, the concepts behind STORM or PALM imaging are quite intuitive, but the math and programming behind them may be beyond my reach. Reading these papers is like eating at a molecular gastronomy restaurant, where amazing wizardry went into creating the food, but you only care that it tastes good.
Orthogonal to the easy/hard axis are chemistry papers (or, more likely, figures for me), where I simply have no idea what's going on. Mass spec? Baked Alaska? How'd they do that?
In the middle are the papers that use techniques I am familiar with, or even use every day, like electrophysiology or imaging. For these papers, I have the highest standard. Are their images clean? The statistics fair? Do the experiments address their questions? For example, in my first blog post, I was critical of a recent paper on PI3K and AMPAR trafficking, two topics I know comparatively well. These papers are like going to a restaurant and ordering the marinated chicken: it had better be damn good if you want me to come back.
When scientists discuss papers amongst themselves, we usually emphasize whether the results are believable, or conclusions justified. Reading papers alone, however, the first question I usually ask is, can I cook it?
Wednesday, February 17, 2010
Mike's Tips for Recruitment Weekend
Thursday and Friday of this week are the department's recruiting weekend. Having done these both as a recruit and as a recruiter, I have a couple of tips for recruiting:
1. Don't ask where else they are interviewing
This is my pet peeve. I can understand how this shows interest in a recruit, which one should do, but it comes off as incredibly insecure.
I see the recruiting process as an extended first date. The recruit and the department are trying to get a sense whether their interests are aligned, and whether they both feel they are of equivalent status. Both parties are trying to impress each other, while making it look effortless. And of course, as on many first dates, the recruit is visiting other departments and the department is interviewing other recruits.
So when someone asks, "Where else are you interviewing?" I hear, "Are you seeing anyone else?" Which is a good way to avoid a second date. If you are interested in a recruit's science and want to get to know more about them, you can ask, "What kind of science are you interested in?" It doesn't matter what other departments they're visiting, because this one is the best.
2. Don't talk about science
I've been doing post-doc interviews lately, and most of the interviews are consumed with science, both mine and the lab I'm visiting. It can be fun to show off my research, and most people are excited to talk about their own research. After a few hours, though, it gets tiresome. At the end of the day, I've been overloaded by new concepts, and I'm getting tired of going over my boilerplate. Any conversational respite is appreciated.
I can only assume it's even worse for the recruits. Their interviews are two days long, and instead of interviewing with one lab, they see four. When they finally get around to talking to grad students at lunch or the after-party, they're usually exhausted. So when you talk to them, ask them about sports, movies, or short track speed skating. Anything but science.
Wednesday, February 10, 2010
Lab Fashion
Having been a scientist for going on six years now, I have little awareness of how outsiders view science, or scientists themselves. The best glimpses I get are from movies and television, where scientists are dressed in white lab coats, and work in colorful rooms. The truth, while not quite diametrically opposed, is much less sexy.
Almost no one in the lab actually wears lab coats. In fact, my high school biology teacher once joked that if you ever see a lab wherein everyone is wearing lab coats and goggles, you should run because they are working with very dangerous things. Typically, scientists dress like computer programmers, viz. jeans and a t-shirt. If you are more senior, you might wear what my friend calls the "scientist's uniform" of khaki slacks and a blue button down.
In fact, the dress code of the lab is so casual that I am instantly suspicious of anyone who dresses too well. When I see someone in a blazer, I wonder if they have a job interview. And seeing someone in a white lab coat makes me wonder why they are trying to look like they're working. (Lab coats are, of course, essential for many lab procedures, like dissections; my rule of thumb is that you should never wear a lab coat without gloves.) My lab recently bought lab coats for a few people, and they now wear them for any work in the lab, not just the dangerous or dirty stuff. It irks me. To be fair, one's attitude can be changed by the clothes one wears, and I would endorse anything that makes people work more effectively. I would need to see the data, though, showing that lab coats make them more effective.
Medical doctors are some of the worst offenders when it comes to using the white coat as a status symbol. For doctors, the coats are certainly necessary when working with patients who may bleed or drip mucus. But doctors often do not take their coats off outside the office, and wear them in the cafeteria or on the way to their car. Part of being a doctor is certainly to make people as comfortable as possible, and wearing a lab coat may inspire confidence in patients. As with scientists, though, when the coats are worn too often, I become suspicious that they are compensating. It doesn't seem very hygienic, either, to wear your dirty safety clothes in public, but then again, I'm not a doctor.
Tuesday, February 2, 2010
The Big Picture
This past month, I was taking the medical school module on Brain and Behavior. The course is intended to give medical students an introduction to neurology and neuroanatomy, and the department thought it would be a good idea for the graduate students to take the class. I don't completely disagree with the idea, although there is large scope for improvement, so that Neurobiology students learn skills and information that are useful and relevant, as opposed to facts and trivia that we will soon forget. One thing missing from the course, and consistently missing from a lot of advanced medical and scientific courses, is an appreciation for the beauty of the cell, the tissue, the organ, and so on.
I was originally interested in biology because, to me, the human body is an elegant structure: a few trillion cells acting in concert to accomplish things we would think of as mundane. To accomplish this feat we call life, there need to be feedback loops, feed-forward loops, intracellular signaling cascades, and cell-cell communication both locally and systemically, each of which is regulated, and the regulation itself is regulated. All these processes need to communicate with one another with temporal and spatial specificity. As we discover more about the processes that allow us to perform the functions we can, I believe it is important to continually appreciate the delicacy and elegance involved in sustaining life within very narrow thermodynamic limits.
I have found that the med school course focused on details, without much appreciation for the bigger picture. For instance, I understand that somatic sensory information is "perceived" by first-order neurons in the dorsal root ganglion, before proceeding upward in the spinal cord through the dorsal columns, terminating on the dorsal column nuclei, decussating, and continuing upward through the medial lemniscal pathway to the thalamus, where neurons send the information on to the cortex. However, this does not help me appreciate the fact that the entire process takes a few milliseconds, during which information from multiple neurons has been combined to determine the identity of the stimulus and the appropriate response to it.
In contrast, there are classes which elaborate on biological elegance. For instance, in the concepts II lecture we had this morning, Rich Mooney spent quite some time elaborating on the fact that the auditory system has to use action potentials, which are about 1ms in duration, to code for stimuli that are microseconds long, i.e. they are coding stimuli that are 1000 times faster than their theoretical limit. Rich went on to elaborate on the mechanisms and details about how the process might occur, but held the process and the mechanism with a sense of wonder which reminded me why I care so much.
When conveying information about a topic, I believe it is just as important to inspire wonder as it is to elaborate on the components and interactions that characterize the system. I feel that is what distinguishes a good lecturer from a bad one. One need not have great oratorical skills or a vast vocabulary, but one must have child-like wonder, and be able to inspire it in the audience.
Saturday, January 30, 2010
Nobel Winner's Curse
Bert Sakmann visited Duke last Friday to give a seminar on his current work (streaming video here). By my count, he is the sixth Nobel prize winner I have seen speak. Three of them (Roger Tsien, Eric Kandel, and Torsten Wiesel) gave overviews of their important work, appropriate to the large audiences they were speaking before. The other three (Richard Axel, Bert Sakmann, and Susumu Tonegawa) spoke about the current research in their labs and largely neglected what made them famous. From their perspective, like a famous rock band, they must decide between playing the audience's favorite songs, or playing the new songs, which, while potentially not as good, are what they care about now.
Bert Sakmann is most famous for co-inventing the patch-clamp technique, which allows one to record the actual voltage of individual neurons in the brain. The basic technique is that you take a microscopically thin glass cylinder, called a pipette, and bring it so close to the cell membrane that the membrane adheres to the pipette. Ideally, this attachment is so strong that it creates a high electrical resistance, a gigaohm seal. While attached to a cell like this, you can indirectly measure its voltage. However, to truly measure the cell's voltage, you need direct access to the fluid inside the cell, which you get by opening a hole in the membrane. This is traditionally achieved by literally sucking on a tube connected to the pipette. The idea that the fundamental technique of electrophysiology involves puncturing cells by sucking on a straw still tickles me.
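To get a feel for why the seal resistance matters, here is a rough back-of-the-envelope calculation of my own (the 100 mV driving force and 1 GΩ seal are just typical illustrative values, not numbers from the talk):

$$I_{\text{leak}} = \frac{V}{R_{\text{seal}}} = \frac{100\ \text{mV}}{1\ \text{G}\Omega} = 100\ \text{pA}$$

With a gigaohm seal, only about a hundred picoamps leak out around the pipette, so the seal resistance dwarfs the other resistances in the circuit and most of the recorded current actually flows through the cell's channels; with a seal of only a few megaohms, the leak would be tens of nanoamps and would swamp the signal.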
Since inventing the whole-cell patch technique, Sakmann, like many other famous scientists, has turned his attention "upwards" to more systems-oriented questions. The specific system he is now studying is the barrel cortex, the part of a rodent's brain responsible for decoding what its whiskers are touching. Each whisker has a small portion of the cortex dedicated to it, called a "barrel."
People have traditionally thought of cortex as having a hierarchical structure, where information flows into one layer, then proceeds through the six layers of cortex before being output to another area of the brain. This theory comes from extensive work in the visual cortex of rats, cats, and other mammals. Sakmann's group recorded (using patch clamp!) from all of the layers of barrel cortex while stimulating the whiskers, and found that information reaches all layers of the cortex simultaneously, in contrast to visual cortex. People had previously postulated that cortical columns (small, vertically organized units of cortex spanning all six layers) might be a general processing unit, repeated with variation in different parts of the cortex. Now that barrel cortex has been shown to work differently, systems neuroscientists must be more careful in how they think about information flow. (I should also say at this point that I am not a systems neuroscientist, and do not have a firm knowledge of cortical processing or how information flows through the layers of different types of cortex. The novelty I describe here is based on my interpretation of the seminar.)
In the other section of his talk, he showed recordings from layer five thick-tufted pyramidal cells in barrel cortex and from a part of the thalamus called POm. My understanding of this part was hazier, but I believe he showed there is a cortico-thalamo-cortical feedback loop that acts as a threshold detector for input. His goal in identifying this feedback loop was to gain insight into rat decision making, but to this graduate student, studying barrel cortex in rats seems like an indirect way to study decision making.
Given that he was a Nobel winner, I came into the talk with high expectations, which could not be met. Sakmann spoke to a packed house, as all Nobel winners do, and the faculty in attendance were extremely deferential, which was highly unusual. Against these expectations, Sakmann presented interesting but not groundbreaking work; if anyone else had been presenting it, the crowd would have been a tenth the size. His speaking style was assured but uninspiring. As I mentioned at the start, Nobel winners have two choices: present what they're famous for, or present what they care about now. And given the magnitude of what they're famous for, everything else pales in comparison. Maybe that is the Nobel winner's curse: to set such high standards that you cannot help but disappoint in the future.
Tuesday, January 26, 2010
Assembly and Stoichiometry of the AMPA Receptor and Transmembrane AMPA Receptor Regulatory Protein Complex (TARP):
Today in lab meeting I presented this recent paper from the Tomita lab regarding TARP binding to AMPA receptors (AMPARs). Tomita, as a post-doc in the Bredt lab, was one of the first people to investigate TARP binding and function in depth. TARPs, as the name suggests, are auxiliary subunits of AMPARs, and are known to modulate AMPAR diffusion and conductances. They contain a PDZ-binding domain that can bind PSD-95, and thus indirectly link AMPARs to PSD-95. Single-particle tracking studies have shown this binding can regulate AMPAR mobility in the synapse (Bats et al., 2007). As for the conductances, recordings from AMPARs in oocytes show that AMPAR desensitization is slowed and reduced in the presence of TARPs (Priel et al., 2005).
In this paper, the authors used fairly simple immunoblotting techniques to investigate how AMPARs form tetramers, how many TARPs can bind to an AMPAR, and how TARP binding modulates AMPAR currents. Their basic method was to create AMPAR variants of different weights, so that tetramers formed from the different-sized monomers ran at different speeds on a gel. Around half of the amino acids in an AMPAR subunit are in the extracellular N-terminal domain (NTD), making the weight difference significant.
Their first interesting result used AMPARs lacking their NTD but weighted differently with GFP. They found that when they mixed NTD-lacking AMPARs, the resulting tetramers could contain 1, 2, 3, or 4 of the GFP-tagged subunits, showing that the tetramers were formed not as dimers-of-dimers but from monomers. In contrast, when they mixed full-length AMPARs with NTD-lacking AMPARs, the gel yielded only three bands, showing that the full-length AMPARs formed dimers before being dimerized again into tetramers. From this they concluded that the NTD of the AMPAR is important for the initial formation of dimers.
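To make the band-counting logic concrete, here is a small counting sketch of my own (the subunit weights are made-up illustrative numbers; only the number of distinct tetramer weights matters): random assembly from monomers predicts five gel bands, while assembly from pre-formed like-with-like dimers predicts three.

```python
# Illustrative sketch (mine, not from the paper): how many distinct tetramer
# weights -- i.e., gel bands -- do two assembly models predict when a "light"
# and a "heavy" subunit variant are co-expressed?
from itertools import combinations_with_replacement

LIGHT, HEAVY = 1.0, 1.3  # hypothetical relative subunit weights

def bands_from_monomers():
    """Tetramers assembled directly from monomers: 0-4 heavy subunits possible."""
    return sorted({round(sum(c), 1)
                   for c in combinations_with_replacement((LIGHT, HEAVY), 4)})

def bands_from_dimers():
    """Tetramers assembled from like-with-like dimers: 0, 2, or 4 heavy subunits."""
    dimers = (2 * LIGHT, 2 * HEAVY)
    return sorted({round(a + b, 1) for a in dimers for b in dimers})

print(len(bands_from_monomers()))  # 5 bands
print(len(bands_from_dimers()))    # 3 bands
```

Seeing all five bands for the NTD-lacking receptors, but only three once full-length receptors were included, is what points to the NTD driving the initial dimerization.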
They next turned to finding how many TARPs can interact with one AMPAR. By transfecting differing amounts of TARP cRNA, running their gels, and staining for GluR1, they found five different-sized bands, corresponding to the binding of 0-4 TARPs. In the minimal-transfection condition, which allowed only one TARP to bind, they found that binding of a single TARP was enough to affect the channel conductance.
The above result, however, was in oocytes, not actual neurons. To investigate TARP binding in neurons, they used cerebellar granule cells from the stargazer mutant line, which lacks the TARP stargazin. By blotting for GluR2/3 in heterozygous mice, they found only two protein bands, suggesting that AMPARs associate with TARPs at a fixed stoichiometry. This fixed stoichiometry implies that TARP levels are either minimal or saturating. When they stained for stargazin, they found no unbound stargazin, and thus concluded that only one TARP binds to each AMPAR.
The most interesting aspect of this paper, to me, is how they were able to investigate what I consider a biophysical property - dimerization and stoichiometry - using simple biochemical techniques (and probably common ones in biochemistry at that). The finding that the N-terminus is important for the initial dimerization does not seem new to me, as it has previously been shown that the N-terminus alone can form dimers (Leuschner and Hoch, 1999), and I have read reviews stating that the LIVBP portion of the NTD is the initial site of dimerization (Greger et al., 2007). However, the finding that a single TARP is enough to modulate AMPAR function is intriguing. I am not sure I believe their claim that one and only one TARP binds to each AMPAR in vivo, given how indirect the evidence is and that they mention disagreement with another paper in their discussion.
Bats, C., Groc, L., & Choquet, D. (2007) The Interaction between Stargazin and PSD-95 Regulates AMPA Receptor Surface Trafficking. Neuron 53, 719-734.
Greger, I. H., Ziff, E. B., & Penn, A. C. (2007) Molecular determinants of AMPA receptor subunit assembly. Trends in Neurosciences 30, 407-416.
Leuschner, W. D. & Hoch, W. (1999) Subtype-specific assembly of alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor subunits is mediated by their n-terminal domains. J Biol Chem 274, 16907-16916.
Priel, A., Kolleker, A., Ayalon, G., Gillor, M., Osten, P., & Stern-Bach, Y. (2005) Stargazin Reduces Desensitization and Slows Deactivation of the AMPA-Type Glutamate Receptors. J Neurosci 25, 2682-2686.
Sunday, January 24, 2010
PIP3 controls synaptic function by maintaining AMPA receptor clustering at the postsynaptic membrane:
Welcome to Mike and Rohit's Blog O' Science! We are grad students at Duke and noticed a dearth of good neuroscience blogs, and people kept telling Rohit he should start a blog, so here we are. I'll be posting my thoughts on papers and current research in the synaptic plasticity field, while Rohit will keep us up to date on in vivo imaging.
The first paper I'd like to cover is a recent paper from the Esteban lab about PI3K signaling during LTP. Of all the multitudinous signaling pathways involved in LTP, the PI3K pathway was only recently discovered (Sanna et al., 2002; Man et al., 2003). Those papers disagreed slightly on the role of PI3K - Sanna argued it was necessary only for expression, while Man thought it was necessary for induction - but both showed that inhibition of PI3K signaling impaired LTP.
This paper started by showing that inhibiting PI3K signaling by overexpressing PH-Grp1 decreased basal AMPAR EPSCs in CA1 neurons. PH domains bind phosphoinositides, and the PH domain of Grp1 specifically binds PIP3; overexpressing PH-Grp1 thus sequesters PIP3 and stops it from functioning in the cell. They confirmed this genetic result pharmacologically using LY294002, a PI3K inhibitor. To show that the current decrease was synapse specific, they bath-applied AMPA and showed that the resulting current, and thus the number of surface AMPARs, was unchanged. So far so good.
Next they tried to induce LTP in neurons overexpressing PH-Grp1, and were unable to do so. Still cool.
To confirm that the decrease in AMPAR currents was due to a decrease in synaptic, but not extrasynaptic, AMPARs, they transfected neurons with GFP-GluR2 and PH-Grp1. Here they found that without Grp1, GFP-GluR2 was expressed in both the spine and the dendrite; with Grp1, however, GFP-GluR2 shifted more toward the spine. I originally thought this contradicted other labs' findings, but in verifying this I found that GFP-GluR2 does not have a strong spine bias, while SEP-GluR2 (surface) is punctate in the spine, presumably due to GluR2-containing endosomes in the dendrite (Kopec et al., 2006). Still, this result contradicts their earlier finding that Grp1 decreased synaptic but not extrasynaptic AMPAR numbers.
Since they saw changes in the subcellular distribution of AMPARs when Grp1 was expressed, they looked at the expression pattern of PSD-95, an integral anchoring protein at the synapse. They found that, in contrast to GluR2, PSD-95 is punctate under control conditions and loses its punctate distribution after Grp1 expression. This contradicts the previous figure, but is in line with the decrease in synaptic AMPAR currents. To see if this change in PSD-95 altered AMPAR mobility, they performed FRAP on SEP-GluR2 and found that the mobile fraction increased following Grp1 expression, presumably due to a lack of anchoring. I am usually quite skeptical of FRAP experiments, due to the high variability I have encountered doing them myself and the large disagreement between labs on time constants and mobile fractions. The FRAP performed here particularly stands out for its bizarrely low mobile fraction and slow recovery (a tau of 5 minutes, compared to 1-2 minutes in Ashby et al., 2006 and Makino et al., 2009).
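For anyone unfamiliar with the jargon, the mobile fraction and tau come from fitting the post-bleach fluorescence to a recovery curve; a generic single-exponential form (a standard model, not necessarily the exact fit used in the paper) is:

$$F(t) = F_0 + \left(F_\infty - F_0\right)\left(1 - e^{-t/\tau}\right), \qquad \text{mobile fraction} = \frac{F_\infty - F_0}{F_{\text{pre}} - F_0}$$

where $F_{\text{pre}}$ is the pre-bleach fluorescence, $F_0$ the value immediately after bleaching, $F_\infty$ the plateau the signal recovers to, and $\tau$ the recovery time constant. A tau of 5 minutes means the spine fluorescence takes roughly three times that long to approach its plateau, which is what makes the values reported here look so sluggish compared to the 1-2 minute taus in the other studies.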
To try to reconcile the previous two figures, they performed immuno-EM to see where in the PSD the AMPARs were located. Here they found that the AMPARs redistributed from the PSD proper to the perisynaptic region. While EM has amazing resolution, I am skeptical that a shift of 80 nm in a subset of proteins can be precisely observed. This finding also does not really explain how the AMPARs are being anchored near the synapse, since PSD-95 binding to TARPs (and thereby, presumably, to AMPARs) is intact.
In the discussion the authors mentioned that PIP3, the PI3K end product, is involved in cell polarity, and is present at the tips of branching neurites. They hypothesized that it was playing a similar role in the synapse, which is also a highly polarized compartment.
In the end I found this a frustrating, if interesting, paper. PI3K signaling is an important, if ignored, player in LTP. However, the contradictory nature of their results is dissatisfying. I have a hard time understanding how sequestering PIP3 simultaneously brings AMPARs into the spine while reducing AMPAR currents and PSD-95 puncta. The most intriguing part of this paper, in fact, is the redistribution of PSD-95, which makes me think future research on PIP3 signaling should focus there, instead of on AMPAR trafficking.
Ashby, M. C., Maier, S. R., Nishimune, A., & Henley, J. M. (2006) Lateral Diffusion Drives Constitutive Exchange of AMPA Receptors at Dendritic Spines and Is Regulated by Spine Morphology. J Neurosci 26, 7046-7055.
Kopec, C. D., Li, B., Wei, W., Boehm, J., & Malinow, R. (2006) Glutamate Receptor Exocytosis and Spine Enlargement during Chemically Induced Long-Term Potentiation. J Neurosci 26, 2000-2009.
Makino, H. & Malinow, R. (2009) AMPA Receptor Incorporation into Synapses during LTP: The Role of Lateral Movement and Exocytosis. Neuron 64, 381-390.
Man, H.-Y., Wang, Q., Lu, W.-Y., Ju, W., Ahmadian, G., Liu, L., D'Souza, S., Wong, T. P., Taghibiglou, C., Lu, J., et al. (2003) Activation of PI3-Kinase Is Required for AMPA Receptor Insertion during LTP of mEPSCs in Cultured Hippocampal Neurons. Neuron 38, 611-624.
Sanna, P. P., Cammalleri, M., Berton, F., Simpson, C., Lutjens, R., Bloom, F. E., & Francesconi, W. (2002) Phosphatidylinositol 3-Kinase Is Required for the Expression But Not for the Induction or the Maintenance of Long-Term Potentiation in the Hippocampal CA1 Region. J Neurosci 22, 3359-3365.