Credit: Photo by Lance Long; courtesy Electronic Visualization Laboratory, University of Illinois at Chicago
The Major Research Instrumentation program has helped to fund pieces of research equipment ranging from scanning probe microscopes, which have helped to visualize and characterize nano-scale biological tools, to nuclear magnetic resonance (NMR) spectrometers, which allow chemists to identify the individual molecules they make. Not only does this instrumentation help scientists advance their own research, it’s also used to train the next generation of scientists. For example, an X-ray diffractometer at Utah State University allowed Joan Hevel and Sean Johnson to teach four high school students in their lab about protein crystallization. Learn more.
The twin Voyager 1 and 2 spacecraft are exploring where nothing from Earth has flown before. Continuing their more-than-40-year journey since their 1977 launches, they each are much farther away from Earth and the Sun than Pluto.
The primary mission was the exploration of Jupiter and Saturn. After making a string of discoveries there – such as active volcanoes on Jupiter’s moon Io and intricacies of Saturn’s rings – the mission was extended.
Voyager 2 went on to explore Uranus and Neptune, and is still the only spacecraft to have visited those outer planets. The adventurers’ current mission, the Voyager Interstellar Mission (VIM), will explore the outermost edge of the Sun’s domain. And beyond.
‘BUS’ Housing Electronics
The basic structure of the spacecraft is called the “bus,” which carries the various engineering subsystems and scientific instruments. It is like a large ten-sided box. Each of the ten sides of the bus contains a compartment (a bay) that houses various electronic assemblies.
Cosmic Ray Subsystem (CRS)
The Cosmic Ray Subsystem (CRS) looks only for very energetic particles in plasma, and has the highest sensitivity of the three particle detectors on the spacecraft. Very energetic particles can often be found in the intense radiation fields surrounding some planets (like Jupiter). Particles with the highest-known energies come from other stars. The CRS looks for both.
High-Gain Antenna (HGA)
The High-Gain Antenna (HGA) transmits data to Earth on two frequency channels (the downlink). One, at about 8.4 gigahertz, is the X-band channel and carries science and engineering data. For comparison, the FM radio band is centered around 100 megahertz.
Imaging Science Subsystem (ISS)
The Imaging Science Subsystem (ISS) is a modified version of the slow-scan vidicon camera designs that were used in the earlier Mariner flights. The ISS consists of two television-type cameras, each with eight filters in a commandable Filter Wheel mounted in front of the vidicons. One has a low-resolution 200 mm wide-angle lens, while the other uses a higher-resolution 1500 mm narrow-angle lens.
Infrared Interferometer Spectrometer and Radiometer (IRIS)
The Infrared Interferometer Spectrometer and Radiometer (IRIS) actually acts as three separate instruments. First, it is a very sophisticated thermometer. It can determine the distribution of heat energy a body is emitting, allowing scientists to determine the temperature of that body or substance.
Second, the IRIS is a device that can determine when certain types of elements or compounds are present in an atmosphere or on a surface.
Third, it uses a separate radiometer to measure the total amount of sunlight reflected by a body at ultraviolet, visible and infrared frequencies.
Low-Energy Charged Particles (LECP)
The Low-Energy Charged Particles (LECP) instrument looks for particles of higher energy than the Plasma Science instrument does, and its range overlaps with that of the Cosmic Ray Subsystem (CRS). It has the broadest energy range of the three sets of particle sensors.
The LECP can be imagined as a piece of wood, with the particles of interest playing the role of bullets. The faster a bullet moves, the deeper it will penetrate the wood. Thus, the depth of penetration measures the speed of the particles. The number of “bullet holes” over time indicates how many particles there are in various places in the solar wind, and at the various outer planets. The orientation of the wood indicates the direction from which the particles came.
Magnetometer (MAG)
Although the Magnetometer (MAG) can detect some of the effects of the solar wind on the outer planets and moons, its primary job is to measure changes in the Sun’s magnetic field with distance and time, to determine if each of the outer planets has a magnetic field, and how the moons and rings of the outer planets interact with those magnetic fields.
Optical Calibration Target
The target plate is a flat rectangle of known color and brightness, fixed to the spacecraft so the instruments on the movable scan platform (cameras, infrared instrument, etc.) can point to a predictable target for calibration purposes.
Photopolarimeter Subsystem (PPS)
The Photopolarimeter Subsystem (PPS) uses a 0.2 m telescope fitted with filters and polarization analyzers. The experiment is designed to determine the physical properties of particulate matter in the atmospheres of Jupiter, Saturn and the rings of Saturn by measuring the intensity and linear polarization of scattered sunlight at eight wavelengths.
The experiment also provided information on the texture and probable composition of the surfaces of the satellites of Jupiter and Saturn.
Planetary Radio Astronomy (PRA) and Plasma Wave Subsystem (PWS)
Two separate experiments, the Plasma Wave Subsystem and the Planetary Radio Astronomy experiment, share the two long antennas, which stretch at right angles to one another, forming a “V”.
Plasma Science (PLS)
The Plasma Science (PLS) instrument looks for the lowest-energy particles in plasma. It also has the ability to look for particles moving at particular speeds and, to a limited extent, to determine the direction from which they come.
The Plasma Subsystem studies the properties of very hot ionized gases that exist in interplanetary regions. One plasma detector points in the direction of the Earth and the other points at a right angle to the first.
Radioisotope Thermoelectric Generators (RTG)
Three RTG units, electrically connected in parallel, are the central power sources for the mission module. The RTGs are mounted in tandem (end-to-end) on a deployable boom. The radioisotope fuel that serves as the heat source is plutonium-238 in the form of the oxide PuO2. As the isotope decays, it releases alpha particles that bombard the inner surface of the container; the energy released is converted to heat, which drives the thermoelectric converter.
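Because the plutonium fuel is steadily decaying, the heat available to the converters also declines slowly over the mission. As a rough illustration (the half-life below is standard reference data for plutonium-238, not a figure given in this article), the thermal output follows an exponential decay law:

$$ P(t) \approx P_0 \, 2^{-t/t_{1/2}}, \qquad t_{1/2} \approx 87.7\ \text{years}, $$

so after roughly 40 years of flight the heat output is down to about $2^{-40/87.7} \approx 0.73$ of its launch value, with the usable electrical power falling somewhat faster because the thermocouples themselves degrade.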
Ultraviolet Spectrometer (UVS)
The Ultraviolet Spectrometer (UVS) is a very specialized type of light meter that is sensitive to ultraviolet light. It determines when certain atoms or ions are present, or when certain physical processes are going on.
The instrument looks for specific colors of ultraviolet light that certain elements and compounds are known to emit.
Learn more about the Voyager 1 and 2 spacecraft HERE.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
A team of architects and chemists from the University of Cambridge has designed super-stretchy and strong fibres which are almost entirely composed of water, and could be used to make textiles, sensors and other materials. The fibres, which resemble miniature bungee cords as they can absorb large amounts of energy, are sustainable, non-toxic and can be made at room temperature.
This new method not only improves upon earlier methods of making synthetic spider silk, since it does not require high energy procedures or extensive use of harmful solvents, but it could substantially improve methods of making synthetic fibres of all kinds, since other types of synthetic fibres also rely on high-energy, toxic methods. The results are reported in the journal Proceedings of the National Academy of Sciences.
Spider silk is one of nature’s strongest materials, and scientists have been attempting to mimic its properties for a range of applications, with varying degrees of success. “We have yet to fully recreate the elegance with which spiders spin silk,” said co-author Dr Darshil Shah from Cambridge’s Department of Architecture.
Read more.
Novel theory explains how metal nanoparticles form
Although scientists have for decades been able to synthesize nanoparticles in the lab, the process is mostly trial and error, and how the formation actually takes place is obscure. However, a study recently published in Nature Communications by chemical engineers at the University of Pittsburgh’s Swanson School of Engineering explains how metal nanoparticles form.
“Thermodynamic Stability of Ligand-Protected Metal Nanoclusters” (DOI: 10.1038/ncomms15988) was co-authored by Giannis Mpourmpakis, assistant professor of chemical and petroleum engineering, and PhD candidate Michael G. Taylor. The research, completed in Mpourmpakis’ Computer-Aided Nano and Energy Lab (C.A.N.E.LA.), is funded through a National Science Foundation CAREER award and bridges previous research focused on designing nanoparticles for catalytic applications.
“Even though there is extensive research into metal nanoparticle synthesis, there really isn’t a rational explanation why a nanoparticle is formed,” Dr. Mpourmpakis said. “We wanted to investigate not just the catalytic applications of nanoparticles, but to make a step further and understand nanoparticle stability and formation. This new thermodynamic stability theory explains why ligand-protected metal nanoclusters are stabilized at specific sizes.”
Read more.
Big Improvements to Brain-Computer Interface
When people suffer spinal cord injuries and lose mobility in their limbs, it’s a neural signal processing problem. The brain can still send clear electrical impulses and the limbs can still receive them, but the signal gets lost in the damaged spinal cord.
The Center for Sensorimotor Neural Engineering (CSNE)—a collaboration of San Diego State University with the University of Washington (UW) and the Massachusetts Institute of Technology (MIT)—is working on an implantable brain chip that can record neural electrical signals and transmit them to receivers in the limb, bypassing the damage and restoring movement. Recently, in a study published in the journal Scientific Reports, these researchers described a critical improvement to the technology that could make it more durable, help it last longer in the body and let it transmit clearer, stronger signals.
The technology, known as a brain-computer interface, records and transmits signals through electrodes, which are tiny pieces of material that read signals from brain chemicals known as neurotransmitters. By recording brain signals at the moment a person intends to make some movement, the interface learns the relevant electrical signal pattern and can transmit that pattern to the limb’s nerves, or even to a prosthetic limb, restoring mobility and motor function.
The current state-of-the-art material for electrodes in these devices is thin-film platinum. The problem is that these electrodes can fracture and fall apart over time, said one of the study’s lead investigators, Sam Kassegne, deputy director for the CSNE at SDSU and a professor in the mechanical engineering department.
Kassegne and colleagues developed electrodes made out of glassy carbon, a form of carbon. This material is about 10 times smoother than granular thin-film platinum, meaning it corrodes less easily under electrical stimulation and lasts much longer than platinum or other metal electrodes.
“Glassy carbon is much more promising for reading signals directly from neurotransmitters,” Kassegne said. “You get about twice as much signal-to-noise. It’s a much clearer signal and easier to interpret.”
The glassy carbon electrodes are fabricated on the SDSU campus. The process involves patterning a liquid polymer into the correct shape, then heating it to 1,000 degrees Celsius, causing it to become glassy and electrically conductive. Once the electrodes are cooked and cooled, they are incorporated into chips that read signals from the brain and transmit them to the nerves.
Researchers in Kassegne’s lab are using these new and improved brain-computer interfaces to record neural signals both along the brain’s cortical surface and from inside the brain at the same time.
“If you record from deeper in the brain, you can record from single neurons,” said Elisa Castagnola, one of the researchers. “On the surface, you can record from clusters. This combination gives you a better understanding of the complex nature of brain signaling.”
A doctoral student in Kassegne’s lab, Mieko Hirabayashi, is exploring a slightly different application of this technology. She’s working with rats to find out whether precisely calibrated electrical stimulation can cause new neural growth within the spinal cord. The hope is that this stimulation could encourage new neural cells to grow and replace damaged spinal cord tissue in humans. The new glassy carbon electrodes will allow her to stimulate the spinal cord, read its electrical signals and detect the presence of neurotransmitters there better than ever before.
New discovery could be a major advance for understanding neurological diseases
The discovery of a new mechanism that controls the way nerve cells in the brain communicate with each other to regulate our learning and long-term memory could have major benefits for understanding how the brain works and what goes wrong in neurological and neurodegenerative disorders such as epilepsy and dementia. The breakthrough, published in Nature Neuroscience, was made by scientists at the University of Bristol and the University of Central Lancashire. The findings will have far-reaching implications across many aspects of neuroscience.
The human brain contains around 100 billion nerve cells, each of which makes about 10,000 connections to other cells, called synapses. Synapses are constantly transmitting information to, and receiving information from, other nerve cells. A process called long-term potentiation (LTP) increases the strength of information flow across synapses. Many synapses communicating between different nerve cells form networks, and LTP intensifies the connectivity of the cells in the network to make information transfer more efficient. This LTP mechanism is how the brain operates at the cellular level to allow us to learn and remember. However, when these processes go wrong, they can lead to neurological and neurodegenerative disorders.
Precisely how LTP is initiated is a major question in neuroscience. Traditional LTP is regulated by the activation of special proteins at synapses called NMDA receptors. This study, by Professor Jeremy Henley and co-workers, reports a new type of LTP that is controlled by kainate receptors.
This is an important advance as it highlights the flexibility in the way synapses are controlled and nerve cells communicate. This, in turn, raises the possibility of targeting this new pathway to develop therapeutic strategies for diseases like dementia, in which there is too little synaptic transmission and LTP, and epilepsy where there is too much inappropriate synaptic transmission and LTP.
Jeremy Henley, Professor of Molecular Neuroscience in the University’s School of Biochemistry in the Faculty of Biomedical Sciences, said: “These discoveries represent a significant advance and will have far-reaching implications for the understanding of memory, cognition, developmental plasticity and neuronal network formation and stabilisation. In summary, we believe that this is a groundbreaking study that opens new lines of inquiry which will increase understanding of the molecular details of synaptic function in health and disease.”
Dr Milos Petrovic, co-author of the study and Reader in Neuroscience at the University of Central Lancashire added: “Untangling the interactions between the signal receptors in the brain not only tells us more about the inner workings of a healthy brain, but also provides a practical insight into what happens when we form new memories. If we can preserve these signals it may help protect against brain diseases.
“This is certainly an extremely exciting discovery and something that could potentially impact the global population. We have discovered potential new drug targets that could help to cure the devastating consequences of dementias, such as Alzheimer’s disease. Collaborating with researchers across the world in order to identify new ways to fight disease like this is what world-class scientific research is all about, and we look forward to continuing our work in this area.”
Materials research creates potential for improved computer chips and transistors
It’s a material world, and an extremely versatile one at that, considering its most basic building blocks – atoms – can be connected together to form different structures that retain the same composition.
Diamond and graphite, for example, are but two of the many polymorphs of carbon, meaning that both have the same chemical composition and differ only in the manner in which their atoms are connected. But what a world of difference that connectivity makes: The former goes into a ring and costs thousands of dollars, while the latter has to sit content within a humble pencil.
The inorganic compound hafnium dioxide, commonly used in optical coatings, likewise has several polymorphs, including a tetragonal form with highly attractive properties for computer chips and other optical elements. However, because this form is stable only at temperatures above 3100 degrees Fahrenheit – think blazing inferno – scientists have had to make do with its more limited monoclinic polymorph. Until now.
Read more.
(Image caption: New model mimics the connectivity of the brain by connecting three distinct brain regions on a chip. Credit: Disease Biophysics Group/Harvard University)
Multiregional brain on a chip
Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system’s connectivity.
The research was published in the Journal of Neurophysiology.
“The brain is so much more than individual neurons,” said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “It’s about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections.”
“Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. “Tools to support the development of therapeutics to alleviate the suffering of these patients is not only the human thing to do, it is the best means of reducing this cost.”
Researchers from the Disease Biophysics Group at SEAS and the Wyss Institute modeled three regions of the brain most affected by schizophrenia — the amygdala, hippocampus and prefrontal cortex.
They began by characterizing the cell composition, protein expression, metabolism, and electrical activity of neurons from each region in vitro.
“It’s no surprise that neurons in distinct regions of the brain are different but it is surprising just how different they are,” said Stephanie Dauth, co-first author of the paper and former postdoctoral fellow in the Disease Biophysics Group. “We found that the cell-type ratio, the metabolism, the protein expression and the electrical activity all differ between regions in vitro. This shows that it does make a difference which brain region’s neurons you’re working with.”
Next, the team looked at how these neurons change when they’re communicating with one another. To do that, they cultured cells from each region independently and then let the cells establish connections via guided pathways embedded in the chip.
The researchers then measured cell composition and electrical activity again and found that the cells dramatically changed when they were in contact with neurons from different regions.
“When the cells are communicating with other regions, the cellular composition of the culture changes, the electrophysiology changes, all these inherent properties of the neurons change,” said Maoz. “This shows how important it is to implement different brain regions into in vitro models, especially when studying how neurological diseases impact connected regions of the brain.”
To demonstrate the chip’s efficacy in modeling disease, the team doped different regions of the brain-on-a-chip with the drug phencyclidine hydrochloride, commonly known as PCP, which simulates schizophrenia. The brain-on-a-chip allowed the researchers for the first time to look at both the drug’s impact on the individual regions and its downstream effect on the interconnected regions in vitro.
The brain-on-a-chip could be useful for studying any number of neurological and psychiatric diseases, including drug addiction, post traumatic stress disorder, and traumatic brain injury.
"To date, the Connectome project has not recognized all of the networks in the brain,” said Parker. “In our studies, we are showing that the extracellular matrix network is an important part of distinguishing different brain regions and that, subsequently, physiological and pathophysiological processes in these brain regions are unique. This advance will not only enable the development of therapeutics, but fundamental insights as to how we think, feel, and survive.”
This car race involved years of training, feats of engineering, high-profile sponsorships, competitors from around the world and a racetrack made of gold.
But the high-octane competition, described as a cross between physics and motorsports, is invisible to the naked eye. In fact, the track itself is only a fraction of the width of a human hair, and each car is composed of a single molecule.
The Nanocar Race, which happened over the weekend at the Centre national de la recherche scientifique (CNRS) in Toulouse, France, was billed as the “first-ever race of molecule-cars.”
It’s meant to generate excitement about molecular machines. Research on the tiny structures won last year’s Nobel Prize in Chemistry, and they have been lauded as the “first steps into a new world,” as The Two-Way reported.
Image: CNRS
(Image caption: Brain showing hallmarks of Alzheimer’s disease (plaques in blue). Credit: ZEISS Microscopy)
New imaging technique measures toxicity of proteins associated with Alzheimer’s and Parkinson’s diseases
Researchers have developed a new imaging technique that makes it possible to study why proteins associated with Alzheimer’s and Parkinson’s diseases may go from harmless to toxic. The technique uses a technology called multi-dimensional super-resolution imaging that makes it possible to observe changes in the surfaces of individual protein molecules as they clump together. The tool may allow researchers to pinpoint how proteins misfold and eventually become toxic to nerve cells in the brain, which could aid in the development of treatments for these devastating diseases.
The researchers, from the University of Cambridge, have studied how a phenomenon called hydrophobicity (lack of affinity for water) in the proteins amyloid-beta and alpha synuclein – which are associated with Alzheimer’s and Parkinson’s respectively – changes as they stick together. It had been hypothesised that there was a link between the hydrophobicity and toxicity of these proteins, but this is the first time it has been possible to image hydrophobicity at such high resolution. Details are reported in the journal Nature Communications.
“These proteins start out in a relatively harmless form, but when they clump together, something important changes,” said Dr Steven Lee from Cambridge’s Department of Chemistry, the study’s senior author. “But using conventional imaging techniques, it hasn’t been possible to see what’s going on at the molecular level.”
In neurodegenerative diseases such as Alzheimer’s and Parkinson’s, naturally occurring proteins fold into the wrong shape and clump together into filament-like structures known as amyloid fibrils, as well as smaller, highly toxic clusters known as oligomers, which are thought to damage or kill neurons; however, the exact mechanism remains unknown.
For the past two decades, researchers have been attempting to develop treatments which stop the proliferation of these clusters in the brain, but before any such treatment can be developed, there first needs to be a precise understanding of how oligomers form and why.
“There’s something special about oligomers, and we want to know what it is,” said Lee. “We’ve developed new tools that will help us answer these questions.”
With conventional microscopy techniques, physics makes it impossible to zoom in past a certain point. Essentially, there is an innate blurriness to light, so anything below a certain size will appear as a blurry blob when viewed through an optical microscope, simply because light waves spread when they are focused on such a tiny spot. Amyloid fibrils and oligomers are smaller than this limit, so it’s very difficult to directly visualise what is going on.
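That innate blurriness has a standard quantitative form, the diffraction limit. The figures below are textbook optics rather than values from the study: for visible light of wavelength $\lambda$ imaged through an objective of numerical aperture $\mathrm{NA}$, features closer together than roughly

$$ d \approx \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \text{nm}}{2 \times 1.4} \approx 180\ \text{nm} $$

cannot be distinguished, and individual oligomers and fibril cross-sections sit well below that scale.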
However, new super-resolution techniques, which achieve resolution 10 to 20 times finer than conventional optical microscopes, have allowed researchers to get around these limitations and view biological and chemical processes at the nanoscale.
Lee and his colleagues have taken super-resolution techniques one step further, and are now able to not only determine the location of a molecule, but also the environmental properties of single molecules simultaneously.
Using their technique, known as sPAINT (spectrally-resolved points accumulation for imaging in nanoscale topography), the researchers used a dye molecule to map the hydrophobicity of amyloid fibrils and oligomers implicated in neurodegenerative diseases. The sPAINT technique is easy to implement, requiring only the addition of a single transmission diffraction grating to a super-resolution microscope. According to the researchers, the ability to map hydrophobicity at the nanoscale could be used to understand other biological processes in future.
Neuroscientists call for deep collaboration to ‘crack’ the human brain
The time is ripe, and the communication technology is available, for teams from different labs and different countries to join forces and apply new forms of grassroots collaborative research in brain science. This, claim neuroscientists in Portugal, Switzerland and the United Kingdom, is the right way to gradually scale up the study of the brain and usher it into the era of Big Science. And they are already putting their ideas into action.
In a Comment in the journal Nature, an international trio of neuroscientists outlines a concrete proposal for jump-starting a new, bottom-up, collaborative “big science” approach to neuroscience research, which they consider crucial to tackle the still unsolved great mysteries of the brain.
How does the brain function, from molecules to cells to circuits to brain systems to behavior? How are all these levels of complexity integrated to ultimately allow consciousness to emerge in the human brain?
The plan now proposed by Zach Mainen, director of research at the Champalimaud Centre for the Unknown, in Lisbon, Portugal; Michael Häusser, professor of Neuroscience at University College London, United Kingdom; and Alexandre Pouget, professor of neuroscience at the University of Geneva, Switzerland, is inspired by the way particle physics teams nowadays mount their huge accelerator experiments to discover new subatomic particles and ultimately to understand the evolution of the Universe.
“Some very large physics collaborations have precise goals and are self-organized,” says Zach Mainen. More specifically, his model is the ATLAS experiment at the European Laboratory for Particle Physics (CERN, near Geneva), which includes nearly 3,000 scientists from dozens of countries and was able (together with its “sister” experiment, CMS) to announce the discovery of the long-sought Higgs boson in July 2012.
Although the teams involved in neuroscience may be nowhere near the size of the CERN teams, the collaborative principles should be very similar, according to Zach Mainen. “What we propose is very much in the physics style, a kind of ‘Grand Unified Theory’ of brain research,” he says. “Can we do it? Clearly, it’s not going to happen within five years, but we do have theories that need to be tested, and the underlying principles of how to do it will be much the same as in physics.”
To help push neuroscience research to take the leap into the future, the three neuroscientists propose some simple principles, at least in theory: “focus on a single brain function”; “combine experimentalists and theorists”; “standardize tools and methods”; “share data”; “assign credit in new ways”. And one of the fundamental premises to make this possible is to “engender a sphere of trust within which it is safe [to share] data, resources and plans”, they write.
Needless to say, the harsh competitiveness of the field is not a fertile ground for this type of “deep” collaborative effort. But the authors themselves are already putting into practice the principles they advocate in their article.
“We have a group of 20 researchers (10 theorists and 10 experimentalists), about half in the US and half in the UK, Switzerland and Portugal,” says Zach Mainen. The group will focus on a single well-defined goal: foraging behavior for food and water in the mouse, recording activity from as much of the brain as possible, at least several dozen brain areas.
“By collaboration, we don’t mean business as usual; we really mean it”, concludes Zach Mainen. “We’ll have 10 labs doing the same experiments, with the same gear, the same computer programs. The data we will obtain will go into the cloud and be shared by the 20 labs. It’ll be almost as a global lab, except it will be distributed geographically.”
For as long as scientists have been listening in on the activity of the brain, they have been trying to understand the source of its noisy, apparently random, activity. In the past 20 years, “balanced network theory” has emerged to explain this apparent randomness through a balance of excitation and inhibition in recurrently coupled networks of neurons. A team of scientists has extended the balanced model to provide deep and testable predictions linking brain circuits to brain activity.
Lead investigators at the University of Pittsburgh say the new model accurately explains experimental findings about the highly variable responses of neurons in the brains of living animals. On Oct. 31, their paper, “The spatial structure of correlated neuronal variability,” was published online by the journal Nature Neuroscience.
The new model provides a much richer understanding of how activity is coordinated between neurons in neural circuits. The model could be used in the future to discover neural “signatures” that predict brain activity associated with learning or disease, say the investigators.
“Normally, brain activity appears highly random and variable most of the time, which looks like a weird way to compute,” said Brent Doiron, associate professor of mathematics at Pitt, senior author on the paper, and a member of the University of Pittsburgh Brain Institute (UPBI). “To understand the mechanics of neural computation, you need to know how the dynamics of a neuronal network depends on the network’s architecture, and this latest research brings us significantly closer to achieving this goal.”
Earlier versions of the balanced network theory captured how the timing and frequency of inputs—excitatory and inhibitory—shaped the emergence of variability in neural behavior, but these models used shortcuts that were biologically unrealistic, according to Doiron.
“The original balanced model ignored the spatial dependence of wiring in the brain, but it has long been known that neuron pairs that are near one another have a higher likelihood of connecting than pairs that are separated by larger distances. Earlier models produced unrealistic behavior—either completely random activity that was unlike the brain or completely synchronized neural behavior, such as you would see in a deep seizure. You could produce nothing in between.”
In the context of this balance, neurons are in a constant state of tension. According to co-author Matthew Smith, assistant professor of ophthalmology at Pitt and a member of UPBI, “It’s like balancing on one foot on your toes. If there are small overcorrections, the result is big fluctuations in neural firing, or communication.”
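To see why such a balance produces large fluctuations rather than a steady drive, here is a toy numerical sketch. It is illustrative only: the input counts, rates and weights are invented, and this is not the spatial model developed by the Pittsburgh team. A model neuron simply sums many excitatory and many inhibitory Poisson inputs whose average contributions nearly cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy balanced-input sketch (illustrative parameters, not the published model):
# a neuron receives excitatory and inhibitory spike counts in 10 ms bins whose
# mean contributions nearly cancel, leaving fluctuation-dominated input.
n_exc, n_inh = 800, 200        # presynaptic excitatory / inhibitory cells
rate_e, rate_i = 5.0, 20.0     # firing rates in spikes per second
w_e, w_i = 1.0, -1.0           # synaptic weights (inhibition is negative)
dt, n_bins = 0.01, 10_000      # bin width in seconds, number of bins

exc = rng.poisson(rate_e * dt, size=(n_bins, n_exc)).sum(axis=1)
inh = rng.poisson(rate_i * dt, size=(n_bins, n_inh)).sum(axis=1)
net = w_e * exc + w_i * inh    # net synaptic drive per bin

print(f"mean excitatory drive per bin: {w_e * exc.mean():6.2f}")
print(f"mean inhibitory drive per bin: {w_i * inh.mean():6.2f}")
print(f"mean net drive:               {net.mean():6.2f}")
print(f"std of net drive:             {net.std():6.2f}")
```

With these made-up numbers, the two opposing drives of roughly 40 spikes per bin cancel to a mean near zero, while the standard deviation of the net drive stays near 9, so what the neuron actually feels is dominated by fluctuations: small overcorrections, big swings.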
The new model accounts for temporal and spatial characteristics of neural networks and the correlations in the activity between neurons—whether firing in one neuron is correlated with firing in another. The model is such a substantial improvement that the scientists could use it to predict the behavior of living neurons examined in the area of the brain that processes the visual world.
After developing the model, the scientists examined data from the living visual cortex and found that their model accurately predicted the behavior of neurons based on how far apart they were. The activity of nearby neuron pairs was strongly correlated. At intermediate distances, pairs of neurons were anticorrelated (when one responded more, the other responded less), and at greater distances still they were independent.
“This model will help us to better understand how the brain computes information because it’s a big step forward in describing how network structure determines network variability,” said Doiron. “Any serious theory of brain computation must take into account the noise in the code. A shift in neuronal variability accompanies important cognitive functions, such as attention and learning, as well as being a signature of devastating pathologies like Parkinson’s disease and epilepsy.”
While the scientists examined the visual cortex, they believe their model could be used to predict activity in other parts of the brain, such as areas that process auditory or olfactory cues. They also believe that the model generalizes to the brains of all mammals. In fact, the team found that a neural signature predicted by their model appeared in the visual cortex of living mice studied by another team of investigators.
“A hallmark of the computational approach that Doiron and Smith are taking is that its goal is to infer general principles of brain function that can be broadly applied to many scenarios. Remarkably, we still don’t have things like the laws of gravity for understanding the brain, but this is an important step for providing good theories in neuroscience that will allow us to make sense of the explosion of new experimental data that can now be collected,” said Nathan Urban, associate director of UPBI.
Just when lighting aficionados were in a dark place, LEDs came to the rescue. Over the past decade, LED technologies – short for light-emitting diode – have swept the lighting industry by offering features such as durability, efficiency and long life.
Now, Princeton engineering researchers have illuminated another path forward for LED technologies by refining the manufacturing of light sources made with crystalline substances known as perovskites, a more efficient and potentially lower-cost alternative to materials used in LEDs found on store shelves.
The researchers developed a technique in which nanoscale perovskite particles self-assemble to produce more efficient, stable and durable perovskite-based LEDs. The advance, reported January 16 in Nature Photonics, could speed the use of perovskite technologies in commercial applications such as lighting, lasers and television and computer screens.
“The performance of perovskites in solar cells has really taken off in recent years, and they have properties that give them a lot of promise for LEDs, but the inability to create uniform and bright nanoparticle perovskite films has limited their potential,” said Barry Rand, an assistant professor of electrical engineering and the Andlinger Center for Energy and the Environment at Princeton.
Read more.
An ultralight high-performance mechanical watch made with graphene is unveiled today in Geneva at the Salon International De La Haute Horlogerie thanks to a unique collaboration.
The University of Manchester has collaborated with the watchmaking brand Richard Mille and McLaren F1 to create the world’s lightest mechanical chronograph by pairing leading graphene research with precision engineering.
The RM 50-03 watch was made using a unique composite incorporating graphene to manufacture a strong but lightweight new case to house the delicate watch mechanism. The graphene composite, known as Graph TPT, weighs less than similar materials previously used in watchmaking.
Graphene is the world’s first two-dimensional material, at just one atom thick. It was first isolated at The University of Manchester in 2004 and has the potential to revolutionise a large number of applications, including high-performance composites for the automotive and aerospace industries, flexible, bendable mobile phones and tablets, and next-generation energy storage.
Read more.
Suppose you woke up in your bedroom with the lights off and wanted to get out. While heading toward the door with your arms out, you would predict the distance to the door based on your memory of your bedroom and the steps you have already taken. If you touched a wall or a piece of furniture, you would refine the prediction. This is an example of how important it is to supplement limited sensory input with your own actions to grasp a situation. How the brain accomplishes such a complex cognitive function is an important topic in neuroscience.
Dealing with limited sensory input is also a ubiquitous issue in engineering. A car navigation system, for example, can predict the current position of the car based on the rotation of the wheels even when a GPS signal is missing or distorted in a tunnel or under skyscrapers. As soon as the clean GPS signal becomes available, the navigation system refines and updates its position estimate. Such iteration of prediction and update is described by a theory called “dynamic Bayesian inference.”
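The predict-and-update cycle in that car-navigation example can be written in a few lines. The sketch below is a generic one-dimensional Kalman-style filter with made-up noise settings, not the model fitted in the study; it just shows how the estimate keeps advancing from self-motion alone while the “GPS” reading is missing, then snaps back once a measurement returns.

```python
import numpy as np

def dynamic_bayes_1d(moves, observations, q=0.05, r=0.5):
    """Minimal 1-D predict/update (Kalman-style) filter.

    moves        : known displacement at each step (self-motion, wheel rotation)
    observations : noisy position reading at each step, or None when unavailable
    q, r         : assumed motion-noise and observation-noise variances
    """
    x, p = 0.0, 1.0                      # position estimate and its uncertainty
    estimates = []
    for u, z in zip(moves, observations):
        x, p = x + u, p + q              # predict: advance by self-motion, uncertainty grows
        if z is not None:                # update: blend in the measurement if it exists
            k = p / (p + r)              # gain: how much to trust the measurement
            x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates

# Example: the "GPS" drops out for steps 3-6 (like the sound being switched off);
# the estimate keeps advancing from self-motion alone, then is corrected again.
rng = np.random.default_rng(1)
true_pos = np.cumsum(np.full(10, 1.0))                 # true position, 1 unit per step
moves = np.full(10, 1.0) + rng.normal(0, 0.1, 10)      # noisy self-motion estimates
obs = [p + rng.normal(0, 0.5) for p in true_pos]       # noisy position readings
for i in range(3, 7):
    obs[i] = None                                      # sensory input omitted
print([round(e, 2) for e in dynamic_bayes_1d(moves, obs)])
```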
In a collaboration of the Neural Computation Unit and the Optical Neuroimaging Unit at the Okinawa Institute of Science and Technology Graduate University (OIST), Dr. Akihiro Funamizu, Prof. Bernd Kuhn, and Prof. Kenji Doya analyzed the brain activity of mice approaching a target under interrupted sensory inputs. This research is supported by the MEXT Kakenhi Project on “Prediction and Decision Making” and the results were published online in Nature Neuroscience on September 19th, 2016.
The team performed surgeries in which a small hole was made in the skulls of mice and a glass cover slip was implanted onto each of their brains over the parietal cortex. Additionally, a small metal headplate was attached in order to keep the head still under a microscope. The cover slip acted as a window through which researchers could record the activities of hundreds of neurons using a calcium-sensitive fluorescent protein that was specifically expressed in neurons in the cerebral cortex. Upon excitation of a neuron, calcium flows into the cell, which causes a change in fluorescence of the protein. The team used a method called two-photon microscopy to monitor the change in fluorescence from the neurons at different depths of the cortical circuit (Figure 1).
(Figure 1: Parietal Cortex. A depiction of the location of the parietal cortex in a mouse brain can be seen on the left. On the right, neurons in the parietal cortex are imaged using two-photon microscopy)
The research team built a virtual reality system in which a mouse can be made to believe it is walking around freely while in reality it is fixed under a microscope. The system includes an air-floated Styrofoam ball on which the mouse can walk and a sound system that emits sounds to simulate movement towards or past a sound source (Figure 2).
(Figure 2: Acoustic Virtual Reality System. Twelve speakers are placed around the mouse. The speakers generate sound based on the movement of the mouse running on the spherical treadmill (left). When the mouse reaches the virtual sound source it will get a droplet of sugar water as a reward)
An experimental trial starts with a sound source simulating a distance of 67 to 134 cm in front of the mouse and 25 cm to its left. As the mouse steps forward and rotates the ball, the sound is adjusted to mimic the mouse approaching the source by increasing in volume and shifting in direction. When the mouse reaches a point just beside the sound source, drops of sugar water come out of a tube in front of the mouse as a reward for reaching the goal. After the mice learn that they will be rewarded at the goal position, they lick the tube more and more as they come closer to the goal, in expectation of the sugar water.
The team then tested what happens if the sound is removed over certain simulated distances, in segments of about 20 cm. Even when the sound was not given, the mice increased licking as they came closer to the goal position, in anticipation of the reward (Figure 3). This means that the mice predicted the goal distance based on their own movement, just as the dynamic Bayesian filter of a car navigation system predicts the car’s location from the rotation of the tires in a tunnel. Many neurons changed their activity depending on the distance to the target, and, interestingly, many of them maintained that activity even when the sound was turned off. Additionally, when the team injected a drug that suppresses neural activity into a region of the mice’s brains called the parietal cortex, the mice did not increase licking when the sound was omitted. This suggests that the parietal cortex plays a role in predicting the goal position.
(Figure 3: Estimation of the goal distance without sound. Mice are eager to find the virtual sound source to get the sugar water reward. When the mice get closer to the goal, they increase licking in expectation of the sugar water reward. They increased licking when the sound is on but also when the sound is omitted. This result suggests that mice estimate the goal distance by taking their own movement into account)
In order to further explore what the activity of these neurons represents, the team applied a probabilistic neural decoding method. Each neuron was observed for over 150 trials of the experiment, so its probability of becoming active at different distances to the goal could be identified. This allowed the team to estimate each mouse’s distance to the goal at each moment from the recorded activities of about 50 neurons. Remarkably, the neurons in the parietal cortex predicted the change in the goal distance due to the mouse’s movement even in the segments where sound feedback was omitted (Figure 4). When the sound was given, the predicted distance became more accurate. These results show that the parietal cortex predicts the distance to the goal from the mouse’s own movements even when sensory inputs are missing, and updates the prediction when sensory inputs are available, in the same form as dynamic Bayesian inference.
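In outline, this kind of probabilistic decoding can be sketched as a small Bayesian calculation. The example below uses invented tuning curves, distances and neuron counts (it is not the authors' actual decoder): each neuron's estimated probability of being active at each goal distance is treated as a likelihood, the likelihoods are multiplied across the recorded population under an independence assumption, and the result is a posterior over distance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 50 neurons, goal distance discretised into 20 bins.
n_neurons, n_bins = 50, 20
distances = np.linspace(0, 100, n_bins)            # distance to goal in cm (invented)

# Invented tuning curves: P(neuron active | distance), bumps at random preferred
# distances. In the experiment these probabilities come from ~150 trials per neuron.
preferred = rng.uniform(0, 100, n_neurons)
p_active = 0.05 + 0.9 * np.exp(
    -0.5 * ((distances[None, :] - preferred[:, None]) / 15.0) ** 2
)

def decode_distance(activity, p_active, prior=None):
    """Posterior over distance bins given a binary activity vector (1 = active)."""
    n = p_active.shape[1]
    prior = np.full(n, 1.0 / n) if prior is None else prior
    # Bernoulli likelihood per neuron, multiplied across the (assumed independent) population.
    likelihood = np.where(activity[:, None] == 1, p_active, 1.0 - p_active).prod(axis=0)
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Simulate activity with the animal 30 cm from the goal, then decode it back.
true_bin = int(np.argmin(np.abs(distances - 30.0)))
activity = (rng.random(n_neurons) < p_active[:, true_bin]).astype(int)
posterior = decode_distance(activity, p_active)
print("true distance bin:", round(distances[true_bin], 1), "cm")
print("decoded (MAP)    :", round(distances[int(np.argmax(posterior))], 1), "cm")
```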
(Figure 4: Distance estimation in the parietal cortex utilizes dynamic Bayesian inference. Probabilistic neural decoding allows for the estimation of the goal distance from neuronal activity imaged from the parietal cortex. Neurons could predict the goal distance even during sound omissions. The prediction became more accurate when sound was given. These results suggest that the parietal cortex predicts the goal distance from movement and updates the prediction with sensory inputs, in the same way as dynamic Bayesian inference)
The hypothesis that the neural circuit of the cerebral cortex realizes dynamic Bayesian inference has been proposed before, but this is the first experimental evidence showing that a region of the cerebral cortex realizes dynamic Bayesian inference using action information. In dynamic Bayesian inference, the brain predicts the present state of the world based on past sensory inputs and motor actions. “This may be the basic form of mental simulation,” Prof. Doya says. Mental simulation is the fundamental process for action planning, decision making, thought and language. Prof. Doya’s team has also shown that a neural circuit including the parietal cortex was activated when human subjects performed mental simulation in a functional MRI scanner. The research team aims to further analyze those data to obtain the whole picture of the mechanism of mental simulation.
Understanding the neural mechanism of mental simulation gives an answer to the fundamental question of “How are thoughts formed?” It should also contribute to our understanding of the causes of psychiatric disorders caused by flawed mental simulation, such as schizophrenia, depression, and autism. Moreover, by understanding the computational mechanisms of the brain, it may become possible to design robots and programs that think like the brain does. This research contributes to the overall understanding of how the brain allows us to function.
A new study is the first to show that living organisms can be persuaded to make silicon-carbon bonds – something only chemists had done before. Scientists at Caltech “bred” a bacterial protein to make the human-made bonds – a finding that has applications in several industries.
Molecules with silicon-carbon bonds, known as organosilicon compounds, are found in pharmaceuticals as well as in many other products, including agricultural chemicals, paints, semiconductors, and computer and TV screens. Currently, these compounds are made synthetically, since silicon-carbon bonds are not found in nature.
The new study demonstrates that biology can instead be used to manufacture these bonds in ways that are more environmentally friendly and potentially much less expensive.
“We decided to get nature to do what only chemists could do – only better,” says Frances Arnold, Caltech’s Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry, and principal investigator of the new research, published in the Nov. 24 issue of the journal Science.
Read more.
(Image caption: Young neurons (pink), responsible for encoding new memories, must compete with mature neurons (green) to survive and integrate into the hippocampal circuit. Credit: Kathleen McAvoy, Sahay Lab)
Making memories stronger and more precise during aging
When it comes to the billions of neurons in your brain, what you see at birth is what you get — except in the hippocampus. Buried deep beneath the folds of the cerebral cortex, neural stem cells in the hippocampus continue to generate new neurons, inciting a struggle between new and old as the new attempt to gain a foothold in the memory-forming center of the brain.
In a study published online in Neuron, Harvard Stem Cell Institute (HSCI) researchers at Massachusetts General Hospital and the Broad Institute of MIT and Harvard in collaboration with an international team of scientists found they could bias the competition in favor of the newly generated neurons.
“The hippocampus allows us to form new memories of ‘what, when and where’ that help us navigate our lives,” said HSCI Principal Faculty member and the study’s corresponding author, Amar Sahay, PhD, “and neurogenesis—the generation of new neurons from stem cells—is critical for keeping similar memories separate.”
As the human brain matures, the connections between older neurons become stronger, more numerous, and more intertwined, making integration for the newly formed neurons more difficult. Neural stem cells become less productive, leading to a decline in neurogenesis. With fewer new neurons to help sort memories, the aging brain can become less efficient at keeping separate and faithfully retrieving memories.
The research team selectively overexpressed a transcription factor, Klf9, only in older neurons in mice, which eliminated more than one-fifth of their dendritic spines, increased the number of new neurons that integrated into the hippocampus circuitry by two-fold, and activated neural stem cells.
When the researchers returned the expression of Klf9 back to normal, the old dendritic spines reformed, restoring competition. However, the previously integrated neurons remained.
“Because we can do this reversibly, at any point in the animal’s life we can rejuvenate the hippocampus with extra new encoding units,” said Sahay, who is also an investigator with the MGH Center for Regenerative Medicine.
The authors employed a complementary strategy in which they deleted a protein important for dendritic spines, Rac1, only in the old neurons and achieved a similar outcome, increasing the survival of the new neurons.
In order to keep two similar memories separate, the hippocampus activates two different populations of neurons to encode each memory, in a process called pattern separation. When there is overlap between these two populations, researchers believe, it is more difficult for an individual to distinguish between two similar memories formed in two different contexts: to tell a Sunday afternoon stroll through the woods from a patrol through enemy territory in a forest, for example. If the memories are encoded in overlapping populations of neurons, the hippocampus may inappropriately retrieve either one. If the memories are encoded in non-overlapping populations of neurons, the hippocampus stores them separately and retrieves them only when appropriate.
Mice with increased neurogenesis had less overlap between the two populations of neurons and had more precise and stronger memories, which, according to Sahay, demonstrates improved pattern separation.
Mice with increased neurogenesis in middle age and aging cohorts exhibited better memory precision.
“We believe that increasing the hippocampus’s ability to do what it is supposed to do, and not retrieve past experiences when it shouldn’t, can help,” Sahay said. This may be particularly useful for individuals suffering from post-traumatic stress disorder, mild cognitive impairment, or age-related memory loss.
Researchers have used CRISPR—a revolutionary new genetic engineering technique—to convert cells isolated from mouse connective tissue directly into neuronal cells.
In 2006, Shinya Yamanaka, a professor at the Institute for Frontier Medical Sciences at Kyoto University at the time, discovered how to revert adult connective tissue cells, called fibroblasts, back into immature stem cells that could differentiate into any cell type. These so-called induced pluripotent stem cells won Yamanaka the Nobel Prize in medicine just six years later for their promise in research and medicine.
Since then, researchers have discovered other ways to convert cells between different types. This is mostly done by introducing many extra copies of “master switch” genes that produce proteins that turn on entire genetic networks responsible for producing a particular cell type.
Now, researchers at Duke University have developed a strategy that avoids the need for the extra gene copies. Instead, a modification of the CRISPR genetic engineering technique is used to directly turn on the natural copies already present in the genome.
These early results indicate that the newly converted neuronal cells show a more complete and persistent conversion than the method where new genes are permanently added to the genome. These cells could be used for modeling neurological disorders, discovering new therapeutics, developing personalized medicines and, perhaps in the future, implementing cell therapy.
The study was published on August 11, 2016, in the journal Cell Stem Cell.
“This technique has many applications for science and medicine. For example, we might have a general idea of how most people’s neurons will respond to a drug, but we don’t know how your particular neurons with your particular genetics will respond,” said Charles Gersbach, the Rooney Family Associate Professor of Biomedical Engineering and director for the Center for Biomolecular and Tissue Engineering at Duke. “Taking biopsies of your brain to test your neurons is not an option. But if we could take a skin cell from your arm, turn it into a neuron, and then treat it with various drug combinations, we could determine an optimal personalized therapy.”
“The challenge is efficiently generating neurons that are stable and have a genetic programming that looks like your real neurons,” says Joshua Black, the graduate student in Gersbach’s lab who led the work. “That has been a major obstacle in this area.”
In the 1950s, Professor Conrad Waddington, a British biologist who helped lay the foundations of developmental biology, suggested that immature stem cells differentiating into specific types of adult cells can be thought of as rolling down the side of a ridged mountain into one of many valleys. As a cell takes a path down a particular slope, its options for a final destination become more limited.
If you want to change that destination, one option is to push the cell vertically back up the mountain—that’s the idea behind reprogramming cells to be induced pluripotent stem cells. Another option is to push it horizontally up and over a hill and directly into another valley.
“If you have the ability to specifically turn on all the neuron genes, maybe you don’t have to go back up the hill,” said Gersbach.
Previous methods have accomplished this by introducing viruses that inject extra copies of genes to produce a large number of proteins called master transcription factors. Unique to each cell type, these proteins bind to thousands of places in the genome, turning on that cell type’s particular gene network. This method, however, has some drawbacks.
“Rather than using a virus to permanently introduce new copies of existing genes, it would be desirable to provide a temporary signal that changes the cell type in a stable way,” said Black. “However, doing so in an efficient manner might require making very specific changes to the genetic program of the cell.”
In the new study, Black, Gersbach, and colleagues used CRISPR to precisely activate the three genes that naturally produce the master transcription factors that control the neuronal gene network, rather than having a virus introduce extra copies of those genes.
CRISPR is a modified version of a bacterial defense system that targets and slices apart the DNA of familiar invading viruses. In this case, however, the system has been tweaked so that no slicing is involved. Instead, the machinery that identifies specific stretches of DNA has been left intact, and it has been hitched to a gene activator.
The CRISPR system was administered to mouse fibroblasts in the laboratory. The tests showed that, once activated by CRISPR, the three neuronal master transcription factor genes robustly activated neuronal genes. This caused the fibroblasts to conduct electrical signals—a hallmark of neuronal cells. And even after the CRISPR activators went away, the cells retained their neuronal properties.
“When blasting cells with master transcription factors made by viruses, it’s possible to make cells that behave like neurons,” said Gersbach. “But if they truly have become autonomously functioning neurons, then they shouldn’t require the continuous presence of that external stimulus.”
The experiments showed that the new CRISPR technique produced neuronal cells with an epigenetic program at the target genes matching the neuronal markings naturally found in mouse brain tissue.
“The method that introduces extra genetic copies with the virus produces a lot of the transcription factors, but very little is being made from the native copies of these genes,” explained Black. “In contrast, the CRISPR approach isn’t making as many transcription factors overall, but they’re all being produced from the normal chromosomal position, which is a powerful difference since they are stably activated. We’re flipping the epigenetic switch to convert cell types rather than driving them to do so synthetically.”
The next steps, according to Black, are to extend the method to human cells, raise the efficiency of the technique and try to clear other epigenetic hurdles so that it could be applied to model particular diseases.
“In the future, you can imagine making neurons and implanting them in the brain to treat Parkinson’s disease or other neurodegenerative conditions,” said Gersbach. “But even if we don’t get that far, you can do a lot with these in the lab to help develop better therapies.”
Scientists from Singapore have streamlined the process of using human stem cells to mass produce GABAergic neurons (GNs) in the laboratory. This new protocol provides scientists with a robust source of GNs to study many psychiatric and neurological disorders such as autism, schizophrenia, and epilepsy, which are thought to develop at least in part due to GN dysfunction.
GNs are inhibitory neurons that reduce neuronal activation and make up roughly 20 per cent of the neurons in the human brain. They work alongside excitatory neurons (ENs) to ensure balanced neural activity for normal brain function. The coordinated interplay between GNs and ENs orchestrates specific activation patterns in the brain, which are responsible for our behaviour, emotions, and higher reasoning. Functional impairment of GNs results in imbalanced neural activity, thereby contributing to the symptoms observed in many psychiatric disorders.
The availability of high-quality, functional human GN populations would facilitate the development of good models for studying psychiatric disorders, as well as for screening drug effects on specific populations of neurons. Scientists worldwide have been hard at work trying to generate a consistent supply of GNs in the laboratory, but have been faced with many challenges. Protocols that involve multiple complex stages, give poor yields, and require a long time to generate mature and functional GNs are just some of the limitations encountered.
Many of these limitations have now been overcome by the development of a rapid and robust protocol to generate GNs from human pluripotent stem cells (hPSCs) in a single step. With the addition of a specific combination of factors, hPSCs turn into mature and functional GNs in a mere six to eight weeks. This is about two to three times faster than the 10 to 30 weeks required by previous protocols. In addition, this new protocol is highly efficient, with GNs making up more than 80 per cent of the final neuron population.
To develop this protocol, the team from Duke-NUS Medical School (Duke-NUS), A*STAR's Genome Institute of Singapore (GIS) and the National Neuroscience Institute (NNI) first identified genetic factors involved in GN development in the brain. The team then tested many different combinations of these factors and confirmed that mature and functional human GNs were indeed generated.
“Just like how a balance of Yin and Yang is needed in order to stay healthy, a balance of ENs and GNs is required for normal brain function. We now know a fair bit about ENs because we have good protocols to make them. However, we still know very little of the other player, the GNs, because current protocols do not work well. Yet, when these GNs malfunction our brain goes haywire,” commented Dr Alfred Sun, a Research Fellow at NNI and co-first author of the publication alongside Mr Qiang Yuan, an NUS Graduate School PhD student.
“Our quick, efficient and easy way to mass produce GNs for lab use is a game changer for neuroscience and drug discovery. With increased recognition of the essential role of GNs in almost all neurological and psychiatric diseases, we envisage our new method to be widely used to advance research and drug screening,” said Dr Shawn Je, Assistant Professor in the Neuroscience and Behavioural Disorders Programme at Duke-NUS, and senior author of the study.
The speed and efficiency of generating GNs with this new protocol give researchers unprecedented access to the quantities of neurons necessary for studying the role of GNs in disease mechanisms. Drugs and small molecules may now be screened at an unparalleled rate to discover the next blockbuster treatment for autism, schizophrenia, and epilepsy.
(Image caption: A new technique called magnified analysis of proteome (MAP), developed at MIT, allows researchers to peer at molecules within cells or take a wider view of the long-range connections between neurons. Credit: Courtesy of the researchers)
Imaging the brain at multiple size scales
MIT researchers have developed a new technique for imaging brain tissue at multiple scales, allowing them to peer at molecules within cells or take a wider view of the long-range connections between neurons.
This technique, known as magnified analysis of proteome (MAP), should help scientists in their ongoing efforts to chart the connectivity and functions of neurons in the human brain, says Kwanghun Chung, the Samuel A. Goldblith Assistant Professor in the Departments of Chemical Engineering and Brain and Cognitive Sciences, and a member of MIT’s Institute for Medical Engineering and Science (IMES) and Picower Institute for Learning and Memory.
“We use a chemical process to make the whole brain size-adjustable, while preserving pretty much everything. We preserve the proteome (the collection of proteins found in a biological sample), we preserve nanoscopic details, and we also preserve brain-wide connectivity,” says Chung, the senior author of a paper describing the method in the July 25 issue of Nature Biotechnology.
The researchers also showed that the technique is applicable to other organs such as the heart, lungs, liver, and kidneys.
The paper’s lead authors are postdoc Taeyun Ku, graduate student Justin Swaney, and visiting scholar Jeong-Yoon Park.
Multiscale imaging
The new MAP technique builds on a tissue transformation method known as CLARITY, which Chung developed as a postdoc at Stanford University. CLARITY preserves cells and molecules in brain tissue and makes them transparent so the molecules inside the cell can be imaged in 3-D. In the new study, Chung sought a way to image the brain at multiple scales, within the same tissue sample.
“There is no effective technology that allows you to obtain this multilevel detail, from brain region connectivity all the way down to subcellular details, plus molecular information,” he says.
To achieve that, the researchers developed a method to reversibly expand tissue samples in a way that preserves nearly all of the proteins within the cells. Those proteins can then be labeled with fluorescent molecules and imaged.
The technique relies on flooding the brain tissue with acrylamide polymers, which can form a dense gel. In this case, the gel is 10 times denser than the one used for the CLARITY technique, which gives the sample much more stability. This stability allows the researchers to denature and dissociate the proteins inside the cells without destroying the structural integrity of the tissue sample.
Before denaturing the proteins, the researchers attach them to the gel using formaldehyde, as Chung did in the CLARITY method. Once the proteins are attached and denatured, the gel expands the tissue sample to four or five times its original size.
“It is reversible and you can do it many times,” Chung says. “You can then use off-the-shelf molecular markers like antibodies to label and visualize the distribution of all these preserved biomolecules.”
There are hundreds of thousands of commercially available antibodies that can be used to fluorescently tag specific proteins. In this study, the researchers imaged neuronal structures such as axons and synapses by labeling proteins found in those structures, and they also labeled proteins that allow them to distinguish neurons from glial cells.
“We can use these antibodies to visualize any target structures or molecules,” Chung says. “We can visualize different neuron types and their projections to see their connectivity. We can also visualize signaling molecules or functionally important proteins.”
High resolution
Once the tissue is expanded, the researchers can use any of several common microscopes to obtain images with a resolution as high as 60 nanometers, much better than the usual 200- to 250-nanometer limit of light microscopes, which are constrained by the wavelength of visible light. The researchers also demonstrated that this approach works with relatively large tissue samples, up to 2 millimeters thick.
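The arithmetic behind that figure is straightforward. As a rough, hedged illustration (the simple scaling assumption below is mine, not a formula quoted from the paper): if the tissue is expanded four- to five-fold while the microscope itself still resolves only 200 to 250 nanometers, then features originally about 40 to 60 nanometers apart become distinguishable.

```python
# Back-of-the-envelope illustration with assumed round numbers (not from the paper):
# if tissue is physically expanded by a linear factor E, a diffraction-limited
# microscope with resolution R resolves features that were originally only
# about R / E apart before expansion.

def effective_resolution_nm(diffraction_limit_nm, expansion_factor):
    """Approximate pre-expansion feature size resolvable after expansion."""
    return diffraction_limit_nm / expansion_factor

for limit_nm in (200, 250):        # typical light-microscope resolution limits (nm)
    for expansion in (4, 5):       # MAP's reported four- to five-fold expansion
        res = effective_resolution_nm(limit_nm, expansion)
        print(f"{limit_nm} nm limit, {expansion}x expansion -> ~{res:.0f} nm effective resolution")
```

The printed values fall between roughly 40 and 60 nanometers, consistent with the resolution the researchers report.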
“This is, as far as I know, the first demonstration of super-resolution proteomic imaging of millimeter-scale samples,” Chung says.
“This is an exciting advance for brain mapping, a technique that reveals the molecular and connectional architecture of the brain with unprecedented detail,” says Sebastian Seung, a professor of computer science at the Princeton Neuroscience Institute, who was not involved in the research.
Currently, efforts to map the connections of the human brain rely on electron microscopy, but Chung and colleagues demonstrated that the higher-resolution MAP imaging technique can trace those connections more accurately.
Chung’s lab is now working on speeding up the imaging and the image processing, which is challenging because there is so much data generated from imaging the expanded tissue samples.
“It’s already easier than other techniques because the process is really simple and you can use off-the-shelf molecular markers, but we are trying to make it even simpler,” Chung says.
Beta rhythms, or waves of brain activity with an approximately 20 Hz frequency, accompany vital fundamental behaviors such as attention, sensation and motion and are associated with some disorders such as Parkinson's disease. Scientists have debated how the spontaneous waves emerge, and they have not yet determined whether the waves are just a byproduct of activity or play a causal role in brain functions. Now, in a new paper led by Brown University neuroscientists, the field has a specific new mechanistic explanation of beta waves to consider.
The new theory, presented in the Proceedings of the National Academy of Sciences, is the product of several lines of evidence: external brainwave readings from human subjects, sophisticated computational simulations and detailed electrical recordings from two mammalian model organisms.
“A first step to understanding beta’s causal role in behavior or pathology, and how to manipulate it for optimal function, is to understand where it comes from at the cellular and circuit level,” said corresponding author Stephanie Jones, research associate professor of neuroscience at Brown University. “Our study combined several techniques to address this question and proposed a novel mechanism for spontaneous neocortical beta. This discovery suggests several possible mechanisms through which beta may impact function.”
Making waves
The team started by using external magnetoencephalography (MEG) sensors to observe beta waves in the human somatosensory cortex, which processes sense of touch, and the inferior frontal cortex, which is associated with higher cognition.
They closely analyzed the beta waves, finding they lasted at most a mere 150 milliseconds and had a characteristic wave shape, featuring a large, steep valley in the middle of the wave.
The question from there was what neural activity in the cortex could produce such waves. The team attempted to recreate the waves using a computer model of cortical circuitry, made up of a multilayered cortical column containing multiple cell types across different layers. Importantly, the model was designed to include pyramidal neurons, a cell type whose activity is thought to dominate the human MEG recordings.
They found that they could closely replicate the shape of the beta waves in the model by delivering two kinds of excitatory synaptic stimulation to distinct layers in the cortical columns of cells: one that was weak and broad in duration to the lower layers, contacting spiny dendrites on the pyramidal neurons close to the cell body; and another that was stronger and briefer, lasting 50 milliseconds (i.e., one beta period), to the upper layers, contacting dendrites farther away from the cell body. The strong distal drive created the valley in the waveform that determined the beta frequency.
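To make that two-drive picture concrete, here is a minimal toy sketch in Python, not the authors' biophysically detailed cortical-column model: it superimposes a weak, broad drive and a strong, brief drive of opposite sign (all shapes, widths and amplitudes are illustrative assumptions) to produce a burst with a large, steep central valley roughly one 50-millisecond beta period wide.

```python
import numpy as np

# Toy illustration only: Gaussian stand-ins for the two excitatory drives.
fs = 1000                                   # samples per second
t = np.arange(-0.15, 0.15, 1 / fs)          # 300 ms window around one beta event

beta_freq = 20.0                            # Hz
beta_period_ms = 1000.0 / beta_freq         # 1 / 20 Hz = 50 ms, the brief drive's duration

weak_broad = 0.3 * np.exp(-t**2 / (2 * 0.05**2))      # weak, long-lasting proximal drive
strong_brief = -1.0 * np.exp(-t**2 / (2 * 0.012**2))  # strong drive spanning roughly 50 ms

waveform = weak_broad + strong_brief        # beta-event-like shape with a steep central valley

print(f"One beta period at {beta_freq:.0f} Hz lasts {beta_period_ms:.0f} ms")
print(f"Deepest point of the simulated valley (arbitrary units): {waveform.min():.2f}")
```

In the actual study, the two drives were delivered to specific layers of a column model containing multiple cell types; the sketch only captures the resulting waveform shape.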
Meanwhile, they modeled other hypotheses about how beta waves might emerge, but found that none of them could reproduce the observed waveform.
With a model of what to look for, the team then tested it by searching for a real biological correlate in two animal models. The team analyzed measurements in the cortex of mice and rhesus macaques and found direct confirmation that this kind of stimulation and response occurred across the cortical layers in the animal models.
“The ultimate test of the model predictions is to record the electrical signals inside the brain,” Jones said. “These recordings supported our model predictions.”
Beta in the brain
Neither the computer models nor the measurements traced the source of the excitatory synaptic stimulations that drive the pyramidal neurons to produce the beta waves, but Jones and her co-authors posit that they likely come from the thalamus, deeper in the brain. Projections from the thalamus happen to be in exactly the right places needed to deliver signals to the right positions on the dendrites of pyramidal neurons in the cortex. The thalamus is also known to send out bursts of activity that last 50 milliseconds, as predicted by their theory.
With a new biophysical theory of how the waves emerge, the researchers hope the field can now investigate whether beta rhythms affect or merely reflect behavior and disease. Jones's team, in collaboration with Professor of Neuroscience Christopher Moore at Brown, is now testing predictions from the theory that beta may decrease sensory or motor information processing in the brain. New hypotheses are that the inputs that create beta may also stimulate inhibitory neurons in the top layers of the cortex, or may saturate the activity of the pyramidal neurons, thereby reducing their ability to process information; or that the thalamic bursts that give rise to beta occupy the thalamus to the point where it doesn't pass information along to the cortex.
Figuring this out could lead to new therapies based on manipulating beta, Jones said.
“An active and growing field of neuroscience research is trying to manipulate brain rhythms for optimal function with stimulation techniques,” she said. “We hope that our novel finding on the neural origin of beta will help guide research to manipulate beta, and possibly other rhythms, for improved function in sensorimotor pathologies.”