The Fairest Of Them All.

More Posts from Ourvioletdeath and Others

7 years ago
Neurons Have The Right Shape For Deep Learning

Deep learning has brought about machines that can ‘see’ the world more like humans can, and recognize language. And while deep learning was inspired by the human brain, the question remains: Does the brain actually learn this way? The answer has the potential to create more powerful artificial intelligence and unlock the mysteries of human intelligence.

In a study published in eLife, CIFAR Fellow Blake Richards and his colleagues unveiled an algorithm that simulates how deep learning could work in our brains. The network shows that certain mammalian neurons have the shape and electrical properties that are well-suited for deep learning. Furthermore, it represents a more biologically realistic way of how real brains could do deep learning.

Research was conducted by Richards and his graduate student Jordan Guerguiev, at the University of Toronto, Scarborough, in collaboration with Timothy Lillicrap at Google DeepMind. Their algorithm was based on neurons in the neocortex, which is responsible for higher order thought.

“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”

Using this knowledge of the neurons’ structure, Richards and Guerguiev built a model that similarly received signals in segregated compartments. These sections allowed simulated neurons in different layers to collaborate, achieving deep learning.
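The published model is considerably more detailed, but a minimal sketch of the idea, with purely illustrative names and parameters (not the authors' code), keeps the feedforward "basal" input and the top-down "apical" feedback in separate compartments of each simulated unit:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SegregatedUnitLayer:
    """Toy layer of two-compartment units: the basal compartment receives
    the feedforward signal, the apical compartment receives feedback from
    a higher layer, and the two are kept separate."""

    def __init__(self, n_in, n_units, n_feedback):
        self.W_basal = rng.normal(0.0, 0.1, size=(n_units, n_in))        # feedforward weights
        self.W_apical = rng.normal(0.0, 0.1, size=(n_units, n_feedback)) # feedback weights

    def forward(self, x):
        # The basal compartment ("roots") drives the unit's firing rate.
        self.rate = sigmoid(self.W_basal @ x)
        return self.rate

    def feedback(self, top_down):
        # The apical compartment ("branches near the surface") integrates a
        # top-down signal without directly changing the output.
        self.apical = sigmoid(self.W_apical @ top_down)
        return self.apical

    def update(self, x, lr=0.1):
        # Illustrative local rule: nudge feedforward weights so the firing
        # rate moves toward the apically signalled target.
        error = self.apical - self.rate
        self.W_basal += lr * np.outer(error * self.rate * (1.0 - self.rate), x)

layer = SegregatedUnitLayer(n_in=4, n_units=3, n_feedback=2)
x = rng.normal(size=4)
layer.forward(x)
layer.feedback(np.array([1.0, -1.0]))
layer.update(x)
```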

“It’s just a set of simulations so it can’t tell us exactly what our brains are doing, but it does suggest enough to warrant further experimental examination of whether our own brains may use the same sort of algorithms that they use in AI,” Richards says.

This research idea goes back to AI pioneers Geoffrey Hinton, a CIFAR Distinguished Fellow and founder of the Learning in Machines & Brains program, and program Co-Director Yoshua Bengio, and was one of the main motivations for founding the program in the first place. These researchers sought not only to develop artificial intelligence, but also to understand how the human brain learns, says Richards.

In the early 2000s, Richards and Lillicrap took a course with Hinton at the University of Toronto and were convinced deep learning models were capturing “something real” about how human brains work. At the time, there were several challenges to testing that idea. Firstly, it wasn’t clear that deep learning could achieve human-level skill. Secondly, the algorithms violated biological facts proven by neuroscientists.

Now, Richards and a number of researchers are looking to bridge the gap between neuroscience and AI. This paper builds on research from Bengio’s lab on a more biologically plausible way to train neural nets and an algorithm developed by Lillicrap that further relaxes some of the rules for training neural nets. The paper also incorporates research from Matthew Larkum on the structure of neurons in the neocortex. By combining neurological insights with existing algorithms, Richards’ team was able to create a better and more realistic algorithm simulating learning in the brain.

The tree-like neocortex neurons are only one of many types of cells in the brain. Richards says future research should model different brain cells and examine how they could interact together to achieve deep learning. In the long term, he hopes researchers can overcome major challenges, such as how to learn through experience without receiving feedback.

“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” Richards says.

7 years ago
Peering Into Neural Networks

Neural networks, which learn to perform computational tasks by analyzing large sets of training data, are responsible for today’s best-performing artificial intelligence systems, from speech recognition systems, to automatic translators, to self-driving cars.

But neural nets are black boxes. Once they’ve been trained, even their designers rarely have any idea what they’re doing — what data elements they’re processing and how.

Two years ago, a team of computer-vision researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) described a method for peering into the black box of a neural net trained to identify visual scenes. The method provided some interesting insights, but it required data to be sent to human reviewers recruited through Amazon’s Mechanical Turk crowdsourcing service.

At this year’s Computer Vision and Pattern Recognition conference, CSAIL researchers presented a fully automated version of the same system. Where the previous paper reported the analysis of one type of neural network trained to perform one task, the new paper reports the analysis of four types of neural networks trained to perform more than 20 tasks, including recognizing scenes and objects, colorizing grey images, and solving puzzles. Some of the new networks are so large that analyzing any one of them would have been cost-prohibitive under the old method.

The researchers also conducted several sets of experiments on their networks that not only shed light on the nature of several computer-vision and computational-photography algorithms, but could also provide some evidence about the organization of the human brain.

Neural networks are so called because they loosely resemble the human nervous system, with large numbers of fairly simple but densely connected information-processing “nodes.” Like neurons, a neural net’s nodes receive information signals from their neighbors and then either “fire” — emitting their own signals — or don’t. And as with neurons, the strength of a node’s firing response can vary.
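In code, a node of that kind reduces to a weighted sum of incoming signals passed through a nonlinearity; a minimal, purely illustrative sketch:

```python
import numpy as np

def node_response(signals, weights, bias):
    """One 'node': weigh the incoming signals, add a bias, then apply a
    ReLU so the unit either stays silent or fires with graded strength."""
    return max(0.0, float(np.dot(weights, signals) + bias))

signals = np.array([0.2, 0.9, 0.1])   # signals arriving from neighbouring nodes
weights = np.array([0.5, -0.3, 0.8])  # connection strengths
print(node_response(signals, weights, bias=0.1))
```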

In both the new paper and the earlier one, the MIT researchers doctored neural networks trained to perform computer vision tasks so that they disclosed the strength with which individual nodes fired in response to different input images. Then they selected the 10 input images that provoked the strongest response from each node.
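That bookkeeping step is straightforward to sketch; here the activation matrix is a random placeholder standing in for the recorded responses (the real networks and image sets are far larger):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the recorded data: the firing strength of every node for
# every input image.
n_images, n_nodes = 1000, 64
activations = rng.random((n_images, n_nodes))   # rows: images, columns: nodes

# For each node, keep the 10 images that provoked its strongest response.
top_k = 10
order = np.argsort(activations, axis=0)          # ascending per column
top_images_per_node = order[-top_k:][::-1]       # strongest first

print(top_images_per_node[:, 0])   # indices of node 0's ten strongest images
```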

In the earlier paper, the researchers sent the images to workers recruited through Mechanical Turk, who were asked to identify what the images had in common. In the new paper, they use a computer system instead.

“We catalogued 1,100 visual concepts — things like the color green, or a swirly texture, or wood material, or a human face, or a bicycle wheel, or a snowy mountaintop,” says David Bau, an MIT graduate student in electrical engineering and computer science and one of the paper’s two first authors. “We drew on several data sets that other people had developed, and merged them into a broadly and densely labeled data set of visual concepts. It’s got many, many labels, and for each label we know which pixels in which image correspond to that label.”

The paper’s other authors are Bolei Zhou, co-first author and fellow graduate student; Antonio Torralba, MIT professor of electrical engineering and computer science; Aude Oliva, CSAIL principal research scientist; and Aditya Khosla, who earned his PhD as a member of Torralba’s group and is now the chief technology officer of the medical-computing company PathAI.

The researchers also knew which pixels of which images corresponded to a given network node’s strongest responses. Today’s neural nets are organized into layers. Data are fed into the lowest layer, which processes them and passes them to the next layer, and so on. With visual data, the input images are broken into small chunks, and each chunk is fed to a separate input node.

For every strong response from a high-level node in one of their networks, the researchers could trace back the firing patterns that led to it, and thus identify the specific image pixels it was responding to. Because their system could frequently identify labels that corresponded to the precise pixel clusters that provoked a strong response from a given node, it could characterize the node’s behavior with great specificity.
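One way to score such a pixel-level match, sketched under the assumption of a coarse activation map and a binary mask of where a labeled concept appears (both arrays are placeholders, and the thresholds are illustrative rather than taken from the paper):

```python
import numpy as np

def activation_mask(act_map, scale, quantile=0.99):
    """Upsample a node's low-resolution activation map to image size and
    keep only its strongest responses as a binary mask."""
    upsampled = np.kron(act_map, np.ones((scale, scale)))   # nearest-neighbour upsampling
    return upsampled >= np.quantile(upsampled, quantile)

def iou(mask_a, mask_b):
    """Intersection over union: how well the node's active pixels line up
    with the pixels labeled as a given visual concept."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

rng = np.random.default_rng(2)
act_map = rng.random((7, 7))                     # node's response over a coarse 7x7 grid
concept_mask = rng.random((112, 112)) > 0.97     # placeholder pixel labels for one concept
print(iou(activation_mask(act_map, scale=16), concept_mask))
```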

The researchers organized the visual concepts in their database into a hierarchy. Each level of the hierarchy incorporates concepts from the level below, beginning with colors and working upward through textures, materials, parts, objects, and scenes. Typically, lower layers of a neural network would fire in response to simpler visual properties — such as colors and textures — and higher layers would fire in response to more complex properties.

But the hierarchy also allowed the researchers to quantify the emphasis that networks trained to perform different tasks placed on different visual properties. For instance, a network trained to colorize black-and-white images devoted a large majority of its nodes to recognizing textures. Another network, when trained to track objects across several frames of video, devoted a higher percentage of its nodes to scene recognition than it did when trained to recognize scenes; in that case, many of its nodes were in fact dedicated to object detection.

One of the researchers’ experiments could conceivably shed light on a vexed question in neuroscience. Research involving human subjects with electrodes implanted in their brains to control severe neurological disorders has seemed to suggest that individual neurons in the brain fire in response to specific visual stimuli. This hypothesis, originally called the grandmother-neuron hypothesis, is more familiar to a recent generation of neuroscientists as the Jennifer-Aniston-neuron hypothesis, after the discovery that several neurological patients had neurons that appeared to respond only to depictions of particular Hollywood celebrities.

Many neuroscientists dispute this interpretation. They argue that shifting constellations of neurons, rather than individual neurons, anchor sensory discriminations in the brain. Thus, the so-called Jennifer Aniston neuron is merely one of many neurons that collectively fire in response to images of Jennifer Aniston. And it’s probably part of many other constellations that fire in response to stimuli that haven’t been tested yet.

Because their new analytic technique is fully automated, the MIT researchers were able to test whether something similar takes place in a neural network trained to recognize visual scenes. In addition to identifying individual network nodes that were tuned to particular visual concepts, they also considered randomly selected combinations of nodes. Combinations of nodes, however, picked out far fewer visual concepts than individual nodes did — roughly 80 percent fewer.
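A rough sketch of that comparison, using the same thresholded-overlap idea on placeholder data (random arrays and illustrative thresholds, so the numbers will not reproduce the reported 80 percent gap):

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder data: per-node activation maps for one image, plus binary
# masks marking where each labeled concept appears.
n_nodes, n_concepts, size = 32, 20, 56
node_maps = rng.random((n_nodes, size, size))
concept_masks = rng.random((n_concepts, size, size)) > 0.95

def detected_concepts(act_map, masks, quantile=0.99, min_iou=0.04):
    """Concepts whose labeled pixels overlap the map's strongest responses."""
    active = act_map >= np.quantile(act_map, quantile)
    hits = set()
    for c, mask in enumerate(masks):
        union = np.logical_or(active, mask).sum()
        if union and np.logical_and(active, mask).sum() / union >= min_iou:
            hits.add(c)
    return hits

# Concepts picked out by individual nodes...
individual = set().union(*(detected_concepts(m, concept_masks) for m in node_maps))

# ...versus concepts picked out by randomly weighted combinations of nodes.
combined = set()
for _ in range(n_nodes):
    weights = rng.normal(size=n_nodes)
    combo_map = np.tensordot(weights, node_maps, axes=1)
    combined |= detected_concepts(combo_map, concept_masks)

print(len(individual), len(combined))
```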

“To my eye, this is suggesting that neural networks are actually trying to approximate getting a grandmother neuron,” Bau says. “They’re not trying to just smear the idea of grandmother all over the place. They’re trying to assign it to a neuron. It’s this interesting hint of this structure that most people don’t believe is that simple.”

7 years ago
Why Do Some People With Cystic Fibrosis Live Much Longer Than Others?

Researchers at Boston Children’s Hospital have found that longevity among patients with cystic fibrosis (CF) may be linked to their genes.

Studying over 600 patients with CF, the scientists found five individuals who stood out because of their age (in their 50s and 60s) and their relatively preserved lung function. By sequencing the genomes of those five patients, the scientists found a set of rare and never-before-discovered genetic variants, related to so-called epithelial sodium channels (ENaCs), that might help explain their longevity and stable lung function.

“Our hypothesis is that these ENaC mutations help to rehydrate the airways of CF patients, making it less likely for detrimental bacteria to take up residence in the lungs,” said Ruobing Wang, MD, a pulmonologist at Boston Children’s.

Read more

Funding: This work was supported by the Gene Discovery Core of The Manton Center for Orphan Disease Research, Boston Children’s Hospital; Gilda and Alfred Slifka, Gail and Adam Slifka and the CFMS Fund; the National Institute of Arthritis and Musculoskeletal and Skin Diseases; the National Institutes of Health (NIH) (U54 HD090255, P30 DK079307); the Eunice Kennedy Shriver National Institute of Child Health and Human Development/National Human Genome Research Institute/NIH (U19 HD077671); the May Family Fund; Cystic Fibrosis Foundation Therapeutics, Inc. (R01 HL090136 and U01 HL100402 RFA-HL-09-004); and the National Heart, Lung, and Blood Institute (R37HL51856).

Raise your voice in support of expanding federal funding for life-saving medical research by joining the AAMC’s advocacy community.

6 years ago

Wayfaring: do you want to go ahead and get your flu shot today?

Patient: no, I don’t get them. I’m not against vaccines, just the flu shot.

Wayfaring: why is that?

Patient: I’m just not comfortable with all the bad stuff they put in it.

Wayfaring: ok well let’s talk about it. What substances in particular worry you?

Patient: oh uhhh… you know… the bad ones.

Wayfaring: 

*chemicals*
7 years ago

Consider A Pet Monkey
6 years ago

The Hand is here! Where is Voigt’s dinner? GUMBY IS HERE ALSO

7 years ago

While putting your favorite condiment on a sandwich, you accidentally make a magical occult symbol and summon a demon.

7 years ago

Terraforming Mars
7 years ago
[Wondermark] This Is The Coolest Thing.

6 years ago

😍😍😍😍😍😍

Levi Knocking Kenny Out, S03e02
