(Pie -> cat courtesy of https://affinelayer.com/pixsrv/ )
I work with neural networks, which are a type of machine learning computer program that learn by looking at examples. They’re used for all sorts of serious applications, like facial recognition and ad targeting and language translation. I, however, give them silly datasets and ask them to do their best.
So, for my latest experiment, I collected the titles of 2237 sweet and savory pie recipes from a variety of sources including Wikipedia and David Shields. I simply gave them to a neural network with no explanation (I never give it an explanation) and asked it to try to generate more.
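The network generates text one character at a time, based on which characters it has seen follow which. The real experiment used char-rnn, a recurrent neural network that is far more capable, but the character-by-character idea can be illustrated with a much simpler toy model (the titles below are just stand-ins for the real dataset):

```python
import random

# Toy character-level model: learn, for each character, which characters
# tend to follow it, then generate new titles one character at a time.
# (char-rnn is far more sophisticated; this bigram sketch only
# illustrates the character-by-character generation idea.)
titles = ["Apple Pie", "Pecan Pie", "Cream Pie", "Strawberry Tart"]

# Count which character follows which, using "^" to mark a title's start
# and "\n" to mark its end.
follows = {}
for title in titles:
    prev = "^"
    for ch in title + "\n":
        follows.setdefault(prev, []).append(ch)
        prev = ch

def generate(max_len=30):
    out, prev = "", "^"
    while len(out) < max_len:
        ch = random.choice(follows[prev])
        if ch == "\n":          # end-of-title marker
            break
        out += ch
        prev = ch
    return out

print(generate())  # pie-ish gibberish, not unlike the early checkpoints below
```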
Its very first attempt left something to be desired, but it had figured out that "P”, “i”, and “e” were important somehow.
e Piee i m t iee ic ic Pa ePeeetae a e eee ema iPPeaia eieer i i i ie e eciie Pe eaei a
Second checkpoint. Progress: Pie.
Pie Pee Pie Pimi Pie Pim Cue Pie Pie (er Wie Pae Pim Piu Pie Pim Piea Cre Pia Pie Pim Pim Pie Pie Piee Pie Piee
This is expected, since the word “pie” is both simple and by far the most common word in the dataset. It stays in the stage above for rather a while, able to spell only “Pie” and nothing else. It’s like evolution trying to get past the single-celled organism stage. After 4x more time has elapsed, it finally adds a few more words: “apple”, “cream”, and “tart”. Then, at the sixth checkpoint, “pecan”.
Seventh checkpoint: These are definitely pies. We are still working on spelling “strawberry”, however.
Boatin Batan Pie Shrawberry Pie With An Cream Pie Cream Pie Sweesh Pie Ipple Pie Wrasle Cream Pie Swrawberry Pie Cream Pie Sae Fart Tart Cheem Pie Sprawberry Cream Pie Cream Pie
10th checkpoint. Still working.
Coscard Pie Tluste Trenss Pie Wot Flustickann Fart Oag’s Apple Pie Daush Flumberry O Cheesaliane Rutter Chocklnd Apple Rhupperry pie Flonberry Peran Pie Blumbberry Cream Pie Futters Whabarb Wottiry Rasty Pasty Kamphible Idponsible Swarlot Cream Cream Cront
16th checkpoint. Showing some signs of improvement? Maybe. It thinks Qtrupberscotch is a thing.
Buttermitk Tlreed whonkie Pie Spiatake Bog Pastry Taco Custard Pie Apple Pie With Pharf Calamed apple Freech Fodge Cranberry Rars Farb Fart Feep-Lisf Pie With Qpecisn-3rnemerry Fluit Turd Turbyy Raisin Pie Forp Damelnut Pie Flazed Berry Pie Figi’s Chicken Sugar Pie Sauce and Butterm’s Spustacian Pie Fill Pie With Boubber Pie Bok Pie Booble Rurble Shepherd’s Parfate Ner with Cocoatu Vnd Pie Iiakiay Coconate Meringue Pie With Spiced Qtrupberscotch Apple Pie Bustard Chiffon Pie
Finally we arrive at what, according to the neural network, is Peak Pie. It tracks its own progress by testing itself against the original dataset and scoring itself, and here is where it thinks it did the best.
It did in fact come up with some that might actually work, in a ridiculously-decadent sort of way.
Baked Cream Puff Cake Four Cream Pie Reese’s Pecan Pie Fried Cream Pies Eggnog Peach Pie #2 Fried Pumpkin Pie Whopper pie Rice Krispie-Chiffon Pie Apple Pie With Fudge Treats Marshmallow Squash Pie Pumpkin Pie with Caramelized Pie Butter Pie
But these don’t sound very good actually.
Strawberry Ham Pie Vegetable Pecan Pie Turd Apple Pie Fillings Pin Truffle Pie Fail Crunch Pie Crust Turf Crust Pot Beep Pies Crust Florid Pumpkin Pie Meat-de-Topping Parades Or Meat Pies Or Cake #1 Milk Harvest Apple Pie Ice Finger Sugar Pie Amazon Apple Pie Prize Wool Pie Snood Pie Turkey Cinnamon Almond-Pumpkin Pie With Fingermilk Pumpkin Pie With Cheddar Cookie Fish Strawberry Pie Butterscotch Bean Pie Impossible Maple Spinach Apple Pie Strawberry-Onions Marshmallow Cracker Pie Filling Caribou Meringue Pie
And I have no idea what these are:
Stramberiy Cheese Pie The pon Pie Dississippi Mish Boopie Crust Liger Strudel Free pie Sneak Pie Tear pie Basic France Pie Baked Trance pie Shepherd’s Finger Tart Buster’s Fib Lemon Pie Worf Butterscotch Pie Scent Whoopie Grand Prize Winning I*iple Cromberry Yas Law-Ox Strudel Surf Pie, Blue Ulter Pie - Pitzon’s Flangerson’s Blusty Tart Fresh Pour Pie Mur’s Tartless Tart
More of the neural network’s attempts to understand what humans like to eat:
Perhaps my favorite: Small Sandwiches
All my other neural network recipe experiments here.
Want more than that? I’ve got a bunch more recipes that I couldn’t fit in this post. Enter your email here and I’ll send you 38 more selected recipes.
Want to help with neural network experiments? For NaNoWriMo I’m crowdsourcing a dataset of novel first lines, after the neural network had trouble with a too-small dataset. Go to this form (no email necessary) and enter the first line of your novel, or your favorite novel, or of every novel on your bookshelf. You can enter as many as you like. At the end of the month, I’ll hopefully have enough sentences to give this another try.
Timelapse of Europa & Io orbiting Jupiter, shot from Cassini during its flyby of Jupiter
Today is Copernicus’s 540th birthday. You may remember Copernicus as the man who said “Hey, what if the Earth went around the sun?” To which the Catholic Church replied “Hey, what if we set you on fire?”
So it turns out you can train a neural network to generate paint colors if you give it a list of 7,700 Sherwin-Williams paint colors as input. How a neural network basically works is it looks at a set of data - in this case, a long list of Sherwin-Williams paint color names and RGB (red, green, blue) numbers that represent the color - and it tries to form its own rules about how to generate more data like it.
Last time I reported results that were, well… mixed. The neural network produced colors, all right, but it hadn’t gotten the hang of producing appealing names to go with them - instead producing names like Rose Hork, Stanky Bean, and Turdly. It also had trouble matching names to colors, and would often produce an “Ice Gray” that was a mustard yellow, for example, or a “Ferry Purple” that was decidedly brown.
These were not great names.
There are lots of things that affect how well the algorithm does, however.
One simple change turns out to be the “temperature” (think: creativity) variable, which adjusts whether the neural network always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list. I had the temperature originally set pretty high, but it turns out that when I turn it down ever so slightly, the algorithm does a lot better. Not only do the names better match the colors, but it begins to reproduce color gradients that must have been in the original dataset all along. Colors tend to be grouped together in these gradients, so it shifts gradually from greens to browns to blues to yellows, etc. and does eventually cover the rainbow, not just beige.
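Temperature sampling works by rescaling the network's raw scores for each possible next character before turning them into probabilities. A minimal sketch (the scores and characters here are made up for illustration):

```python
import numpy as np

# Raw scores ("logits") the network might assign to possible next
# characters -- say "e", "a", and "q". These numbers are invented.
logits = np.array([2.0, 1.0, 0.2])

def sample_probs(logits, temperature):
    # Dividing by the temperature before the softmax sharpens (T < 1)
    # or flattens (T > 1) the distribution over next characters.
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    return exp / exp.sum()

print(sample_probs(logits, temperature=1.0))   # fairly spread out
print(sample_probs(logits, temperature=0.5))   # the top choice dominates
```

At low temperature the network almost always takes its top pick, which makes the output safer but more repetitive; turning it up lets less-likely characters through.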
Apparently it was trying to give me better results, but I kept screwing it up.
Raw output from RGB neural net, now less-annoyed by my temperature setting
People also sent in suggestions on how to improve the algorithm. One of the most frequent suggestions was to try a different way of representing color - it turns out that RGB (with a single color represented by the amount of Red, Green, and Blue in it) isn’t very well matched to the way human eyes perceive color.
These are some results from a different color representation, known as HSV. In HSV representation, a single color is represented by three numbers like in RGB, but this time they stand for Hue, Saturation, and Value. You can think of the Hue number as representing the color, Saturation as representing how intense (vs gray) the color is, and Value as representing the brightness. Other than the way of representing the color, everything else about the dataset and the neural network is the same. (char-rnn, 512 neurons and 2 layers, dropout 0.8, 50 epochs)
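Converting between the two representations doesn't even require anything fancy - Python's standard library does it. The example color below is made up:

```python
import colorsys

# colorsys works on 0-1 floats, so divide 0-255 RGB values by 255 first.
r, g, b = 112 / 255, 66 / 255, 20 / 255   # an invented muddy brown

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)   # hue ~0.08 (orange-ish), high saturation, medium value

# And back again -- the round trip recovers the original RGB values.
print(colorsys.hsv_to_rgb(h, s, v))
```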
Raw output from HSV neural net:
And here are some results from a third color representation, known as LAB. In this color space, the first number stands for lightness, the second number stands for the amount of green vs red, and the third number stands for the amount of blue vs yellow.
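The standard library has no LAB converter, but the usual sRGB -> XYZ -> LAB pipeline is straightforward to sketch in plain Python (this assumes the common D65 white point; in practice a library like colormath would do this for you):

```python
def rgb_to_lab(r, g, b):
    # Undo sRGB gamma to get linear light.
    def linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linear(r), linear(g), linear(b)
    # Linear RGB -> XYZ (standard sRGB matrix), scaled by the D65 white point.
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.00000
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
    # Nonlinear compression, then the L*, a*, b* formulas.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    L = 116 * fy - 16        # lightness
    a = 500 * (fx - fy)      # green (-) vs red (+)
    b_ = 200 * (fy - fz)     # blue (-) vs yellow (+)
    return L, a, b_

print(rgb_to_lab(1.0, 1.0, 1.0))   # white: L near 100, a and b near 0
print(rgb_to_lab(0.0, 1.0, 0.0))   # pure green: strongly negative a
```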
Raw output from LAB neural net:
It turns out that the color representation doesn’t make a very big difference in how good the results are (at least as far as I can tell with my very simple experiment). Surprisingly, RGB seems best able to reproduce the gradients from the original dataset - maybe it’s more resistant to disruption when the temperature setting introduces randomness.
And the color names are pretty bad, no matter how the colors themselves are represented.
However, a blog reader compiled this dataset, which has paint colors from other companies such as Behr and Benjamin Moore, as well as a bunch of user-submitted colors from a big XKCD survey. He also changed all the names to lowercase, so the neural network wouldn’t have to learn two versions of each letter.
And the results were… surprisingly good. Pretty much every name was a plausible match to its color (even if it wasn’t a plausible color you’d find in the paint store). The answer seems to be, as it often is for neural networks: more data.
Raw output using The Big RGB Dataset:
I leave you with the Hall of Fame:
RGB:
HSV:
LAB:
Big RGB dataset:
I am not going to tag the name of the bird, because I’m pretty sure I would get tagged as NSFW if I did, but I assure you their beaks are getting longer and it’s probably because of the UK’s obsession with bird feeders.
Space may seem empty, but it’s actually a dynamic place, dominated by invisible forces, including those created by magnetic fields. Magnetospheres – the areas around planets and stars dominated by their magnetic fields – are found throughout our solar system. They deflect high-energy, charged particles called cosmic rays that are mostly spewed out by the sun, but can also come from interstellar space. Along with atmospheres, they help protect the planets’ surfaces from this harmful radiation.
It’s possible that Earth’s protective magnetosphere was essential for the development of conditions friendly to life, so finding magnetospheres around other planets is a big step toward determining if they could support life.
But not all magnetospheres are created equal – even in our own backyard, not all planets in our solar system have a magnetic field, and the ones we have observed are all surprisingly different.
Earth’s magnetosphere is created by the constantly moving molten metal inside Earth. This invisible “force field” around our planet has an ice cream cone-like shape, with a rounded front and a long, trailing tail that faces away from the sun. The magnetosphere is shaped that way because of the constant pressure from the solar wind and magnetic fields on the sun-facing side.
Earth’s magnetosphere deflects most charged particles away from our planet – but some do become trapped in the magnetic field and create auroras when they rain down into the atmosphere.
We have several missions that study Earth’s magnetosphere – including the Magnetospheric Multiscale mission, Van Allen Probes, and Time History of Events and Macroscale Interactions during Substorms (also known as THEMIS) – along with a host of other satellites that study other aspects of the sun-Earth connection.
Mercury, with a substantial iron-rich core, has a magnetic field that is only about 1% as strong as Earth’s. It is thought that the planet’s magnetosphere is stifled by the intense solar wind, limiting its strength, although even without this effect, it still would not be as strong as Earth’s. The MESSENGER satellite orbited Mercury from 2011 to 2015, helping us understand our tiny terrestrial neighbor.
After the sun, Jupiter has by far the biggest magnetosphere in our solar system – it stretches about 12 million miles from east to west, almost 15 times the width of the sun. (Earth’s, on the other hand, could easily fit inside the sun.) Jupiter does not have a molten metal core like Earth; instead, its magnetic field is created by a core of compressed liquid metallic hydrogen.
One of Jupiter’s moons, Io, has intense volcanic activity that spews particles into Jupiter’s magnetosphere. These particles create intense radiation belts and the large auroras around Jupiter’s poles.
Ganymede, Jupiter’s largest moon, also has its own magnetic field and magnetosphere – making it the only moon with one. Its weak field, nestled in Jupiter’s enormous shell, scarcely ruffles the planet’s magnetic field.
Our Juno mission orbits inside the Jovian magnetosphere, sending back observations so we can better understand this region. Previous observations have been received from Pioneers 10 and 11, Voyagers 1 and 2, Ulysses, Galileo and Cassini in their flybys and orbits around Jupiter.
Saturn’s moon Enceladus transforms the shape of its magnetosphere. Active geysers on the moon’s south pole eject oxygen and water molecules into the space around the planet. These particles, much like Io’s volcanic emissions at Jupiter, generate the auroras around the planet’s poles. Our Cassini mission studies Saturn’s magnetic field and auroras, as well as its moon Enceladus.
Uranus’ magnetosphere wasn’t discovered until 1986 when data from Voyager 2’s flyby revealed weak, variable radio emissions. Uranus’ magnetic field and rotation axis are out of alignment by 59 degrees, unlike Earth’s, whose magnetic field and rotation axis differ by only 11 degrees. On top of that, the magnetic field axis does not go through the center of the planet, so the strength of the magnetic field varies dramatically across the surface. This misalignment also means that Uranus’ magnetotail – the part of the magnetosphere that trails away from the sun – is twisted into a long corkscrew.
Neptune’s magnetosphere is also tilted from its rotation axis, but only by 47 degrees. Just like on Uranus, Neptune’s magnetic field strength varies across the planet. This also means that auroras can be seen away from the planet’s poles – not just at high latitudes, like on Earth, Jupiter and Saturn.
Neither Venus nor Mars has a global magnetic field, although the interaction of the solar wind with their atmospheres does produce what scientists call an “induced magnetosphere.” Around these planets, the atmosphere deflects the solar wind particles, causing the solar wind’s magnetic field to wrap around the planet in a shape similar to Earth’s magnetosphere.
Outside of our solar system, auroras, which indicate the presence of a magnetosphere, have been spotted on brown dwarfs – objects that are bigger than planets but smaller than stars.
There’s also evidence to suggest that some giant exoplanets have magnetospheres. As scientists now believe that Earth’s protective magnetosphere was essential for the development of conditions friendly to life, finding magnetospheres around exoplanets is a big step in finding habitable worlds.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
Bonus comic!
Yahoo! Einstein was right again! :D We now have our first detection of gravitational waves!
http://www.nytimes.com/2016/02/12/science/ligo-gravitational-waves-black-holes-einstein.html?_r=0
http://www.space.com/17661-theory-general-relativity.html