Recurrent Neural Network Writes Music and Shakespeare Novels | Two Minute Papers #19


Artificial neural networks are very useful tools that can learn to recognize objects in images, or learn the style of Van Gogh and paint new pictures in his style. Today, we’re going to talk about recurrent neural networks. So, what does the recurrent part mean?

With an ordinary artificial neural network, we usually have a one-to-one relation between the input and the output. This means that one image comes in, and one classification result comes out, for example whether the image depicts a human face or a train. With recurrent neural networks, we can have a one-to-many relation between the input and the output. The input would still be an image, but the output would not be a single word; it would be a sequence of words, a sentence that describes what we see in the image.

For a many-to-one relation, a good example
is sentiment analysis. This means that a sequence of inputs, for instance a sentence, is classified as either negative or positive. This is very useful for processing movie reviews, where we’d like to know whether the reviewer liked or hated the movie without reading pages and pages of discussion.

And finally, recurrent neural networks can also deal with many-to-many relations, translating an input sequence into an output sequence. A good example is machine translation, which takes an input sentence and translates it into an output sentence in a different language. For another example of a many-to-many relation,
let’s see what the algorithm learned after reading Tolstoy’s novel War and Peace, by asking it to write new text in that style. It should be noted that the new text is generated letter by letter, so the algorithm is not allowed to simply memorize words.

Let’s take a look at the results at different stages of the training process. The initial results are, well, gibberish. But the algorithm seems to recognize immediately that words are basically a big bunch of letters separated by spaces. If we wait a bit more, we see that it starts
to get a very rudimentary understanding of structure: for instance, a quotation mark that has been opened must be closed, and a sentence ends with a period followed by an uppercase letter. Later, it starts to learn shorter, more common words, such as fall, that, the, for, and me. If we wait longer, we see that it gets a grasp of longer words, and smaller parts of sentences actually start to make sense.

Here is a piece of Shakespeare that was written by the algorithm after reading all of his works. You see names that make sense, and you really
have to check the text thoroughly to conclude that it’s indeed not the real deal. It can also try to write math papers. I had to look for quite a while before I realized that something was fishy here. It is not unreasonable to think that it could very easily deceive a non-expert reader. Can you believe this? This is insanity.

It is also capable of learning the source code of the Linux operating system and generating new code that also looks quite sensible. It can also try to continue the song “Let It Go” from the famous Disney movie Frozen, or write its own grooves after learning from other people’s work.

So, recurrent neural networks are really amazing tools that open up completely new horizons for solving problems where either the inputs or the outputs are not one thing, but a sequence of things. And now, signing off with a piece of recurrent neural network wisdom: “Well, your wit is in the care of side and that. Bear this in mind wherever you go.” Thanks for watching, and I’ll see you next time!



36 thoughts on “Recurrent Neural Network Writes Music and Shakespeare Novels | Two Minute Papers #19”

  1. These techniques can also "continue" famous paintings. Make sure to check it out! 🙂 – http://extrapolated-art.com/

  2. This is a great intro to recurrent neural networks: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
    Also that final sentence stood out to me as special as well.
    Great work, thanks for sharing!

  3. These are really amazing results, especially the painting. But I was disappointed with the Shakespeare text. It took me an afternoon to write 180 lines of Python that count conditional letter probabilities and generate results that are at least not far inferior. It also generates letter by letter.
    Here is an excerpt:

    Every were going of the
    have to look; and I bet they're right
    would be with this dark, and know passed into a character
    orge's everyone to then, was leaning but asked Mr. Turner's face the through a body than boast
    of replied, and like a

    And the code is on Pastebin. Sorry for the German comments.
    http://pastebin.com/vuPXKYGa
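For readers curious about the approach this comment describes, here is a minimal sketch of a conditional-letter-probability (character n-gram) generator. It is an illustration of the idea, not the commenter's Pastebin code; the order-3 context, toy corpus, and all names are arbitrary choices.

```python
import random
from collections import Counter, defaultdict

ORDER = 3  # how many preceding letters the next letter is conditioned on

def train(text):
    """Count how often each letter follows each ORDER-letter context."""
    counts = defaultdict(Counter)
    for i in range(len(text) - ORDER):
        context = text[i:i + ORDER]
        counts[context][text[i + ORDER]] += 1
    return counts

def generate(counts, seed, n_chars, rng_seed=0):
    """Extend the seed letter by letter, sampling from the counted probabilities."""
    rng = random.Random(rng_seed)
    out = seed
    for _ in range(n_chars):
        context = out[-ORDER:]
        options = counts.get(context)
        if not options:          # unseen context: nothing to sample from
            break
        letters, weights = zip(*options.items())
        out += rng.choices(letters, weights=weights)[0]
    return out

corpus = "the quick brown fox jumps over the lazy dog. " * 20
model = train(corpus)
print(generate(model, "the", 40))
```

This captures why the results look locally word-like: each letter is plausible given its immediate context, even though nothing enforces longer-range structure.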

  4. Great job! This is also a fairly fun thing done with RNNs: http://www.cs.toronto.edu/~graves/handwriting.html

  5. Has anyone trained a neural network with enough literature to eventually create its own fiction? It would be very interesting!

  6. I was really watching this video to help me learn something specific but didn't exactly find it. How does a neural network decide on its own if the music is good or not?

  7. I hope you have used the bases of this to make it come up with new ideas innovation advancement in technology…parameters where it add its own inputs to exponentially give out unique probable outputs. Eventually find discoveries, an extremely fast future.

  8. HOLY … THIS IS THE GREATEST TIME TO BE ALIVE!!! xD xD xD So much fun ^^ I'm looking very much forward to what I see as the final AI challenge: Humor! Maybe we'll have an almost god-like comedian in the future who creates instant material according to our experiences and taste! ^^

  9. “Well, my wit is in the care of side and that,” but that means to me “which side?” and also “should I identify and observe?” Is my own mind in the employ of such senses? For were it not so then I should be quite the dullard.
