Long short-term memory (LSTM)[1] is a type of recurrent neural network (RNN) aimed at mitigating the vanishing gradient problem[2] commonly encountered by traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide a short-term memory for RNNs that can last thousands of time steps (thus "long short-term memory").[1] The name is made in analogy with long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century.
An LSTM unit is typically composed of a cell and three gates: an input gate, an output gate,[3] and a forget gate.[4] The cell remembers values over arbitrary time intervals, and the gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state, by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 signifies retention of the information, and a value of 0 represents discarding. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output, by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.
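To make the gating arithmetic concrete, the following minimal NumPy sketch shows how hand-picked (not learned) gate values between 0 and 1 scale the old cell state and the new candidate content element-wise; all numbers are purely illustrative.

```python
import numpy as np

# Illustrative gate values (sigmoid outputs between 0 and 1); chosen by hand, not learned.
forget_gate = np.array([0.95, 0.10])   # near 1: keep the old content; near 0: discard it
input_gate  = np.array([0.05, 0.90])   # how much of the new candidate content to write
candidate   = np.array([0.60, -0.40])  # proposed new content (a tanh output)
prev_cell   = np.array([1.20, 0.80])   # cell state carried over from the previous step

# Element-wise gating: old content is scaled by the forget gate,
# new content is scaled by the input gate, and the two are added.
cell = forget_gate * prev_cell + input_gate * candidate
print(cell)  # first component mostly retained, second mostly overwritten
```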
LSTM has wide applications in classification,[5][6] data processing, time series analysis tasks,[7] speech recognition,[8][9] machine translation,[10][11] speech activity detection,[12] robot control,[13][14] video games,[15][16] and healthcare.[17]
In theory, classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem.[18]
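A toy calculation illustrates the effect: when the gradient that reaches an earlier time step is scaled by a per-step factor below 1, the product over many steps decays exponentially. The factor 0.9 below is an arbitrary illustrative choice, not a property of any particular network.

```python
# Toy illustration of vanishing gradients in a plain RNN: the gradient reaching
# time step t - k is (roughly) scaled by a product of k per-step factors.
per_step_factor = 0.9
for k in (10, 50, 100):
    print(k, per_step_factor ** k)
# 10 -> ~0.35, 50 -> ~0.005, 100 -> ~0.00003: distant time steps barely influence learning.
```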
The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information.[4] In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies.[19] An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that it is no longer important after the verb is.
In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^{h}$ is not just one unit of one LSTM cell, but contains $h$ LSTM cell's units.
See [20] for an empirical study of 8 architectural variants of LSTM.
The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:[1][4]

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\odot$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.
Letting the superscripts $d$ and $h$ refer to the number of input features and number of hidden units, respectively:

$x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
$f_t \in (0,1)^{h}$: forget gate's activation vector
$i_t \in (0,1)^{h}$: input/update gate's activation vector
$o_t \in (0,1)^{h}$: output gate's activation vector
$h_t \in (-1,1)^{h}$: hidden state vector, also known as the output vector of the LSTM unit
$\tilde{c}_t \in (-1,1)^{h}$: cell input activation vector
$c_t \in \mathbb{R}^{h}$: cell state vector
$W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters learned during training

Here $\sigma_g$ denotes the sigmoid function, $\sigma_c$ the hyperbolic tangent, and $\sigma_h$ the hyperbolic tangent or, in some variants, the identity $\sigma_h(x) = x$.
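The following NumPy sketch implements the forward pass above for a single cell. The function name lstm_step and the random parameter values are illustrative only, not part of any standard API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell with a forget gate.

    W: input weights (each h x d), U: recurrent weights (each h x h), b: biases (each h,).
    The dictionary keys 'f', 'i', 'o', 'c' match the gate subscripts in the equations above.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell input activation
    c_t = f_t * c_prev + i_t * c_tilde                          # new cell state
    h_t = o_t * np.tanh(c_t)                                    # new hidden state
    return h_t, c_t

# Example usage with random parameters: d input features, h hidden units.
rng = np.random.default_rng(0)
d, h = 4, 3
W = {k: rng.standard_normal((h, d)) for k in 'fioc'}
U = {k: rng.standard_normal((h, h)) for k in 'fioc'}
b = {k: np.zeros(h) for k in 'fioc'}

h_t, c_t = np.zeros(h), np.zeros(h)                             # h_0 = 0, c_0 = 0
for x_t in rng.standard_normal((5, d)):                         # a sequence of 5 inputs
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
print(h_t)
```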
The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM).[21][22] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[21] $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places:

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$
Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates at time step $t$.
The 3 exit arrows from the memory cell $c$ to the 3 gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell at time step $t-1$, i.e. the contribution of $c_{t-1}$ (and not $c_t$, as the picture may suggest). In other words, the gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e., respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell at time step $t-1$, i.e. $c_{t-1}$.
The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_t$.
The little circles containing a $\times$ symbol represent an element-wise multiplication between their inputs. The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
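As with the standard variant, a short NumPy sketch of one peephole step may help; here the gates read the previous cell state $c_{t-1}$ instead of $h_{t-1}$. The function name and random parameters are illustrative, and $\sigma_h$ is taken as tanh although the identity is also common.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One forward step of a peephole LSTM: the gates see c_{t-1} instead of h_{t-1}."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])  # forget gate peeks at the cell
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])  # input gate peeks at the cell
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])  # output gate peeks at the cell
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + b['c'])
    h_t = o_t * np.tanh(c_t)                                # sigma_h may also be the identity
    return h_t, c_t

# Example usage with random parameters (d = 4 input features, h = 3 hidden units).
rng = np.random.default_rng(1)
d, h = 4, 3
W = {k: rng.standard_normal((h, d)) for k in 'fioc'}
U = {k: rng.standard_normal((h, h)) for k in 'fio'}         # only the gates have peepholes
b = {k: np.zeros(h) for k in 'fioc'}
h_t, c_t = peephole_lstm_step(rng.standard_normal(d), np.zeros(h), W, U, b)
print(h_t)
```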
In the peephole convolutional LSTM,[23] the matrix multiplications are replaced by convolutions, denoted by the $*$ operator.
An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.
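As a brief, non-authoritative sketch of such supervised training, the following example uses PyTorch's nn.LSTM on a hypothetical next-value prediction task; the task, model structure and hyperparameters are assumptions chosen only to show the gradient-descent / backpropagation-through-time loop.

```python
import torch
from torch import nn

# Hypothetical toy task: predict the next value of a 1-D sequence.
torch.manual_seed(0)
x = torch.randn(64, 20, 1)           # batch of 64 sequences, 20 time steps, 1 feature
y = x.roll(shifts=-1, dims=1)        # target: the same sequence shifted by one step

class SeqModel(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)        # out: (batch, time, hidden)
        return self.head(out)

model = SeqModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                  # backpropagation through time over the 20 steps
    opt.step()
```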
A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.[2][24]
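A small numerical illustration of this statement, using an arbitrary random matrix rescaled so that its spectral radius is 0.9:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # rescale so the spectral radius is 0.9

for n in (1, 10, 50, 100):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n)))
# The norm of W^n shrinks toward 0, so gradients propagated through many steps vanish.
```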
However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[5] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
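The sketch below illustrates, under assumed shapes and a hypothetical feature dimension, how a CTC objective can be attached to a bidirectional LSTM using PyTorch's nn.CTCLoss; it is not the specific setup of the cited work.

```python
import torch
from torch import nn

# Hypothetical shapes: T=50 time steps, N=4 sequences, C=28 output classes (class 0 = blank).
torch.manual_seed(0)
T, N, C, S = 50, 4, 28, 10
lstm = nn.LSTM(input_size=40, hidden_size=64, bidirectional=True)
head = nn.Linear(128, C)                         # 2 * 64 hidden units from the two directions

x = torch.randn(T, N, 40)                        # e.g. 40 acoustic features per frame
out, _ = lstm(x)
log_probs = head(out).log_softmax(dim=-1)        # (T, N, C), as required by CTCLoss

targets = torch.randint(1, C, (N, S))            # label sequences (class 0 is the blank)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                  # gradients flow back into the LSTM stack
```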
Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution[7] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).
Applications of LSTM include:
2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice.[50][51] According to the official blog post, the new model cut transcription errors by 49%.[52]
2016: Google started using an LSTM to suggest messages in the Allo conversation app.[53] In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.[10][54][55]
Apple announced in its Worldwide Developers Conference that it would start using the LSTM for quicktype[56][57][58] in the iPhone and for Siri.[59][60]
Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.[61]
2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.[11]
Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[62]
2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2,[15] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[14][63]
2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of StarCraft II.[16][63]
Aspects of LSTM were anticipated by "focused back-propagation" (Mozer, 1989),[64] cited by the LSTM paper.[1]
Sepp Hochreiter's 1991 German diploma thesis analyzed the vanishing gradient problem and developed principles of the method.[2] His supervisor, Jürgen Schmidhuber, considered the thesis highly significant.[65]
An early version of LSTM was published in 1995 in a technical report by Sepp Hochreiter and Jürgen Schmidhuber,[66] then published in the NIPS 1996 conference.[3]
The most commonly used reference point for LSTM was published in 1997 in the journal Neural Computation.[1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells, input gates and output gates.[20]
(Felix Gers, Jürgen Schmidhuber, and Fred Cummins, 1999)[67] introduced the forget gate (also called "keep gate") into the LSTM architecture, enabling the LSTM to reset its own state.[20] This is the most commonly used version of LSTM nowadays.
(Gers, Schmidhuber, and Cummins, 2000) added peephole connections.[21][22] Additionally, the output activation function was omitted.[20]
(Graves, Fernandez, Gomez, and Schmidhuber, 2006)[5] introduced a new error function for LSTM: Connectionist Temporal Classification (CTC), for simultaneous alignment and recognition of sequences.
(Graves, Schmidhuber, 2005)[26] published LSTM with full backpropagation through time and bidirectional LSTM.
(Kyunghyun Cho et al., 2014)[68] published a simplified variant of the forget gate LSTM[67] called the gated recurrent unit (GRU).
(Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber, 2015) used LSTM principles[67] to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[69][70][71] Concurrently, the ResNet architecture was developed. It is equivalent to an open-gated or gateless highway network.[72]
A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter (Beck et al., 2024).[73][74] One of the two block types of the architecture (mLSTM) is parallelizable like the Transformer architecture, while the other (sLSTM) allows state tracking.
2004: First successful application of LSTM to speech, by Alex Graves et al.[75][63]
2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models.[21][63]
Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).[76]
2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.[7]
Mayer et al. trained LSTM to control robots.[13]
2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.[77]
Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.[37]
2009: Justin Bayer et al. introduced neural architecture search for LSTM.[78][63]
2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves.[79] One was the most accurate model in the competition and another was the fastest.[80] This was the first time an RNN won international competitions.[63]
2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.[28]
Researchers from Michigan State University, IBM Research, and Cornell University published a study at the Knowledge Discovery and Data Mining (KDD) conference.[81][82][83] Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.