Also, depending on the application, if the output is more sensitive to immediate, closer neighbors than to inputs further away, a variant that looks only at a limited window of the past/future can be modeled.
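One way to sketch such a limited-context variant is to build, for each position, a feature made only of a few neighbors. This is a hypothetical illustration (the window sizes `past` and `future` and the helper `windowed_inputs` are my own names, not from any library), not a full model:

```python
import numpy as np

def windowed_inputs(xs, past=2, future=1):
    """Build, for each position t, a context of a few neighbors only.

    Instead of conditioning on the whole sequence, each position sees
    just xs[t-past : t+future+1]. Edge positions are zero-padded.
    """
    n = len(xs)
    padded = np.concatenate([np.zeros(past), xs, np.zeros(future)])
    # One row per position; each row holds past + self + future values.
    return np.stack([padded[t : t + past + future + 1] for t in range(n)])

ctx = windowed_inputs(np.arange(1.0, 6.0))  # toy sequence 1..5
print(ctx.shape)  # (5, 4): 5 positions, each seeing past=2 + self + future=1
```

A model fed these rows depends only on the local neighborhood of each input, which is exactly the restricted sensitivity described above.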
A recurrent neural network parses the inputs in a sequential fashion.
Image-classifying CNNs have become so successful because 2D convolutions are an effective form of parameter sharing: each convolutional filter essentially detects the presence or absence of a feature in an image, as a function not just of one pixel but also of its surrounding neighbor pixels.
In other words, the success of CNNs and RNNs can be attributed to the concept of “parameter sharing” which is fundamentally an effective way of leveraging the relationship between one input item and its surrounding neighbors in a more intrinsic fashion compared to a vanilla neural network.
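To make the parameter-sharing idea concrete, here is a minimal sketch of a 2D convolution in plain numpy. The same handful of kernel weights is reused at every spatial position, and each output value is a function of a pixel and its neighbors (the kernel values here are an arbitrary toy example):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared kernel over the image (stride 1, no padding).

    The same few kernel weights are reused at every position -- this
    reuse is the "parameter sharing" of a convolutional layer.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output depends on a pixel *and* its surrounding patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0]])   # a tiny horizontal-difference filter
feature_map = conv2d(image, edge_kernel)
print(feature_map.shape)  # (8, 7)
```

Contrast this with a fully connected layer, which would need a separate weight for every (input pixel, output unit) pair and would learn nothing about spatial neighborhoods.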
Sure can, but the ‘series’ part of the input means something.
A single input item from the series is related to others and likely has an influence on its neighbors.
While it’s good that the introduction of a hidden state enables us to effectively capture the relationship between the inputs, is there a way to make an RNN “deep” and gain the multi-level abstractions and representations we get through “depth” in a typical neural network? (1) Perhaps the most obvious approach of all is to stack hidden states, one on top of another, feeding the output of one to the next.
(2) We can also add additional nonlinear hidden layers between the input and the hidden state. (3) We can increase depth in the hidden-to-hidden transition. (4) We can increase depth in the hidden-to-output transition.
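Option (1), the stacked variant, can be sketched in a few lines of numpy. This is a toy illustration under my own naming (two layers, tanh nonlinearity, small random weights), not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8

# Weights for a two-layer ("stacked") RNN: the hidden state of layer 1
# serves as the input sequence for layer 2.
Wx1 = rng.standard_normal((n_hidden, n_in)) * 0.1
Wh1 = rng.standard_normal((n_hidden, n_hidden)) * 0.1
Wx2 = rng.standard_normal((n_hidden, n_hidden)) * 0.1
Wh2 = rng.standard_normal((n_hidden, n_hidden)) * 0.1

def step(x, h1, h2):
    """One timestep of the stacked RNN."""
    h1 = np.tanh(Wx1 @ x + Wh1 @ h1)   # layer 1: reads the raw input
    h2 = np.tanh(Wx2 @ h1 + Wh2 @ h2)  # layer 2: reads layer 1's state
    return h1, h2

h1 = h2 = np.zeros(n_hidden)
for x in rng.standard_normal((5, n_in)):  # a toy sequence of 5 timesteps
    h1, h2 = step(x, h1, h2)
print(h2.shape)  # (8,)
```

Options (2) through (4) correspond to replacing the single matrix multiply in the input-to-hidden, hidden-to-hidden, or hidden-to-output path with a small multi-layer network of its own.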
Otherwise it's just “many” inputs, not a “series” input (duh!). So we need something that captures this relationship across inputs meaningfully.
A recurrent neural network remembers the past, and its decisions are influenced by what it has learned from the past.
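This memory can be sketched as a single recurrence in numpy. The hidden state `h` is carried forward and combined with each new input, so earlier items influence later decisions (the weights here are random stand-ins, purely for illustration):

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, h0):
    """Process a sequence one item at a time.

    The hidden state h is the network's "memory": each new input is
    combined with everything seen so far, so earlier items influence
    later decisions.
    """
    h = h0
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)  # same shared weights at every step
    return h

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 5
Wx = rng.standard_normal((n_hidden, n_in)) * 0.1
Wh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
xs = rng.standard_normal((4, n_in))  # a toy sequence of 4 items

h_final = rnn_forward(xs, Wx, Wh, np.zeros(n_hidden))
# Feeding the same items in reverse generally gives a different state:
# unlike a plain feedforward net, the RNN is sensitive to order.
h_reversed = rnn_forward(xs[::-1], Wx, Wh, np.zeros(n_hidden))
print(np.allclose(h_final, h_reversed))
```

Note that the same `Wx` and `Wh` are applied at every timestep: this is the recurrent counterpart of the parameter sharing that convolutions provide across space.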