$h_t = y_t = \mathrm{sigmoid}(h_{t-1} * W + x_t * U)$
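As a concrete illustration, here is a minimal NumPy sketch of this forward step; the function names, argument order and shapes are assumptions made for the example, not taken from the original implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_forward_step(x_t, h_prev, W, U):
    """One forward step: h_t = y_t = sigmoid(h_{t-1} * W + x_t * U).

    x_t:    (batch, input_dim)   input at time t
    h_prev: (batch, hidden_dim)  hidden state h_{t-1}
    W:      (hidden_dim, hidden_dim)
    U:      (input_dim, hidden_dim)
    """
    h_t = sigmoid(h_prev @ W + x_t @ U)
    return h_t  # h_t is also the output y_t; store it for BPTT
```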
To perform BPTT with an RNN unit, we have the error coming from the layer above ($\delta_1$) and the error coming from the future hidden state ($\delta_2$). We have also stored, during the forward pass, the state at each time step. If the error from the future hidden state has not been computed yet, it is simply set to zero. By convention, $\cdot$ denotes point-wise multiplication, while $*$ denotes matrix multiplication.
The rules on how to back-propagate come from this post.
$\delta_3 = \delta_1 + \delta_2$
$\delta_4 = \delta_3 \cdot \mathrm{sigmoid}'(h_t)$
$\delta_5 = \delta_4 * W^T$
$\delta_6 = \delta_4 * U^T$
The errors $\delta_5$ and $\delta_6$ are passed on to the next steps of the backward pass: $\delta_5$ to the previous time step and $\delta_6$ to the layer below. Once all these errors are available, the weight updates can be computed.
$\delta_W = \delta_W + h_{t-1}^T * \delta_4$
$\delta_U = \delta_U + x_t^T * \delta_4$
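Put together, one backward step following these rules could look like the NumPy sketch below (continuing the forward sketch above; the names and shapes are again assumptions):

```python
def sigmoid_prime_from_output(h_t):
    # sigmoid'(z) written in terms of the stored activation h_t = sigmoid(z)
    return h_t * (1.0 - h_t)

def rnn_backward_step(delta_1, delta_2, x_t, h_prev, h_t, W, U, dW, dU):
    """One BPTT step following the rules above.

    delta_1: error from the layer above          (batch, hidden_dim)
    delta_2: error from the future hidden state  (batch, hidden_dim)
    dW, dU:  gradient accumulators, updated in place
    Returns delta_5 (error for h_{t-1}) and delta_6 (error for x_t).
    """
    delta_3 = delta_1 + delta_2                         # d3 = d1 + d2
    delta_4 = delta_3 * sigmoid_prime_from_output(h_t)  # d4 = d3 . sigmoid'(h_t)
    delta_5 = delta_4 @ W.T                             # sent to the previous time step
    delta_6 = delta_4 @ U.T                             # sent to the layer below
    dW += h_prev.T @ delta_4                            # dW += h_{t-1}^T * d4
    dU += x_t.T @ delta_4                               # dU += x_t^T * d4
    return delta_5, delta_6
```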
These are the rules according to the linked post, but in practice we implemented it as follows:
$\delta_5 = \delta_6 = ((\delta_2 * W^T) + (\delta_1 * U^T)) \cdot \mathrm{sigmoid}'(h_t)$
$\delta_U = \delta_U + x_t^T * \delta_1$
$\delta_W = \delta_W + h_{t-1}^T * \delta_2$
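For comparison, here is a sketch of this variant in the same NumPy style. Note that the sum $(\delta_2 * W^T) + (\delta_1 * U^T)$ only has consistent shapes when the input and hidden dimensions are equal, so this sketch assumes that:

```python
def rnn_backward_step_as_implemented(delta_1, delta_2, x_t, h_prev, h_t, W, U, dW, dU):
    """Backward step as described in the text (assumes input_dim == hidden_dim)."""
    # d5 = d6 = ((d2 * W^T) + (d1 * U^T)) . sigmoid'(h_t)
    delta_5 = (delta_2 @ W.T + delta_1 @ U.T) * (h_t * (1.0 - h_t))
    delta_6 = delta_5
    dU += x_t.T @ delta_1       # dU += x_t^T * d1
    dW += h_prev.T @ delta_2    # dW += h_{t-1}^T * d2
    return delta_5, delta_6
```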