Principal components can be extracted using single-layer feed-forward neural networks (Sanger, 1989; Rubner and Tavan, 1989; Oja, 1989; Diamantaras and Kung, 1996). These networks learn unsupervised using variants of the Hebbian rule. They provide an iterative solution to (2.5) and do not require computing the covariance matrix, which can be computationally expensive (it is O(d^2 n)). Networks extracting principal components also provide a biological basis for PCA. One example of such a PCA algorithm is Oja's rule.
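As a minimal sketch of such an iterative, covariance-free approach, the following NumPy code implements Oja's rule for a single neuron, estimating the first principal component directly from the data stream. The function name `oja_first_component` and the hyperparameters (learning rate `eta`, number of `epochs`) are illustrative choices, not from the original text.

```python
import numpy as np

def oja_first_component(X, eta=0.01, epochs=50, seed=0):
    """Estimate the first principal component of X (n samples x d features)
    with Oja's rule, without ever forming the d x d covariance matrix."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                    # neuron output y = w^T x
            w += eta * y * (x - y * w)   # Hebbian term minus forgetting term
    return w / np.linalg.norm(w)

# Demo on anisotropic 2-D data: the learned w should align (up to sign)
# with the leading eigenvector of the sample covariance matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) @ np.diag([3.0, 0.5])
X -= X.mean(axis=0)
w = oja_first_component(X)
v = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print(abs(w @ v))  # close to 1
```

Because the update touches only d-dimensional vectors per sample, each step costs O(d) rather than the O(d^2 n) needed to build the covariance matrix explicitly.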
Oja's algorithm (Oja, 1982) uses a single neuron with an input vector x, a weight vector w, and an output y. The output can be written as y = w^T x (this corresponds to (2.1)). According to Oja's rule, after a training pattern is presented, the weights change by a Hebbian term minus a forgetting function: