
Rapid convergence of non-linear solutions brings truly “real-time” learning to complex real-world problems.  Holographic neural assemblies exhibit a logarithmic convergence rate in learning for both linear and non-linear problem spaces.  Non-linear spaces require combinatorial (higher-order) terms, often referred to in the quantum information processing (QIP) world as "entangled states".  Conventional neural networks encounter severe limitations in recall-error reduction on non-linear problems, as the previous comparison indicates.
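The role of combinatorial terms can be sketched in a few lines. The example below is an illustration only, not AND Corporation's HNeT implementation: a hypothetical `expand` helper augments a stimulus with pairwise product terms, after which the classically non-linear XOR mapping becomes solvable by a single linear fit.

```python
import numpy as np
from itertools import combinations

def expand(x):
    # Illustrative combinatorial expansion: append pairwise
    # ("entangled") product terms to the raw stimulus elements.
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, pairs])

# XOR is not linearly separable in the raw 2-element stimulus space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# With the x1*x2 product term added, an exact linear solution exists
# (w = [1, 1, -2] satisfies all four associations).
Xe = np.array([expand(x) for x in X])
w, *_ = np.linalg.lstsq(Xe, y, rcond=None)
pred = Xe @ w
```

In the expanded space the four stimulus-response associations are mutually consistent, so the least-squares fit recovers them exactly rather than approximately.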

The graph to the right illustrates the convergence characteristics typical of an HNeT assembly trained on highly non-linear data, where the number of patterns learned greatly exceeds the number of stimulus pattern elements.  In under 100 epochs, recall accuracy reaches the resolution of the floating-point representation used in the computation.  This convergence characteristic of HNeT (a logarithmic reduction in error, down to the computational resolution) is observed on both random test data and real-world application data.
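The shape of such a convergence curve can be reproduced with a generic iterative scheme. The sketch below uses plain gradient correction as a stand-in for HNeT's learning rule (which the source does not specify); the epoch count and problem sizes are arbitrary. The point it demonstrates is the log-linear profile: the error shrinks by a roughly constant factor per epoch until it bottoms out near the floating-point resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

P, N = 8, 64                              # associations vs. memory elements (P < N)
X = rng.standard_normal((P, N))           # random stimulus patterns
r = rng.standard_normal(P)                # desired responses

w = np.zeros(N)
lr = 1.0 / np.linalg.norm(X, 2) ** 2      # step sized by the largest singular value
errors = []
for epoch in range(300):
    # One corrective sweep over all learned associations.
    w += lr * X.T @ (r - X @ w)
    errors.append(np.max(np.abs(X @ w - r)))
```

Plotting `np.log10(errors)` against the epoch index gives a near-straight descending line, the "logarithmic reduction in error" described above, though the per-epoch constant of this generic rule will differ from HNeT's.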

An important feature of holographic neural processing is that the logarithmic convergence described above for non-linear problem sets degrades only at the saturation point, where the number of learned associative patterns exceeds the number of cortical memory elements.  This limit applies irrespective of the dimensionality of the stimulus input pattern (the number of elements defining a stimulus pattern).
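This saturation behaviour can be seen in any linear associative store, sketched here with random data and a least-squares recall; the memory size `N` and pattern counts are illustrative, not HNeT parameters. While the number of stored associations stays at or below the number of memory elements, recall is exact to machine precision; past that point, a residual recall error necessarily appears.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                    # memory elements in the assembly
results = {}
for P in (8, 32, 48):                     # learned associations: below, at, above N
    X = rng.standard_normal((P, N))       # stimulus patterns
    r = rng.standard_normal(P)            # desired responses
    w, *_ = np.linalg.lstsq(X, r, rcond=None)
    results[P] = np.max(np.abs(X @ w - r))  # worst-case recall error
```

With `P <= N` the associations can be stored without interference (recall error at the numerical floor), while `P > N` leaves an irreducible residual: the saturation point tracks the memory-element count, not the raw stimulus dimensionality.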

This property holds true irrespective of whether the number of associative patterns learned is 1,000 or 1.0 × 10⁹.