Rapid convergence of nonlinear solutions brings truly "real-time" learning to complex real-world problems. Holographic neural assemblies exhibit a logarithmic rate of convergence during learning for both linear and nonlinear problem spaces. Nonlinear spaces require the use of combinatorics, often referred to in the QIP world as "entangled states". Conventional neural networks encounter severe limitations in recall error reduction for nonlinear problems, as indicated in the previous comparison.
The graph to the right illustrates convergence characteristics typical of an HNeT assembly trained on highly nonlinear data, where the number of patterns learned greatly exceeds the number of stimulus pattern elements. In under 100 epochs, recall accuracy reaches the resolution of the floating-point representation used in the computation. This convergence characteristic of HNeT (a logarithmic reduction in error, down to the computational resolution) is observed on both random test data and real-world application data.
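As a rough illustration only (not AND Corporation's proprietary HNeT implementation), the flavor of this iterative convergence can be sketched with a complex-valued phasor encoding and a correction loop over the recall residual. The pattern counts and the step size `lr` below are arbitrary choices for this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements = 64   # stimulus pattern elements ("cortical" memory cells)
n_patterns = 32   # associative patterns to learn (below saturation)

# Encode stimuli and target responses as unit phasors e^{i*theta}
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_patterns, n_elements)))
r = np.exp(1j * rng.uniform(0, 2 * np.pi, n_patterns))

X = np.zeros(n_elements, dtype=complex)  # holographic memory vector
lr = 0.5                                 # step size (toy choice)
errors = []
for epoch in range(100):
    recall = S @ X / n_elements          # decode each learned stimulus
    residual = r - recall                # per-pattern recall error
    X += lr * S.conj().T @ residual      # reinforce memory on the residual
    errors.append(np.abs(residual).mean())
```

Run on random data, `errors` falls by orders of magnitude within the 100 epochs, mimicking the steady error reduction the graph describes.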
An important feature of holographic neural processing is that the above logarithmic convergence for nonlinear problem sets degrades only at the saturation point, where the number of learned associative patterns exceeds the number of cortical memory elements. This limit holds irrespective of the dimensionality of the stimulus input pattern (the number of elements defining a stimulus pattern).
This property holds true whether the number of associative patterns learned is 1,000 or 1.0 x 10^9.
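The saturation limit can likewise be illustrated with the same toy phasor encoding (again a sketch, not the HNeT implementation). Here `np.linalg.lstsq` computes the best achievable memory vector: up to capacity, every pattern is recalled essentially exactly; past saturation, a residual recall error necessarily remains no matter how the memory is trained. The sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elements = 32  # capacity of the toy memory vector

def recall_error(n_patterns):
    # Random phasor stimuli and target responses
    S = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_patterns, n_elements)))
    r = np.exp(1j * rng.uniform(0, 2 * np.pi, n_patterns))
    # Best memory vector in the least-squares sense
    X, *_ = np.linalg.lstsq(S, r, rcond=None)
    return np.abs(S @ X - r).mean()

at_capacity = recall_error(n_elements)       # patterns == memory elements
past_saturation = recall_error(4 * n_elements)  # patterns >> memory elements
```

In this sketch `at_capacity` sits near machine precision while `past_saturation` remains large, which is the abrupt degradation at the saturation point described above.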
