DY: Dynamics and Statistical Physics
DY 46: Poster
DY 46.11: Poster
Thursday, March 11, 2004, 16:00–18:00, Poster D
Magnification control of concave/convex learning in self-organizing neural maps and neural gas — •Jens Christian Claussen2 and Thomas Villmann1 — 1Cl. for Psychotherapy, University Leipzig — 2Theoretical Physics, University Kiel
The self-organizing map (SOM) by Kohonen and the neural gas (NG) by Martinetz are paradigms of winner-take-all neural feedforward computation and are widely applied to vector quantization tasks. A data representation maximizing mutual information corresponds to a magnification exponent of 1, which is reached by neither SOM nor (finite-dimensional) NG. Recently we presented magnification control for NG by an additive winner-relaxing term [1] (extending an approach for SOM [2,3,4]) as an alternative to localized learning [5].
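For orientation, the magnification relations referred to above can be stated compactly. The lines below restate well-known continuum-theory results (Ritter and Schulten for the one-dimensional SOM, Martinetz et al. for NG on d-dimensional data); they are background only, not the new derivation announced in this abstract:

    % stationary prototype density as a power of the data density
    \rho(w) \propto P(v)^{\alpha}
    % \alpha = 1 maximizes mutual information (equiprobable winners);
    % the classical rules fall short of it:
    \alpha_{\mathrm{SOM},\,1\mathrm{D}} = \frac{2}{3}, \qquad
    \alpha_{\mathrm{NG}} = \frac{d}{d+2} < 1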
In this work [6] an alternative learning rule, concave/convex learning [7], is investigated with respect to its magnification behaviour. We present an analytical derivation of the magnification exponent for the SOM (in 1D) and for the NG in arbitrary dimensions from a continuum theory. The results are validated by a numerical calculation of the entropy for a standardized test system. The observed dependence on the learning exponent agrees with the theoretical predictions.
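As a purely illustrative companion to the entropy check mentioned above, the following sketch (Python, not the authors' code) trains a one-dimensional neural gas with a concave/convex-style update in which the prototype shift is rescaled by a power xi of the local quantization error, in the spirit of Zheng and Greenleaf's scheme [7], and then measures the winner entropy, which is maximal exactly for equiprobable winners (alpha = 1). The test density, all parameter values, and the function names are assumptions made for this sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_ng(data, n_units=32, n_steps=50_000,
                 eps=0.05, lam0=8.0, lam1=0.5, xi=1.0):
        # Neural gas with a concave/convex-style update: xi = 1 recovers
        # the standard NG rule; xi < 1 (concave) / xi > 1 (convex) rescale
        # the shift by |v - w|^(xi - 1) -- an assumed illustrative form.
        w = rng.choice(data, size=n_units, replace=False).astype(float)
        for t in range(n_steps):
            v = data[rng.integers(len(data))]
            lam = lam0 * (lam1 / lam0) ** (t / n_steps)    # annealed range
            ranks = np.argsort(np.argsort(np.abs(w - v)))  # NG rank per unit
            h = np.exp(-ranks / lam)                       # rank neighborhood
            diff = v - w
            w += eps * h * np.sign(diff) * np.abs(diff) ** xi
        return w

    def winner_entropy(w, data):
        # Entropy of winner frequencies; equals log(n_units) iff winners
        # are equiprobable, i.e. iff the map maximizes mutual information.
        winners = np.argmin(np.abs(data[:, None] - w[None, :]), axis=1)
        p = np.bincount(winners, minlength=len(w)) / len(data)
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    # Assumed test density P(v) = 2v on (0, 1], sampled as sqrt(uniform).
    data = np.sqrt(rng.random(20_000))
    for xi in (0.5, 1.0, 2.0):
        w = train_ng(data, xi=xi)
        print(f"xi = {xi}: entropy = {winner_entropy(w, data):.3f}"
              f"  (max = {np.log(len(w)):.3f})")

In this toy setting xi = 1 gives plain NG, whose entropy stays below the maximum since alpha_NG < 1; scanning xi shows how the winner entropy, and hence the effective magnification, moves with the learning exponent, which is the kind of dependence the abstract reports.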
[1] J.C.C. and T.V., Proc. ESANN 2003; J.C.C. and T.V. (submitted).
[2] J.C.C., cond-mat/0208414.
[3] J.C.C., Complexity 8(4), 15 (2003).
[4] J.C.C., in Math. Mod. Comp. Biol. Med., ed. V. Capasso, p. 17, Bologna, 2003.
[5] M. Herrmann and T.V., Proc. ICANN 1997.
[6] T.V. and J.C.C., Proc. WSOM 2003; T.V. and J.C.C. (submitted).
[7] Y. Zheng and J. F. Greenleaf, IEEE Trans. Neural Netw. 7, 87 (1996).