Thesis:

Contribution of artificial metaplasticity to pattern recognition


  • Author: FOMBELLIDA VETAS, Juan

  • Title: Contribution of artificial metaplasticity to pattern recognition

  • Date: 2018

  • Subject: No subject defined

  • School: E.T.S. DE INGENIEROS DE TELECOMUNICACION

  • Departments: SEÑALES, SISTEMAS Y RADIOCOMUNICACIONES

  • Electronic access: http://oa.upm.es/51494/

  • Supervisor 1: ANDINA DE LA FUENTE, Diego
  • Supervisor 2: FERRÁNDEZ VICENTE, Jose Manuel

  • Abstract: Artificial Neural Network design and training algorithms are often based on the optimization of an objective error function that evaluates the performance of the network. The error value depends essentially on the weights of the connections between the neurons of the network, and the learning methods modify and update those weights following a strategy that tends to minimize the final error of the network. Neural network theory identifies these weights with the synaptic weights of biological neural networks, and their ability to change value can be interpreted as a kind of artificial plasticity, inspired by the demonstrated biological process.

    Biological metaplasticity is related to the processes of memory and learning as an inherent property of biological neural connections, and consists of the capacity to modify the learning mechanism using the information present in the network itself. Artificial MetaPlasticity (AMP) is accordingly interpreted as the ability to change the efficiency of artificial plasticity depending on certain elements used during training. A very efficient AMP model (in terms of learning time and performance) is the approach that connects metaplasticity with Shannon's information theory, which establishes that less frequent patterns carry more information than frequent ones. This model defines AMP as a learning procedure that produces larger modifications of the synaptic weights when less frequent patterns are presented to the network than when frequent patterns are used, as a way of extracting more information from the former than from the latter.

    In this doctoral thesis the AMP theory is implemented using different Artificial Neural Network (ANN) models and different learning paradigms. The networks are used as classifiers or predictors on synthetic and real data sets so that the results obtained can be compared with several state-of-the-art methods. The AMP theory is implemented over two general learning methods:

    • Supervised training: The BackPropagation Algorithm (BPA) is one of the best-known and most widely used algorithms for training neural networks. It compares the desired results with the actual results obtained at the network output and computes an error value, which is used to modify the weights so that the final trained network minimizes the differences between the desired and the actual results. The BPA has been successfully applied to pattern classification problems in areas such as medicine, bioinformatics, banking, climatological prediction, etc. However, the classic algorithm shows some limitations that prevent it from reaching an optimal efficiency level (convergence and speed problems, and classification accuracy). The Artificial Metaplasticity modification of the classic BPA is in this case implemented on a Multilayer Perceptron (MLP) network. The Artificial Metaplasticity on MultiLayer Perceptron (AMMLP) model is applied in the ANN training phase: the AMMLP algorithm updates the weights assigning larger modifications to the less frequent activations than to the more frequent ones, achieving a more efficient training and improving MLP performance (a first sketch of this weighting is given after this list).
    The suggested AMMLP algorithm was applied to different pattern classification and prediction problems in several areas, considering different methods for extracting the information from the data sets. Modeling this interpretation in the training phase confirmed the hypothesis of an improved training: the training becomes much more efficient while the ANN performance is maintained, and the algorithm achieved deeper learning on several multidisciplinary data sets without the need for a deep network.

    • Unsupervised training: The Koniocortex-Like Network (KLN) is a novel category of bio-inspired neural networks whose architecture and properties are inspired by the biological koniocortex, the first cortical layer that receives information from the thalamus. In the KLN, competition and pattern classification emerge naturally from the interplay of inhibitory inter-neurons, metaplasticity and intrinsic plasticity. This behavior resembles a Winner-Take-All (WTA) mode of operation, in which the most active neuron "wins", i.e. fires, while neighboring neurons remain silent. Although in many artificial neural network models the winning neuron is identified by explicit calculation, in biological neural networks it emerges from a natural dynamic process (a second sketch after this list illustrates such emergent competition). Recently proposed, the KLN has shown great potential for complex tasks with unsupervised learning, and its competitive results are demonstrated here for the first time on several relevant real applications. The simulations show that the unsupervised learning that emerges from the properties of individual neurons is comparable to, and even surpasses, the results obtained with several advanced state-of-the-art supervised and unsupervised learning algorithms.
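    The following minimal Python/NumPy sketch illustrates the Shannon-inspired AMP weighting described in the supervised-training item above: backpropagation updates are scaled by the inverse of an estimated input-pattern probability, so rarer patterns produce larger weight changes. The single-Gaussian density estimate, the network size and the learning parameters are illustrative assumptions made here, not the exact formulation used in the thesis.

    ```python
    import numpy as np

    # Minimal AMP-weighted backpropagation sketch for a one-hidden-layer MLP.
    # The Gaussian input-density estimate below is an illustrative assumption;
    # the thesis's exact metaplasticity weighting function may differ.
    # Inputs are assumed to be standardized.

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def amp_weight(x, mean, inv_cov, norm):
        """Inverse of an estimated input density: rare patterns -> large value."""
        diff = x - mean
        p = norm * np.exp(-0.5 * diff @ inv_cov @ diff)  # Gaussian density
        return min(1.0 / max(p, 1e-9), 100.0)            # cap to keep updates stable

    def train_ammlp(X, y, hidden=8, eta=0.01, epochs=200,
                    rng=np.random.default_rng(0)):
        n, d = X.shape
        W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.5, hidden);      b2 = 0.0
        # Fit a single Gaussian to the inputs to estimate pattern frequency.
        mean = X.mean(axis=0)
        cov = np.cov(X.T) + 1e-3 * np.eye(d)
        inv_cov = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        for _ in range(epochs):
            for i in rng.permutation(n):
                x, t = X[i], y[i]
                h = sigmoid(x @ W1 + b1)                 # forward pass
                o = sigmoid(h @ W2 + b2)
                m = amp_weight(x, mean, inv_cov, norm)   # metaplasticity factor
                # Backprop deltas, scaled by m: rare patterns update more.
                delta_o = (o - t) * o * (1 - o)
                delta_h = delta_o * W2 * h * (1 - h)
                W2 -= eta * m * delta_o * h; b2 -= eta * m * delta_o
                W1 -= eta * m * np.outer(x, delta_h); b1 -= eta * m * delta_h
        return W1, b1, W2, b2
    ```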
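    The second sketch illustrates, again under illustrative assumptions rather than the thesis's actual KLN architecture, how a winner can emerge from rate dynamics with a shared inhibitory inter-neuron and an adaptive (metaplastic) threshold instead of an explicit argmax over the whole layer.

    ```python
    import numpy as np

    # Minimal winner-take-all sketch: competition emerges from inhibitory
    # feedback and an adaptive (metaplastic) threshold. Hypothetical
    # parameters; not the thesis's KLN architecture.

    def wta_step(x, W, theta, inhibition=2.0, steps=50, dt=0.1):
        """Iterate rate dynamics until one neuron dominates its neighbors."""
        a = np.zeros(W.shape[0])                 # firing rates of N neurons
        for _ in range(steps):
            drive = W @ x - theta                # feedforward drive minus threshold
            total = a.sum()                      # shared inhibitory inter-neuron
            a += dt * (-a + np.maximum(drive - inhibition * (total - a), 0.0))
        return a

    def train_kln_like(X, n_neurons=4, lr=0.1, epochs=20,
                       rng=np.random.default_rng(1)):
        d = X.shape[1]
        W = rng.uniform(0.0, 1.0, (n_neurons, d))
        W /= W.sum(axis=1, keepdims=True)        # normalized receptive fields
        theta = np.zeros(n_neurons)              # intrinsic-plasticity thresholds
        for _ in range(epochs):
            for x in X[rng.permutation(len(X))]:
                a = wta_step(x, W, theta)
                w_idx = int(np.argmax(a))        # read out the emergent winner
                # Hebbian update: move the winner's weights toward the pattern.
                W[w_idx] += lr * (x - W[w_idx])
                # Metaplastic threshold: frequent winners become less excitable,
                # letting other neurons compete for other pattern clusters.
                theta *= 0.99                    # slow decay of all thresholds
                theta[w_idx] += 0.05
        return W, theta
    ```

    Note that the argmax here only reads out the neuron that the dynamics have already driven to dominance; the competition itself is produced by the inhibitory feedback and the adaptive thresholds, mirroring the emergent (rather than computed) winner described in the abstract.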