Intelligent Systems Laboratory
School of Electrical and Computer Engineering
National Technical University of Athens
Contact Person: Andreas Stafylopatis

Research activities of the Intelligent Systems Laboratory in the area of “Learning - Neural Networks” include the following topics: neural networks, ensemble methods and meta-learning, memory-based learning, clustering, self-organization, and feature selection.

Ensemble Methods

Ensemble Classification: In [Fro03] a multi-net classification method is proposed. The very good performance of the proposed system is mainly due to the combination of supervised and unsupervised learning methods, the ability of the sub-classifiers to solve difficult tasks, and the balance between sub-task simplification and decision-making efficiency. Moreover, inspired by a modular way of reasoning, a subsethood-product fuzzy neural classifier with a novel dynamic architecture, involving a main module and a number of submodules, has been developed [Per08]. The CART algorithm is employed as a fast preprocessing step for structure identification: it divides the input space into high-certainty and low-certainty regions, each representing a primary fuzzy rule. These primary fuzzy rules use a minimum set of attributes and are mapped onto the main neuro-fuzzy module. The patterns belonging to a low-certainty primary rule are further split into a subset of secondary rules that use an extended set of attributes. Each such rule subset is mapped onto an expert submodule, which is activated only when a pattern falls into the respective low-certainty region. This dynamic resource-allocating model is optimized through a supervised learning procedure.
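The structure-identification step can be illustrated with a minimal sketch, assuming a shallow CART tree whose leaves play the role of primary rules and an assumed purity threshold (the dataset, threshold, and scikit-learn usage are illustrative choices, not details of [Per08]):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=4, random_state=0)

# Shallow CART tree: each leaf corresponds to one primary fuzzy rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

leaf_ids = tree.apply(X)          # leaf index for every training pattern
purity_threshold = 0.9            # assumed certainty threshold

regions = {}
for leaf in np.unique(leaf_ids):
    labels = y[leaf_ids == leaf]
    purity = np.bincount(labels).max() / len(labels)
    regions[leaf] = "high" if purity >= purity_threshold else "low"

# Patterns falling into "low"-certainty leaves would be routed to expert
# submodules that use an extended attribute set.
print(regions)
```

In this sketch, only the low-certainty leaves would trigger the construction of secondary rules, which keeps the main module small while allocating extra resources where the decision is hard.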

Ensemble Clustering: Exploiting the ‘ensemble’ idea of constructing a complex model from simple ones, ensemble clustering algorithms have been developed. The key feature of the proposed multi-clustering method [Fro04, Fro05] is its ability to partition a set of data points into the optimal number of clusters, which are not constrained to be hyper-spherical.

Memory-Based Learning

A memory-based learning methodology for classification, relying on the main idea of the k-nearest neighbors algorithm, has been proposed in [Pat07]. In the proposed approach, given an unclassified pattern, a set of neighboring patterns is found, not necessarily using all input feature dimensions. In addition, a novel weighting scheme for the memory base is proposed: using the self-organizing map model, dynamic weights for the memory-base patterns are produced during the execution of the algorithm.
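A minimal sketch of the underlying idea, assuming fixed illustrative per-pattern weights (in [Pat07] these weights are produced dynamically by a self-organizing map, which is not reproduced here):

```python
import numpy as np

def weighted_knn_predict(X_mem, y_mem, w_mem, x, k=3):
    """Classify x by a weighted vote of its k nearest stored patterns."""
    dists = np.linalg.norm(X_mem - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        votes[y_mem[i]] = votes.get(y_mem[i], 0.0) + w_mem[i]
    return max(votes, key=votes.get)

# Tiny memory base: two classes, with assumed per-pattern weights.
X_mem = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_mem = np.array([0, 0, 1, 1])
w_mem = np.array([1.0, 0.8, 1.2, 1.0])

print(weighted_knn_predict(X_mem, y_mem, w_mem, np.array([0.05, 0.0])))  # class 0
```

The weight vector `w_mem` is the part that [Pat07] makes adaptive: a stored pattern with a larger weight counts for more in the vote, so the memory base itself can be tuned without changing the neighbor-search step.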

Weight Learning in Connectionist Fuzzy Logic Programs

Fuzzy logic programs are a useful framework for representing and reasoning with imperfect knowledge using the formalism of logic programming. Nevertheless, fuzzy logic programs need to be made adaptive so that machine learning techniques can be applied. Weighted fuzzy logic programs bring fuzzy logic programs and connectionist models closer together by associating a significance weight with each atom in the body of a fuzzy rule: by exploiting these weights, it is possible to construct a connectionist model that reflects the exact structure of a weighted fuzzy logic program [Cho08]. Based on this connectionist representation, the weight adaptation problem is first defined as the task of adapting the weights of the rules of a weighted fuzzy logic program so that they best fit a set of training data; a subgradient descent learning algorithm is then proposed that obtains an approximate solution to the weight adaptation problem.
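The subgradient-descent idea can be illustrated with a toy example: a single rule whose body combines atom truth values by a weighted minimum, a non-differentiable operator for which only a subgradient exists. The rule form, training pairs, and step size below are assumptions for illustration; the actual semantics of weighted fuzzy logic programs is defined in [Cho08].

```python
import numpy as np

def rule_output(w, a):
    """Truth value of the rule head: weighted minimum over the body atoms."""
    return np.min(w * a)

def subgradient(w, a, target):
    """Subgradient of 0.5*(out - target)**2 w.r.t. the atom weights."""
    out = rule_output(w, a)
    g = np.zeros_like(w)
    j = np.argmin(w * a)          # only the active (minimum) atom gets gradient
    g[j] = (out - target) * a[j]
    return g

# Training pairs: body atom truth values -> desired head truth value.
data = [(np.array([0.9, 0.6]), 0.5), (np.array([0.8, 0.7]), 0.6)]

w = np.array([1.0, 1.0])
for _ in range(200):
    for a, target in data:
        w -= 0.5 * subgradient(w, a, target)
        w = np.clip(w, 0.0, 1.0)  # keep the weights in [0, 1]

print(w, [round(rule_output(w, a), 2) for a, _ in data])
```

Because the minimum is non-smooth, the update only adjusts the weight of the atom that currently attains the minimum; this is exactly why a subgradient method, rather than plain gradient descent, is needed for this class of models.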