Welcome to our new site!
Our site has recently been redesigned to be more attractive and easier to use. We intend the new site to act as the main point of contact between you and EETN, concentrating news, events and members' content. Signed-up users can participate in our vibrant community and enjoy a range of member benefits!
You are now browsing our site as a guest!
Of course, you are very welcome to visit our site as a guest, but if you are an EETN member you are missing the many features available only to signed-up EETN members, such as managing and reviewing your EETN membership, managing your personal information, accessing members-only content, participating in our discussion forums, submitting news and articles, and much more! So, if you are an EETN member, please consider signing up!
5 reasons for signing up!
Please note that signing up is limited to EETN members only!
Benefits of signed-up users!
Founded in 1988, the Hellenic Artificial Intelligence Society (EETN) is a non-profit scientific organization devoted to organizing and promoting AI research in Greece and abroad. Since its establishment, EETN has participated in the organization of various national and international events related to AI and its subfields. EETN is also interested in promoting AI in higher education and in the exploitation of AI research results by commercial organizations. The recent, rapid growth of the Internet and the World Wide Web has intensified the need for intelligent information systems and increased commercial interest in the area of AI.
Since 1996, EETN has been a member of the European Coordinating Committee for Artificial Intelligence (ECCAI), whose international prestige is similar to that of the Association for the Advancement of Artificial Intelligence (AAAI). EETN's participation in ECCAI as an equal society member, as well as its organization of the EETN conference series (SETN), workshops and summer schools, has significantly upgraded its role.
The highest authority in EETN is the General Assembly, which is normally convened once a year and exceptionally whenever needed. It consists of the regular members who have paid their subscription fees. It is considered to be in quorum when 50% + 1 of the regular members (those having fulfilled their financial obligations to the Society) are present.
Local groups of EETN can be established in any city in Greece or abroad by at least 10 regular members who reside in the corresponding city region. There can be only one local group per city. Every local group has a representative, who must also participate in the General Assembly. To date, the largest local groups are situated in Athens, Thessaloniki and Patras. There are also smaller groups in Ioannina, Kavala, Corfu, Samos, Chios, Syros, Orestiada, Chania and Irakleio.
The resources of EETN are:
- Annual subscriptions of members and registration fees of new members.
- Exceptional contributions of members.
- Income from the publication and distribution of magazines.
- Participation fees for Seminars, Conferences etc.
- Subsidies, sponsorships, donations and bequests.
- Financial aid or contributions of any kind.
EETN member Manolis Koubarakis has been appointed an ECCAI Fellow. Congratulations!
ECCAI is the European Coordinating Committee for Artificial Intelligence. The ECCAI Fellows Program was started in 1999 to recognize individuals who have made significant, sustained contributions to the field of artificial intelligence (AI) in Europe. The ECCAI Fellows Program honors only a very small percentage of the total membership of all ECCAI member societies (up to a maximum of 3%).
It is worth underscoring that this is the fourth time an EETN member has been named a Fellow, the previous ones being Constantine Spyropoulos, Grigorios Antoniou, and John Mylopoulos. See the complete ECCAI Fellows list here.
Dear EETN members,
On behalf of the EETN Board, I am particularly pleased to announce that, following a proposal by the Board, Prof. M. Koubarakis of the National and Kapodistrian University of Athens has been selected by the ECCAI Board as an ECCAI Fellow. As you know, this selection is particularly strict and distinguishes people who have made significant contributions to Artificial Intelligence in Europe and beyond.
Let me also mention that four EETN members have now received this distinction: C. Spyropoulos, G. Antoniou, J. Mylopoulos and now M. Koubarakis. Congratulations to Manolis; this recognition is well deserved by his work! I hope (with well-founded optimism) that more EETN members will be similarly distinguished in the coming years.
|University/Research Organization||Department/Institute||Lab/Group/Person Name||URL||Key Area 1||Key Area 1 Most Recent and Representative Publication||Key Area 2||Key Area 2 Most Recent and Representative Publication||Key Area 3||Key Area 3 Most Recent and Representative Publication||Number of Faculty Members||Number of Post Docs||Number of PostGraduates||Number of PreGraduates||Do you want this information to be published in the webpage of EETN?|
|Aristotle University of Thessaloniki||Informatics||Intelligent Systems||http://intelligence.csd.auth.gr/||Autonomous Agents and Multi-agent Systems||Citation: K. Kravari, N. Bassiliades, “DISARM: A Social Distributed Agent Reputation Model based on Defeasible Logic”, Journal of Systems and Software, Vol. 117, pp. 130–152, July 2016.||Planning and Scheduling||Citation: M. Symeonidis, D. Giouroukis, D. Vrakas, A planning problem validator based on reachability analysis, Proceedings of the 9th Hellenic Conference on Artificial Intelligence, Thessaloniki Greece, 2016, doi>10.1145/2903220.2903237||Web and Knowledge-based Information Systems||Citation: I. Viktoratos, A. Tsadiras, N. Bassiliades, "Modeling Human Daily Preferences through a Context-Aware Web-Mapping System Using Semantic Technologies", Pervasive and Mobile Computing, accepted for publication, 2016.||4||3||17||3||Yes|
|Agents act in open and thus risky environments with limited or no human intervention. Making the appropriate decision about who to trust in order to interact with is not only necessary but it is also a challenging process. To this end, trust and reputation models, based on interaction trust or witness reputation, have been proposed. Yet, they are often faced with skepticism since they usually presuppose the use of a centralized authority, the trustworthiness and robustness of which may be questioned. Distributed models, on the other hand, are more complex but they are more suitable for personalized estimations based on each agent's interests and preferences. Furthermore, distributed approaches allow the study of a really challenging aspect of multi-agent systems, that of social relations among agents. To this end, this article proposes DISARM, a novel distributed reputation model. DISARM treats Multi-agent Systems as social networks, enabling agents to establish and maintain relationships, limiting the disadvantages of the common distributed approaches. Additionally, it is based on defeasible logic, modeling the way intelligent agents, like humans, draw reasonable conclusions from incomplete and possibly conflicting (thus inconclusive) information. Finally, we provide an evaluation that illustrates the usability of the proposed model.||AI planning is the field that focuses on creating abstract representations of various processes. As such, it is obvious that it is a broad field and demands specialized tools to be used in order to ease the process of designing domains and problems. The literature shows that most of these tools do not offer an option to re-examine and validate the result of the aforementioned designing process.
This work presents a novel approach to validating a problem description, as well as presenting information related to this validation process in an efficient way. It introduces a proxy web service that implements the process of finding invalid descriptions, as well as an extension of the relevant tool, VLEPpO, suited to the needs of the visualization process.
|In this paper, a novel geosocial networking service called “G-SPLIS” (Geosocial Semantic Personalized Location Information System) is presented. The paper provides a methodology to design, implement and share, in a formal way, human daily preferences regarding points of interest (POIs) and POI owners’ group-targeted offering policies, via user-defined preference and policy rules. By adding rules at run time, users have more flexibility and do not rely on the application’s pre-determined methods to get personalized information. Furthermore, G-SPLIS provides a large knowledge base for other systems on the web, because rules are easily sharable. To achieve the above, the presented system is compatible with Semantic Web standards such as the schema.org ontology and uses RuleML for rules that define regular users’ preferences and POI owners’ group-targeted offers. Finally, it combines the above at run time to match user context with related information and visualizes personalized information.|
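The reachability analysis behind the planning-problem validator cited above can be illustrated minimally: starting from the initial state, repeatedly apply every action whose preconditions are already reachable, and check whether the goals end up in the reachable set. The toy logistics domain and the relaxed (delete-free) treatment below are illustrative assumptions, not the cited validator or VLEPpO itself.

```python
# Minimal propositional reachability check for a planning problem.
# An action is a (preconditions, add_effects) pair of proposition sets.
# This generic sketch ignores delete effects (relaxed reachability) and
# is an illustration, not the cited validator.

def reachable_propositions(initial, actions):
    """Fixpoint of propositions reachable from the initial state."""
    reached = set(initial)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            # An action fires once all its preconditions are reachable.
            if pre <= reached and not add <= reached:
                reached |= add
                changed = True
    return reached

def goals_reachable(initial, actions, goals):
    """A problem description is plausible only if its goals are reachable."""
    return set(goals) <= reachable_propositions(initial, actions)

# Toy domain: move a package from A to B by loading, driving, unloading.
actions = [
    ({"pkg_at_A", "truck_at_A"}, {"pkg_in_truck"}),
    ({"truck_at_A"}, {"truck_at_B"}),
    ({"pkg_in_truck", "truck_at_B"}, {"pkg_at_B"}),
]
ok = goals_reachable({"pkg_at_A", "truck_at_A"}, actions, {"pkg_at_B"})
bad = goals_reachable({"pkg_at_A"}, actions, {"pkg_at_B"})   # no truck
```

A validator built on this idea flags a description as invalid when, as in `bad`, no sequence of actions can ever make the goals true.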
|National and Kapodistrian University of Athens||Dept. of Informatics and Telecommunications||Manolis Koubarakis||http://cgi.di.uoa.gr/~koubarak/||Web and Knowledge-based Information Systems||G. Garbis, K. Kyzirakos and M. Koubarakis. Geographica: A Benchmark for Geospatial RDF Stores. In the 12th International Semantic Web Conference (ISWC 2013). Sydney, Australia. October 21-25, 2013.||Knowledge Representation, Reasoning, and Logic||Charalampos Nikolaou and Manolis Koubarakis. Querying incomplete information in RDF with SPARQL. Artificial Intelligence, 2016||Constraints, Satisfiability, and Search||Stella Giannakopoulou, Charalampos Nikolaou, Manolis Koubarakis. A Reasoner for the RCC-5 and RCC-8 Calculi Extended with Constants. In the Proceedings of the 28th AAAI Conference on Artificial Intelligence, pp. 2659-2665. Quebec City, Quebec, Canada, July 27-31, 2014||1||1||5||0||Yes|
|Geospatial extensions of SPARQL like GeoSPARQL and stSPARQL have recently been defined and corresponding geospatial RDF stores have been implemented. However, there is no widely used benchmark for evaluating geospatial RDF stores which takes into account recent advances to the state of the art in this area. In this paper, we develop a benchmark, called Geographica, which uses both real-world and synthetic data to test the offered functionality and the performance of some prominent geospatial RDF stores.||Incomplete information has been studied in-depth in relational databases and knowledge representation. In the context of the Web, incomplete information issues have been studied in detail for XML, but very few papers exist that do the same for RDF. In this paper we make the first general proposal for extending RDF with the ability to represent property values that exist but are unknown or partially known using constraints. Following ideas from incomplete information literature, we develop a semantics for this extension of RDF, called RDFi, and study query evaluation for SPARQL. We transfer the concept of representation systems from incomplete information in relational databases to the case of RDFi and identify two very important fragments of SPARQL that can be used to define a representation system for RDFi. The first corresponds to the monotone fragment of graph patterns that uses only the operators AND, UNION, and FILTER. The second corresponds to the well-designed graph patterns, that is, a fragment that uses only operators AND, FILTER, and OPT, and enjoys interesting properties that make query evaluation efficient. We prove that each of the two fragments can be used to define a representation system for CONSTRUCT queries without blank nodes in their templates. We also define the fundamental concept of certain answers to SPARQL queries over RDFi databases and present an algorithm for its computation.
Then, we present complexity results for computing certain answers by considering equality, temporal, and spatial constraint languages and the class of CONSTRUCT queries of our representation systems. Finally, we demonstrate the usefulness of RDFi in geospatial Semantic Web applications by giving a number of examples and comparing the modeling capabilities of RDFi with related formalisms found in the literature.||The problem of checking the consistency of spatial calculi that contain both unknown and known entities (constants, i.e., real geometries) has recently been studied. Until now, all the approaches are theoretical and no implementation has been proposed. In this paper we present the first reasoner that takes as input RCC-5 or RCC-8 networks with variables and constants and decides their consistency. We investigate the performance of the reasoner experimentally using real-world networks and show that we can achieve significantly better times by geometry simplification and parallelization.|
|Patras||Mathematics||ML LAB||http://ml.math.upatras.gr||Machine Learning and Data Mining||Citation: Nikos Fazakis, Stamatis Karlos, Sotiris Kotsiantis, and Kyriakos Sgarbas, “Self-Trained LMT for Semisupervised Learning”, Computational Intelligence and Neuroscience, vol. 2016, Article ID 3057481, 13 pages, 2016. doi:10.1155/2016/3057481.||Knowledge Representation, Reasoning, and Logic||Citation: Stamatis Karlos, Nikos Fazakis, Angeliki-Panagiota Panagopoulou, Sotiris B. Kotsiantis, Kyriakos Sgarbas: Locally application of naive Bayes for self-training, Evolving Systems, 2016, pp. 1-16, doi:10.1007/s12530-016-9159-3.||Multidisciplinary Topics||Citation: Stamatis Karlos, Nikos Fazakis, Katerina Karanikola, Sotiris Kotsiantis, and Kyriakos Sgarbas: Speech Recognition Combining MFCCs and Image Features, SPECOM 2016: Chapter Speech and Computer, Volume 9811 of the series Lecture Notes in Computer Science pp. 651-658.||1||0||4||0||Yes|
|Abstract: The most important asset of semisupervised classification methods is the use of available unlabeled data combined with a clearly smaller set of labeled examples, so as to increase the classification accuracy compared with the default procedure of supervised methods, which on the other hand use only the labeled data during the training phase. Both the absence of automated mechanisms that produce labeled data and the high cost of the human effort needed to complete the labeling procedure in several scientific domains raise the need for semisupervised methods, which counterbalance this phenomenon. In this work, a self-trained Logistic Model Trees (LMT) algorithm is presented, which combines the characteristics of Logistic Trees under the scenario of poor available labeled data. We performed an in-depth comparison with other well-known semisupervised classification methods on standard benchmark datasets and concluded that the presented technique had better accuracy in most cases.||Abstract: Semi-supervised algorithms are well-known for their ability to combine both supervised and unsupervised strategies for optimizing their learning ability, under the assumption that only a few examples together with their full feature set are given. In such cases, the use of weak learners as base classifiers is usually preferred, since the iterative behavior of semi-supervised schemes requires the building of new temporal models during each iteration. The locally weighted naïve Bayes classifier is such a classifier, encompassing the power of the NB and k-NN algorithms. In this work, we have implemented a self-labeled weighted variant of the local learner which uses NB as the base classifier of the self-training scheme.
We performed an in-depth comparison with other well-known semi-supervised classification methods on standard benchmark datasets and reached the conclusion that the presented technique had better accuracy in most cases.||Abstract: The automatic speech recognition (ASR) task constitutes a well-known issue among fields like Natural Language Processing (NLP), Digital Signal Processing (DSP) and Machine Learning (ML). In this work, a robust supervised classification model is presented (MFCCs + autocor + SVM) for feature extraction of solo speech signals. Mel Frequency Cepstral Coefficients (MFCCs) are exploited, combined with Content Based Image Retrieval (CBIR) features extracted from the spectrogram produced by each frame of the speech signal. The improvement of classification accuracy using such extended feature vectors is examined against using only MFCCs, with several classifiers, for three scenarios with different numbers of speakers.|
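Self-training, the scheme shared by the first two citations above, can be sketched generically: train a base classifier on the labeled data, label the unlabeled examples it is most confident about, fold them into the training set, and repeat. The nearest-centroid base learner and the confidence margin below are illustrative stand-ins; the papers use LMT and a locally weighted naïve Bayes classifier.

```python
import math

# Generic self-training loop. The base "classifier" here is nearest
# centroid, and confidence is the margin between the two closest class
# centroids -- both simplifying assumptions for illustration only.

def centroid(points):
    dim = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dim)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def self_train(labeled, unlabeled, threshold=0.25, max_iters=10):
    labeled = list(labeled)      # pairs (point, class_label)
    unlabeled = list(unlabeled)  # bare points
    for _ in range(max_iters):
        # "Train": one centroid per class from the current labeled set.
        classes = sorted({y for _, y in labeled})
        cents = {c: centroid([x for x, y in labeled if y == c]) for c in classes}
        newly = []
        for x in unlabeled:
            ds = sorted((dist(x, cents[c]), c) for c in classes)
            # Confident if the best class is clearly closer than the runner-up.
            if len(ds) > 1 and ds[1][0] - ds[0][0] > threshold:
                newly.append((x, ds[0][1]))
        if not newly:
            break
        for x, y in newly:
            labeled.append((x, y))
            unlabeled.remove(x)
    return labeled

labeled = [((0.0, 0.0), "a"), ((1.0, 1.0), "b")]
unlabeled = [(0.1, 0.1), (0.9, 0.8), (0.2, 0.0)]
result = self_train(labeled, unlabeled)
```

The confidence threshold is the key design knob: too low and the loop propagates its own mistakes, too high and no unlabeled data is ever used.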
|University of Patras||Business Administration||Pavlos Peppas||http://www.bma.upatras.gr/staff/pavlos/||Knowledge Representation, Reasoning, and Logic||P. Peppas, M.-A. Williams, S. Chopra, and N. Foo, "Relevance in Belief Revision", Artificial Intelligence vol 229, pp 126-138, 2015.||1||0||0||0||Yes|
|Abstract: Possible-world semantics are provided for Parikh's relevance-sensitive axiom for belief revision, known as axiom (P). Loosely speaking, axiom (P) states that if a belief set K can be divided into two disjoint compartments, and the new information A relates only to the first compartment, then the second compartment should not be affected by the revision of K by A. Using the well-known connection between AGM revision functions and preorders on possible worlds as our starting point, we formulate additional constraints on such preorders that characterise precisely Parikh's axiom (P). Interestingly, the additional constraints essentially generalise a criterion of plausibility between possible worlds that predates axiom (P). A by-product of our study is the identification of two possible readings of Parikh's axiom (P), which we call the strong and the weak versions of the axiom.|
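Axiom (P), described informally in the abstract above, is usually written along the following lines (the notation here is a loose paraphrase assumed for illustration; the cited paper gives the precise formulation):

```latex
% Parikh's relevance-sensitive axiom (P), paraphrased: if K splits into
% two compartments with disjoint languages and the new information A
% concerns only the first, revision by A leaves the second untouched.
(P):\quad \text{if } K = Cn(\{x, y\}),\ \ L(x) \cap L(y) = \emptyset,
\ \ L(A) \subseteq L(x),
\quad \text{then } K \ast A = Cn\big((Cn(x) \ast' A) \cup \{y\}\big)
```

where $\ast'$ denotes a revision function over the sublanguage $L(x)$; the strong and weak readings mentioned in the abstract differ in how this local revision is constrained.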
|University of Macedonia||Department of Applied Informatics||Computational Methods and Operations Research Lab, Artificial Intelligence & Intelligent Systems Group||http://ai.uom.gr||Planning and Scheduling||Citation: I. Refanidis and A. Alexiadis. Deployment and Evaluation of SELFPLANNER, an Automated Individual Task Management System. Computational Intelligence, 27(1), 2011, pp. 41-59.||Web and Knowledge-based Information Systems||Citation: Ioannis Refanidis, Christos Emmanouilidis, Ilias Sakellariou, Anastasios Alexiadis, Remous-Aris Koutsiamanis, Konstantinos Agnantis, Aimilia Tasidou, Fotios Kokkoras, and Pavlos S. Efraimidis. myVisitPlannerGR: Personalized Itinerary Planning System for Tourism. In Proceedings of the Hellenic AI Society Conference, Ioannina, Greece (2014), LNCS (Springer) 8445, pp. 615–629.||Constraints, Satisfiability, and Search||Citation: I. Refanidis and I. Sakellariou. Computing higher order exclusion relations in propositional planning. Journal of Experimental and Theoretical Artificial Intelligence (JETAI), vol. 25, no.1 (2012), 23-51.||2||0||Yes|
|Abstract: This paper presents SELFPLANNER, a deployed web-based intelligent calendar application that helps a user schedule her individual tasks in time and space. Contrary to other intelligent calendar assistants that concentrate on automating meeting scheduling, SELFPLANNER emphasizes scheduling individual tasks and events, leaving meeting arrangement for external handling. The two key features of SELFPLANNER, also critical factors for its potential broader adoption, are problem modeling and the user interface. SELFPLANNER supports simple, interruptible and flexible periodic tasks, arbitrary temporal domains, constraints over the parts of an interruptible task, binary constraints between tasks and preferences over their temporal domains, location references, classes of locations, time zones, etc. As for the user interface, SELFPLANNER integrates with Google Calendar and a Google Maps based application, whereas it introduces an innovative way to define temporal domains, based on a combination of user-defined reusable templates and manual editing. The core of the system is based on the Squeaky Wheel Optimization algorithm, with efficient domain-dependent heuristics. The paper is accompanied by an extensive evaluation of the system, comprising an analytic and an exploratory part. Evaluation results are very promising and suggest that SELFPLANNER constitutes a step towards the next generation of intelligent calendar applications.||Abstract: This application paper presents MYVISITPLANNERGR, an intelligent web-based system aiming at making recommendations that help visitors and residents of the region of Northern Greece plan their leisure, cultural and other activities during their stay in this area. The system encompasses a rich ontology of activities, categorized across dimensions such as activity type, historical era, user profile and age group. Each activity is characterized by attributes describing its location, cost, availability and duration range.
The system makes activity recommendations based on user-selected criteria, such as visit duration and timing, geographical areas of interest and visit profiling. The user edits the proposed list and the system creates a plan, taking into account temporal and geographical constraints imposed by the selected activities, as well as by other events in the user’s calendar. The user may edit the proposed plan or request alternative plans. A recommendation engine employs non-intrusive machine learning techniques to dynamically infer and update the user’s profile, concerning his preferences for both activities and resulting plans, while taking privacy concerns into account. The system is coupled with a module to semi-automatically feed its database with new activities in the area.||Abstract: This article presents a systematic and complete algorithm to compute all higher-order exclusion relations for a propositional planning problem, that is, sets of propositions that cannot hold simultaneously at specific time points, without any bound on the order of the exclusion relations. This algorithm is proved to allow for backtrack-free plan extraction, provided that all goals have to be achieved simultaneously. In particular, leveled global consistency is achieved, i.e., all exclusion relations between propositions within each time step are computed. However, achieving leveled global consistency is impractical for most non-trivial planning problems. Indeed, as our empirical evaluation over a variety of planning problems suggests, best performance is achieved when setting a bound on the order of the computed exclusion relations and using search to extract a plan. Additional statistics extracted from our experiments shed light on the internal dynamics of Graphplan-style planners.|
|Technical University of Crete||School of Production Engineering and Management||Applied Mathematics and Computers Laboratory||http://www.amcl.tuc.gr/||Autonomous Agents and Multi-agent Systems||Citation: N. Mitakidis, P. Delias, N. Spanoudakis. Validating Requirements Using Gaia Roles Models. In Matteo Baldoni, Luciano Baresi and Mehdi Dastani (Eds.): Engineering Multi-Agent Systems, Third International Workshop, EMAS 2015, Revised Selected Papers, Lecture Notes in Artificial Intelligence (LNAI), vol. 9318, Springer-Verlag, pp. 171-190, doi: 10.1007/978-3-319-26184-3_10||Agent-based and integrated systems||Citation: N. Spanoudakis, P. Moraitis. Engineering Ambient Intelligence Systems using Agent Technology. IEEE Intelligent Systems, vol. 30, issue 3, May-June 2015, pp. 60-67, doi: 10.1109/MIS.2015.3||Knowledge Representation, Reasoning, and Logic||Citation: N. Spanoudakis, A. C. Kakas, P. Moraitis. Applications of Argumentation: The SoDA Methodology. In 22nd European Conference on Artificial Intelligence (ECAI 2016), The Hague, Holland, 29 Aug-2 Sep, 2016, pp. 1722-1723, doi: 10.3233/978-1-61499-672-9-1722||4||3||11||0||Yes|
|Abstract: This paper presents a method that aims at assisting an engineer in transforming agent roles models to a process model. Thus, the software engineer can employ available tools to validate specific properties of the modeled system before its final implementation. The method includes a tool for aiding the engineer in the transformation process. This tool uses a recursive algorithm for automating the transformation process and guides the user to dynamically integrate two or more agent roles in a process model with multiple pools. The tool usage is demonstrated through a running example, based on a real world project. Simulations of the defined agent roles can be used to (a) validate the system requirements and (b) determine how it could scale. This way, engineers, analysts and managers can configure the processes’ parameters and identify and resolve risks early in their project.||Abstract: This article shows how to model and implement an ambient intelligence (AmI) system using agent technology. The HERA project, undertaken by a consortium with members from academia as well as industry, applied the agent systems engineering methodology (ASEME), an agent-oriented software engineering approach, to develop a real-world system for the ambient assisted living application domain. This article focuses on the software architecture, along with the development method and validation results. The obtained results demonstrate the added value of agent technology use, along with how ASEME can be applied for modeling a real-world ambient intelligence system.||Abstract: The area of argumentation is now at a stage that it can benefit from the study of systematic methodologies for building applications of argumentation. Herein, we present such a systematic argumentation software methodology, called SoDA (Software Development for Argumentation), that facilitates the principled modeling of real life problems, and the Gorgias-B tool that supports it. 
Gorgias-B builds on the system Gorgias that implements the theoretical framework proposed in . SoDA and Gorgias-B build on the successful use of Gorgias over the last ten years by different users for developing real-life applications (see http://gorgiasb.tuc.gr/Apps.html).|
|Democritus University of Thrace||Electrical and Computer Engineering||Visual Computing Group (VCG)||http://vc.ee.duth.gr/||Robotics, Perception and Vision||Citation: P. Theologou, I. Pratikakis, and T. Theoharis (2016), Unsupervised spectral mesh segmentation driven by heterogeneous graphs, IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2016.2544311.||Machine Learning and Data Mining||Citation: M. Savelonas, I. Pratikakis, and K. Sfikas, Fisher encoding of differential fast point feature histograms for partial 3D object retrieval, Pattern Recognition, Elsevier, vol. 55, July 2016, pp. 114-124.||1||2||4||5||Yes|
|Abstract: A fully automatic mesh segmentation scheme using heterogeneous graphs is presented. We introduce a spectral framework where local geometry affinities are coupled with surface patch affinities. A heterogeneous graph is constructed combining two distinct graphs: a weighted graph based on adjacency of patches of an initial over-segmentation, and the weighted dual mesh graph. The partitioning relies on processing each eigenvector of the heterogeneous graph Laplacian individually, taking into account the nodal set and nodal domain theory. Experiments on standard datasets show that the proposed unsupervised approach outperforms the state-of-the-art unsupervised methodologies and is comparable to the best supervised approaches.||Abstract: Partial 3D object retrieval has attracted intense research efforts due to its potential for a wide range of applications, such as 3D object repair and predictive digitization. This work introduces a partial 3D object retrieval method, applicable on both point clouds and structured 3D models, which is based on a shape matching scheme combining local shape descriptors with their Fisher encodings. Experiments on the SHREC 2013 large-scale benchmark dataset for partial object retrieval, as well as on the publicly available Hampson pottery dataset, demonstrate that the proposed method outperforms seven recently evaluated partial retrieval methods.|
|University of the Aegean||Information and Communication Systems Eng.||AI Lab||http://icsdweb.aegean.gr/ai-lab/||Machine Learning and Data Mining||N. Potha, M. Maragoudakis, D. Lyras, A Biology-Inspired, Data Mining Framework for Extracting Patterns in Sexual Cyberbullying Data, Knowledge-Based Systems, Elsevier, Volume 96, 15 March 2016, Pages 134–155.||Robotics, Perception and Vision||Diamantatos, P., E. Kavallieratou, and S. Gritzalis. "Skeleton Hinge Distribution for Writer Identification." International Journal on Artificial Intelligence Tools (2016).||Natural Language Processing||Stamatatos, E. "Plagiarism Detection Using Stopword n-grams." Journal of the American Society for Information Science and Technology, 62(12), pp. 2512-2527, Wiley, 2011.||3||1||8||0||Yes|
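The stopword n-gram idea behind the plagiarism-detection citation can be illustrated with a minimal sketch: strip a text down to its function words, keep their order, and compare documents by the n-grams of that stopword sequence, which tend to survive paraphrase. The stopword list and the Jaccard overlap below are simplifying assumptions, not the paper's actual profile or similarity measure.

```python
# Minimal illustration of stopword n-gram profiles for text comparison.
# The stopword list and the Jaccard similarity are illustrative stand-ins.

STOPWORDS = {"the", "of", "and", "a", "in", "to", "is", "that", "it",
             "on", "for", "with", "as", "by", "at", "an", "be", "this"}

def stopword_ngrams(text, n=3):
    """Keep only stopwords, in order, and return the set of n-grams."""
    words = [w.lower().strip(".,;:!?\"'()") for w in text.split()]
    kept = [w for w in words if w in STOPWORDS]
    return {tuple(kept[i:i + n]) for i in range(len(kept) - n + 1)}

def jaccard(a, b):
    """Overlap of two n-gram sets; 1.0 means identical profiles."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc1 = "The idea of the method is that it works on the order of the stopwords."
doc2 = "The idea of the method is that it works on the order of the words."
doc3 = "Completely different text without common function word patterns."

sim_close = jaccard(stopword_ngrams(doc1), stopword_ngrams(doc2))
sim_far = jaccard(stopword_ngrams(doc1), stopword_ngrams(doc3))
```

Because content words are discarded, `doc1` and `doc2` share an identical stopword profile even though their topics differ in the final word.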
|University of Patras||Computer Engineering & Informatics||AI Group||http://aigroup.ceid.upatras.gr/||Knowledge Representation, Reasoning, and Logic||Citation: Ioannis Hatzilygeroudis and Jim Prentzas, “Symbolic-neural rule based reasoning and explanation”, Expert Systems with Applications (ESWA) 42 (2015) 4595-4609.||Intelligent Tutoring and Intelligent e-Learning||Citation: Grivokostopoulou, F., Perikos, I. & Hatzilygeroudis, I. Int J Artif Intell Educ (2016). doi:10.1007/s40593-016-0116-x||Natural Language Processing||Citation: Isidoros Perikos and Ioannis Hatzilygeroudis, “Recognizing Emotions in Text Using Ensemble of Classifiers”, Engineering Applications of Artificial Intelligence (EAAI) Volume 51, May 2016, Pages 191–201.||1||2||3||0||Yes|
|Abstract: In this paper, we present neurule-based inference and explanation mechanisms. A neurule is a kind of integrated rule, which integrates a symbolic rule with neurocomputing: each neurule is considered as an adaline neural unit. Thus, a neurule base consists of a number of autonomous adaline units (neurules), expressed in a symbolic oriented syntax. There are two inference processes for neurules: the connectionism-oriented process, which gives pre-eminence to neurocomputing approach, and the symbolism-oriented process, which gives pre-eminence to a symbolic backwards chaining like approach. Symbolism-oriented process is proved to be more efficient than the connectionism-oriented one, in terms of the number of required computations (56,47% to 63,88% average reduction) and the mean runtime gain (59,95% to 64,89% in average), although both require almost the same average number of input values. The neurule-based explanation mechanism provides three types of explanations: ‘how’ a conclusion was derived, ‘why’ a value for a specific input variable was asked from the user and ‘why-not’ a variable has acquired a specific value. As shown by experiments, the neurule-based explanation mechanism is superior to that provided by known connectionist expert systems, another neuro-symbolic integration category. It provides less in number (64,38% to 69,26% average reduction) and more natural explanation rules, thus increasing efficiency (mean runtime gain of 56,73% in average) and comprehensibility of explanations.||Abstract: In this paper, first we present an educational system that assists students in learning and tutors in teaching search algorithms, an artificial intelligence topic. Learning is achieved through a wide range of learning activities. Algorithm visualizations demonstrate the operational functionality of algorithms according to the principles of active learning. 
So, a visualization process can stop and request from a student to specify the next step or explain the way that a decision was made by the algorithm. Similarly, interactive exercises assist students in learning to apply algorithms in a step-by-step interactive way. Students can apply an algorithm to an example case, specifying the algorithm’s steps interactively, with the system’s guidance and help, when necessary. Next, we present assessment approaches integrated in the system that aim to assist tutors in assessing the performance of students, reduce their marking task workload and provide immediate and meaningful feedback to the students. Automatic assessment is achieved in four stages, which constitute a general assessment framework. First, the system calculates the similarity between the student's and the correct answer using the edit distance metric. In the next stage, it identifies the type of the answer, based on an introduced answer categorization scheme related to completeness and accuracy of an answer, taking into account student carelessness too. Afterwards, the types of errors are identified, based on an introduced error categorization scheme. Finally, answer is automatically marked via an automated marker, based on its type, the edit distance and the type of errors made. To assess the learning effectiveness of the system an extended evaluation study was conducted in real class conditions. The experiment showed very encouraging results. Furthermore, to evaluate the performance of the assessment system, we compared the assessment mechanism against expert (human) tutors. A total of 400 students’ answers were assessed by three tutors and the results showed a very good agreement between the automatic assessment system and the tutors.||Abstract: Emotions constitute a key factor in human nature and behavior. The most common way for people to express their opinions, thoughts and communicate with each other is via written text. 
In this paper, we present a sentiment analysis system for the automatic recognition of emotions in text, using an ensemble of classifiers. The designed ensemble classifier schema is based on the notion of combining knowledge-based and statistical machine learning classification methods, aiming to benefit from their merits and minimize their drawbacks. The ensemble schema is based on three classifiers: two are statistical (a Naïve Bayes and a Maximum Entropy learner) and the third is a knowledge-based tool performing deep analysis of natural language sentences. The knowledge-based tool analyzes the sentence’s text structure and dependencies and implements a keyword-based approach, where the emotional state of a sentence is derived from the emotional affinity of the sentence’s emotional parts. The ensemble classifier schema has been extensively evaluated on various forms of text, such as news headlines, articles and social media posts. The experimental results indicate quite satisfactory performance regarding the ability to recognize the presence of emotion in text and also to identify the polarity of the emotions.|
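The ensemble idea in the abstract above can be illustrated with a minimal majority-vote sketch. The three component functions below are hypothetical toy stand-ins, not the actual trained Naïve Bayes, Maximum Entropy and knowledge-based classifiers of the paper; only the voting scheme reflects the described architecture.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label most classifiers agree on; on a full tie,
    fall back to the first (knowledge-based) classifier's label."""
    label, count = Counter(predictions).most_common(1)[0]
    return label if count > len(predictions) // 2 else predictions[0]

# Toy stand-ins for the three component classifiers (hypothetical rules).
def knowledge_based(text):
    return "negative" if "sad" in text else "positive"

def naive_bayes(text):
    return "positive" if "happy" in text else "negative"

def max_entropy(text):
    return "positive" if "!" in text else "negative"

def ensemble(text):
    votes = [knowledge_based(text), naive_bayes(text), max_entropy(text)]
    return majority_vote(votes)

print(ensemble("what a happy day!"))  # prints: positive
```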
|Aegean||Financial and Management Engineering||MDE-Lab||http://mde-lab.aegean.gr/||Multidisciplinary Topics||Citation: Kyriklidis, C. and Dounias, G., (2016): Evolutionary computation for resource leveling optimization in project management, Integrated Computer-Aided Engineering (IOS Press), Vol. 23, No. 2, pp. 173-184||Multidisciplinary Topics||Citation: K. Boulas, G. Dounias, C. Papadopoulos, Approximating throughput using genetic programming in small serial production lines, in “Operational Research in Business & Economics”, Eds. Grigoroudis E. and Doumpos M., Springer Series in Business and Economics, pp. 185-204 (2016), ISSN: 2198-7246||Multidisciplinary Topics||Citation: V. Vassiliadis and G. Dounias, 2016, Algorithmic Trading based on Biologically-Inspired Algorithms, to appear in Shu-Heng Chen and Mak Kaboudan (Eds), OUP Handbook on Computational Economics and Finance, Oxford University Press||1||0||5||0||Yes|
|Abstract: This paper proposes an evolutionary computation based approach for solving resource leveling optimization problems in project management. In modern management engineering, problems of this kind refer to the optimal handling of available resources in a candidate project and have emerged as the result of the ever increasing needs of project managers in facing project complexity, controlling related budgeting and finances and managing the construction production line. Standard approaches, such as exhaustive or greedy search methodologies, fail to provide near-optimum solutions in feasible time even for small scale problems, whereas intelligent approaches manage to quickly reach high quality near-optimal solutions. In this work, a new genetic algorithm is proposed which investigates the start times of the non-critical activities of a project in order to optimally allocate its resources. The innovation of the proposed approach lies in certain genetic operations applied, such as crossover, for the improvement of the solution quality from generation to generation. The presentation and performance comparison of all multi-objective functions for resource leveling available in the literature is another interesting part of this work. Detailed experiments with small and medium size benchmark problems taken from publicly available project data resources produce highly accurate resource profiles. As shown in the experimental results, the proposed methodology proves capable of coping even with large size project management problems without the need to divide the original problem into sub-problems due to complexity.
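A genetic algorithm over the start times of non-critical activities, as described above, can be sketched as follows. The toy activity data, horizon and the sum-of-squares leveling objective are illustrative assumptions, not the paper's benchmark instances or its actual objective functions.

```python
import random

# Toy project: each non-critical activity = (duration, resource demand, total float).
ACTIVITIES = [(2, 3, 4), (3, 2, 3), (1, 5, 5)]
HORIZON = 10

def usage_profile(starts):
    """Daily resource usage implied by a chromosome of start times."""
    profile = [0] * HORIZON
    for (duration, demand, _), start in zip(ACTIVITIES, starts):
        for day in range(start, start + duration):
            profile[day] += demand
    return profile

def fitness(starts):
    """Leveling objective: sum of squared daily usage (lower = flatter profile)."""
    return sum(u * u for u in usage_profile(starts))

def random_chromosome():
    # A gene is a start-time shift within the activity's total float.
    return [random.randint(0, flt) for (_, _, flt) in ACTIVITIES]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30):
    random.seed(0)  # deterministic run for the example
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]          # elitist selection
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=fitness)

best = evolve()
```

Because genes are bounded by each activity's float and crossover only recombines valid genes, every chromosome stays feasible; the total resource-days in any profile are conserved, only their distribution over the horizon changes.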
|Abstract: Genetic Programming (GP) has been used in a variety of fields to solve complicated problems. This paper shows that GP can be applied in the domain of serial production systems for acquiring useful measurements and line characteristics such as throughput. Extensive experimentation has been performed in order to set up the genetic programming implementation and to deal with problems like code bloat or overfitting. We improve previous work on the estimation of throughput for three stages and present a formula for the estimation of the throughput of production
lines with four stations. Further work is needed, but so far, the results are promising.
|Abstract: In recent years, algorithmic trading has become an emerging trend in finance. Algorithmic trading refers to any automated process, consisting of a number of interconnected components, whose main aim is to perform financial transactions of any kind. Its main advantage lies in the fact that human intervention is minimized to an acceptable extent. This is quite desirable, due to the fact that nowadays there are numerous factors affecting financial decisions, while financial managers are able to deal with only a limited amount of information. There are many ways to implement algorithmic trading systems. In this study, the main aim is to highlight the efficiency of biologically-inspired methodologies when incorporated in such systems. Biologically-inspired intelligence comprises a range of algorithms whose common philosophy is based on the behavior of real world, natural systems and networks. What is more, the performance of the applied nature-inspired intelligent (NII) methodologies is compared to traditional benchmark approaches, such as random portfolio construction.
|Aegean||Department of Information & Communication Systems Engineering
|Konstantinos Karampidis||Machine Learning and Data Mining||Citation: Karampidis, Konstantinos, and Giorgios Papadourakis. "File Type Identification for Digital Forensics." International Conference on Advanced Information Systems Engineering. Springer International Publishing, 2016.||Other (Specify in the cell below)||Citation: Karampidis, K., G. Papadourakis, and I. Deligiannis. "File type identification–a literature review." Proceedings of 9th International Conference on New Horizons in Industry Business and Education, NHIBE. 2015.||Other (Specify in the cell below)||Yes|
|Abstract: In the modern world, the use of digital devices for leisure or professional reasons (computers, tablets, smartphones etc.) is growing quickly. Nevertheless, criminals try to fool the authorities and hide evidence in a computer or any other digital device by changing the file type. File type detection is a very demanding task for a digital forensic examiner. In this paper a new methodology is proposed – from a digital forensics perspective – to identify altered file types with high accuracy by employing computational intelligence techniques. The proposed methodology is applied to the most common image file types (jpg, png and gif). A three-stage process involving feature extraction (Byte Frequency Distribution), feature selection (genetic algorithm) and classification (neural network) is proposed. Experiments were conducted on files altered from a digital forensics perspective and the results are presented. The proposed model shows exceptionally high accuracy in file type identification.||Digital Forensics||Abstract: The rapid growth and use of digital devices (e.g. computers, android tablets and smartphones) has made people vulnerable to cybercrimes. Dr. Debarati Halder and Dr. K. Jaishankar (2011) define cybercrimes as: "Offences that are committed against individuals or groups of individuals with a criminal motive to intentionally harm the reputation of the victim or cause physical or mental harm, or loss, to the victim directly or indirectly, using modern telecommunication networks such as Internet (Chat rooms, emails, notice boards and groups) and mobile phones (SMS/MMS)". For instance, one major and loathsome crime is child pornography. A child predator may try to hide evidence in a computer or any other digital device by changing the file type. This can easily be done by altering the file extension or the file signature. 
A digital forensic examiner, on the other hand, uses forensic software to accurately identify file types in order to determine which files may contain potential evidence. Nevertheless, current type recognition mechanisms are vulnerable to simple ‘attacks’, and even the most widely used commercial forensic software suites may fail to correctly identify an intentionally altered file.
For example, if someone changes a file extension from .jpg to .doc, the forensic software will identify that the file type has been changed. Nevertheless, if the file signature is changed as well, in order to correspond to a .doc file, the forensic software's detection algorithm may show poor results. Another important field where file type identification must be quick and accurate is spam e-mail. Every day massive amounts of spam e-mails are received and a lot of time is spent deleting them. Unfortunately, this is not the only disadvantage: network bandwidth is consumed, e-mail servers slow down and, eventually, an inexperienced end user may not be able to tell whether an e-mail hides malicious content. These are only a few examples of the possible damage caused by unsuccessful file type recognition. This literature review will try to examine all possible practices of identifying a file type and to record current recognition techniques.
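The first stage of the pipeline described above, Byte Frequency Distribution, is simple to sketch: a 256-bin normalized histogram of a file's byte values, which is robust to extension and signature tampering because it reflects the whole content. The payload bytes are illustrative; the genetic-algorithm selection and neural-network classification stages are omitted.

```python
def byte_frequency_distribution(data: bytes):
    """256-bin histogram of byte values, normalised to frequencies.
    In the described pipeline this vector feeds a genetic algorithm for
    feature selection and then a neural network classifier."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data) or 1   # avoid division by zero for empty files
    return [c / total for c in counts]

# A JPEG always starts with the signature bytes FF D8 FF, so even a
# renamed .jpg betrays itself in the high 0xFF frequency of its header.
bfd = byte_frequency_distribution(b"\xff\xd8\xff\xe0" + b"payload!")
print(bfd[0xFF])  # relative frequency of the 0xFF byte
```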
|Abstract: Symbolic event recognition systems have been successfully applied to a variety of application domains, extracting useful information in the form of events, allowing experts or other systems to monitor and respond when significant events are recognised. In a typical event recognition application, however, these systems often have to deal with a significant amount of uncertainty. In this article, we address the issue of uncertainty in logic-based event recognition by extending the Event Calculus with probabilistic reasoning. Markov logic networks are a natural candidate for our logic-based formalism. However, the temporal semantics of the Event Calculus introduce a number of challenges for the proposed model. We show how and under what assumptions we can overcome these problems. Additionally, we study how probabilistic modelling changes the behaviour of the formalism, affecting its key property—the inertia of fluents. Furthermore, we demonstrate the advantages of the probabilistic Event Calculus through examples and experiments in the domain of activity recognition, using a publicly available dataset for video surveillance.|
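The key property the probabilistic extension must preserve, the inertia of fluents, is easy to see in the crisp Event Calculus: a fluent keeps its value from the moment it is initiated until an event terminates it. A minimal sketch with hypothetical time points:

```python
def holds_at(t, initiations, terminations):
    """Crisp Event Calculus inertia: the fluent holds at time t iff it was
    initiated at some point s <= t and not terminated in (s, t]."""
    last_init = max((s for s in initiations if s <= t), default=None)
    if last_init is None:
        return False          # never initiated: does not hold
    return not any(last_init < s <= t for s in terminations)

# Hypothetical timeline: initiated at 1 and 10, terminated at 5.
assert holds_at(3, {1, 10}, {5})       # holds by inertia since time 1
assert not holds_at(7, {1, 10}, {5})   # terminated at 5, nothing since
assert holds_at(12, {1, 10}, {5})      # re-initiated at 10
```

In the Markov-logic setting discussed in the abstract, this hard persistence becomes a soft constraint, which is exactly what changes the formalism's behaviour.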
|Foundation for Research and Technology Hellas||Institute of Computer Science||Information Systems Lab||http://www.ics.forth.gr/isl||Knowledge Representation, Reasoning, and Logic||Citation: Theodore Patkos, Dimitris Plexousakis, Abdelghani Chibani, Yacine Amirat. An Event Calculus Production Rule System for Reasoning in Dynamic and Uncertain domains. Theory and Practice of Logic Programming, vol. 16 no. 3, pp. 325-352, 2016.||Other (Specify in the cell below)||Citation: Theodore Patkos, Giorgos Flouris, Antonis Bikakis. A Multi-Aspect Evaluation Framework for Comments on the Social Web. In the proceedings of the 15th International Conference on Principles of Knowledge Representation and Reasoning (KR'16), pp. 593-596, 2016||Web and Knowledge-based Information Systems||Citation: Troullinou, G., Kondylakis, H., Daskalaki, E., & Plexousakis, D. Ontology Understanding without Tears: The summarization approach. In press for the Semantic Web journal, 2016||2||1||5||0||Yes|
|Abstract: Action languages have emerged as an important field of Knowledge Representation for reasoning about change and causality in dynamic domains. This article presents Cerbere, a production system designed to perform online causal, temporal and epistemic reasoning based on the Event Calculus. The framework implements the declarative semantics of the underlying logic theories in a forward-chaining rule-based reasoning system, coupling the high expressiveness of its formalisms with the efficiency of rule-based systems. To illustrate its applicability, we present both the modeling of benchmark problems in the field, as well as its utilization in the challenging domain of smart spaces. A hybrid framework that combines logic-based with probabilistic reasoning has been developed, aiming to accommodate activity recognition and monitoring tasks in smart spaces.||Computational Argumentation||Abstract: Users’ reviews, comments and votes on the Social Web form the modern version of word-of-mouth communication, which has a huge impact on people’s habits and businesses. Nonetheless, there are only a few attempts to formally model and analyze them using Computational Models of Argument, which have achieved a first significant step in bringing these two fields closer. This work formalizes standard features of the Social Web, such as commentary and social voting, and proposes methods for the evaluation of the comments’ quality and acceptance.||Abstract: Given the explosive growth in both data size and schema complexity, data sources are becoming increasingly difficult to use and comprehend. Summarization aspires to produce an abridged version of the original data source highlighting its most representative concepts. In this paper, we present an advanced version of RDF Digest, a novel platform that automatically produces and visualizes high-quality summaries of RDF/S Knowledge Bases (KBs). 
A summary is a valid RDFS graph that includes the most representative concepts of the schema, adapted to the corresponding instances. To construct this graph we designed and implemented two algorithms that exploit both the structure of the corresponding graph and the semantics of the KB. Initially we identify the most important nodes using the notion of relevance. Then we explore how to select the edges connecting these nodes by maximizing either locally or globally the importance of the selected edges. The extensive evaluation performed compares our system with two other systems and shows the benefits of our approach and the considerable advantages gained.|
|Aristotle University of Thessaloniki||Informatics||Intelligent Systems||http://intelligence.csd.auth.gr/||Machine Learning and Data Mining||Citation: E. Spyromitros-Xioufis, G. Tsoumakas, W. Groves, I. Vlahavas (2016) Multi-Target Regression via Input Space Expansion: Treating Targets as Inputs. Machine Learning Journal 104(1), 55-98.||Knowledge Representation, Reasoning, and Logic||Citation: N. Bassiliades, M. Symeonidis, G. Meditskos, E. Kontopoulos, P. Gouvas, I. Vlahavas, A Semantic Recommendation Algorithm for the PaaSport Platform-as-a-Service Marketplace, Expert Systems with Applications, Available online 22 Sep 2016, http://dx.doi.org/10.1016/j.eswa.2016.09.032.||Other (Specify in the cell below)||Citation: T. Stavropoulos, E. Kontopoulos, N. Bassiliades, J. Argyriou, A. Bikakis, D. Vrakas, I. Vlahavas, Rule-based Approaches for Energy Savings in an Ambient Intelligence Environment, Journal of Pervasive and Mobile Computing, Elsevier, Vol. 19, pp. 1-23, 2015||4||3||17||3||Yes|
|Abstract: In many practical applications of supervised learning the task involves the prediction of multiple target variables from a common set of input variables. When the prediction targets are binary the task is called multi-label classification, while when the targets are continuous the task is called multi-target regression. In both tasks, target variables often exhibit statistical dependencies and exploiting them in order to improve predictive accuracy is a core challenge. A family of multi-label classification methods addresses this challenge by building a separate model for each target on an expanded input space where other targets are treated as additional input variables. Despite the success of these methods in the multi-label classification domain, their applicability and effectiveness in multi-target regression has not been studied until now. In this paper, we introduce two new methods for multi-target regression, called stacked single-target and ensemble of regressor chains, by adapting two popular multi-label classification methods of this family. Furthermore, we highlight an inherent problem of these methods—a discrepancy of the values of the additional input variables between training and prediction—and develop extensions that use out-of-sample estimates of the target variables during training in order to tackle this problem. The results of an extensive experimental evaluation carried out on a large and diverse collection of datasets show that, when the discrepancy is appropriately mitigated, the proposed methods attain consistent improvements over the independent regressions baseline. 
Moreover, two versions of Ensemble of Regression Chains perform significantly better than four state-of-the-art methods including regularization-based multi-task learning methods and a multi-objective random forest approach.||Platform as a service (PaaS) is one of the Cloud computing services that provides a computing platform in the Cloud, allowing customers to develop, run, and manage web applications without the complexity of building and maintaining the infrastructure. The primary disadvantage for an SME entering the emerging PaaS market is the possibility of being locked in to a certain platform, mostly provided by the market's giants. The PaaSport project focuses on facilitating SMEs to deploy business applications on the best-matching Cloud PaaS offering and to seamlessly migrate these applications on demand, via a thin, non-intrusive Cloud-broker, in the form of a Cloud PaaS Marketplace. PaaSport enables PaaS provider SMEs to roll out semantically interoperable PaaS offerings, by annotating them using a unified PaaS semantic model that has been defined as an OWL ontology. In this paper we focus on the recommendation algorithm that has been developed on top of the ontology, for providing the application developer with recommendations about the best-matching Cloud PaaS offering. The algorithm consists of: a) a matchmaking part, where the functional parameters of the application are taken into account to rule out inconsistent offerings, and b) a ranking part, where the non-functional parameters of the application are considered to score and rank offerings. The algorithm is extensively evaluated, showing linear scalability with the number of offerings and application requirements. 
Furthermore, it is extensible upon future semantic model extensions, because it is agnostic to domain specific concepts and parameters, using SPARQL template queries.||Ambient Intelligence||Abstract: This paper presents a novel real-world application for energy savings in a Smart Building environment. The proposed system unifies heterogeneous wireless sensor networks under a Semantic Web Service middleware. Two complementary and mutually exclusive rule-based approaches for enforcing energy-saving policies are proposed: a reactive agent based on production rules and a deliberative agent based on defeasible logic. The system was deployed at a Greek University, showing promising experimental results (at least 4% daily savings). Although the percentage of energy savings may seem low, the greatest merit of the method is ensuring no energy is wasted by constantly enforcing the policies.|
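The stacked single-target scheme summarized in the multi-target regression abstract above can be sketched in a few lines: stage-one models predict each target independently, and stage-two models are trained on the input space expanded with those predictions. For brevity this sketch uses in-sample stage-one predictions, i.e. exactly the training/prediction discrepancy that the paper's out-of-sample extensions mitigate; the 1-nearest-neighbour base learner is an illustrative stand-in.

```python
def nn1_fit(X, y):
    """1-nearest-neighbour regressor: an illustrative stand-in for the
    base learners used with such methods (e.g. regression trees)."""
    def predict(x):
        i = min(range(len(X)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(X[j], x)))
        return y[i]
    return predict

def stacked_single_target(X, Y):
    """Stage 1: one independent model per target.
    Stage 2: one model per target on inputs expanded with all stage-1
    predictions (in-sample here, for brevity)."""
    n_targets = len(Y[0])
    stage1 = [nn1_fit(X, [row[t] for row in Y]) for t in range(n_targets)]
    X_exp = [x + [m(x) for m in stage1] for x in X]
    stage2 = [nn1_fit(X_exp, [row[t] for row in Y]) for t in range(n_targets)]

    def predict(x):
        x_exp = x + [m(x) for m in stage1]
        return [m(x_exp) for m in stage2]
    return predict

model = stacked_single_target([[0.0], [1.0], [2.0]],
                              [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
```

The expanded input space is what lets stage-two models exploit statistical dependencies between targets, the core challenge named in the abstract.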
|International Hellenic University||School of Science and Technology||Data mining group||http://tech.ihu.edu.gr||Machine Learning and Data Mining||Citation: Tzirakis P. and Tjortjis C., “T3C: Improving a Decision Tree Classification Algorithm's Interval Splits on Continuous Attributes”, to appear at Advances in Data Analysis and Classification, doi: 10.1007/s11634-016-0246-x||Knowledge Representation, Reasoning, and Logic||Citation: Kontopoulos, E., Berberidis, C., Dergiades, T., & Bassiliades, N. (2013). Ontology-based sentiment analysis of twitter posts. Expert systems with applications, 40(10), 4065-4074||Multidisciplinary Topics||Citation: Ougiaroglou S., Evangelidis G., “Efficient k-NN Classification based on Homogeneous Clusters”, Artificial Intelligence Review, Springer, Volume 42, Issue 3, pp. 491-513, 2014.||3||0||6||0||Yes|
|Abstract: This paper proposes, describes and evaluates T3C, a classification algorithm that builds decision trees of depth at most three, and results in high accuracy whilst keeping the size of the tree reasonably small. T3C is an improvement over algorithm T3 in the way it performs splits on continuous attributes. When run against publicly available data sets, T3C achieved lower generalisation error than T3 and the popular C4.5, and competitive results compared to Random Forest and Rotation Forest.||Abstract: The emergence of Web 2.0 has drastically altered the way users perceive the Internet, by improving information sharing, collaboration and interoperability. Micro-blogging is one of the most popular Web 2.0 applications, and related services, like Twitter, have evolved into a practical means for sharing opinions on almost all aspects of everyday life. Consequently, micro-blogging web sites have since become rich data sources for opinion mining and sentiment analysis. In this direction, text-based sentiment classifiers often prove inefficient, since tweets typically do not consist of representative and syntactically consistent words, due to the imposed character limit. This paper proposes the deployment of original ontology-based techniques towards a more efficient sentiment analysis of Twitter posts. The novelty of the proposed approach is that posts are not simply characterized by a sentiment score, as is the case with machine learning-based classifiers, but instead receive a sentiment grade for each distinct notion in the post. Overall, our proposed architecture results in a more detailed analysis of post opinions regarding a specific topic.||Abstract: The k-NN classifier is a widely used classification algorithm. However, exhaustively searching the whole dataset for the nearest neighbors is prohibitive for large datasets because of the high computational cost involved. 
The paper proposes an efficient model for fast and accurate nearest neighbor classification. The model consists of a non-parametric cluster-based preprocessing algorithm that constructs a two-level speed-up data structure and algorithms that access this structure to perform the classification. Furthermore, the paper demonstrates how the proposed model can improve the performance on reduced sets built by various data reduction techniques. The proposed classification model was evaluated using eight real-life datasets and compared to known speed-up methods. The experimental results show that it is a fast and accurate classifier, and, in addition, it involves low pre-processing computational cost.|
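The two-level speed-up idea can be sketched as follows: training items are bucketed under their nearest centroid, and classification searches only one bucket instead of the whole dataset. The centroids are given here as an illustrative assumption, whereas the paper constructs homogeneous clusters non-parametrically.

```python
import math
from collections import Counter

def build_index(X, y, centroids):
    """Level 1: attach every training item to its nearest centroid."""
    buckets = {i: [] for i in range(len(centroids))}
    for point, label in zip(X, y):
        i = min(range(len(centroids)),
                key=lambda c: math.dist(point, centroids[c]))
        buckets[i].append((point, label))
    return buckets

def classify(q, centroids, buckets, k=1):
    """Level 2: k-NN search restricted to the bucket of the query's
    nearest centroid (assumes that bucket is non-empty)."""
    i = min(range(len(centroids)), key=lambda c: math.dist(q, centroids[c]))
    nearest = sorted(buckets[i], key=lambda item: math.dist(q, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

centroids = [(0, 0), (10, 10)]
buckets = build_index([(0, 1), (1, 0), (9, 10), (10, 9)],
                      ["a", "a", "b", "b"], centroids)
print(classify((0.5, 0.5), centroids, buckets))  # prints: a
```

Each query now costs one distance per centroid plus one per bucket member, rather than one per training item, which is the source of the speed-up.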
|Athens University of Economics and Business||Department of Informatics||Natural Language Processing Group||http://nlp.cs.aueb.gr||Natural Language Processing||G. Brokos, P. Malakasiotis and I. Androutsopoulos, "Using Centroids of Word Embeddings and Word Mover's Distance for Biomedical Document Retrieval in Question Answering". Proceedings of the 15th Workshop on Biomedical Natural Language Processing (BioNLP 2016), at the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany, pp. 114-118, 2016.||Machine Learning and Data Mining||M. Pontiki, D. Galanis, H. Papageorgiou, I. Androutsopoulos, S. Manandhar, M. AL-Smadi, M. Al-Ayyoub, Y. Zhao, B. Qin, O. De Clercq, V. Hoste, M. Apidianaki, X. Tannier, N. Loukachevitch, E. Kotelnikov, N. Bel, S.M. Jimenez-Zafra and G. Eryigit, "SemEval-2016 Task 5: Aspect Based Sentiment Analysis". Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval 2016), San Diego, CA, USA, pp. 19-30, 2016.||Knowledge Representation, Reasoning, and Logic||I. Androutsopoulos, G. Lampouras and D. Galanis, "Generating Natural Language Descriptions from OWL Ontologies: the NaturalOWL System". Journal of Artificial Intelligence Research, 48:671-715, 2013.||1||2||8||5||Yes|
|Abstract: We propose a document retrieval method for question answering that represents documents and questions as weighted centroids of word embeddings and reranks the retrieved documents with a relaxation of Word Mover's Distance. Using biomedical questions and documents from BioASQ, we show that our method is competitive with PubMed. With a top-k approximation, our method is fast, and easily portable to other domains and languages.||Abstract: This paper describes the SemEval-2016 shared task on Aspect Based Sentiment Analysis (ABSA), a continuation of the respective tasks of 2014 and 2015. In its third year, the task provided 19 training and 20 testing datasets for 8 languages and 7 domains, as well as a common evaluation procedure. From these datasets, 25 were for sentence-level and 14 for text-level ABSA; the latter was introduced for the first time as a subtask in SemEval. The task attracted 245 submissions from 29 teams.||Abstract: We present NaturalOWL, a natural language generation system that produces texts describing individuals or classes of OWL ontologies. Unlike simpler OWL verbalizers, which typically express a single axiom at a time in controlled, often not entirely fluent natural language primarily for the benefit of domain experts, we aim to generate fluent and coherent multi-sentence texts for end-users. With a system like NaturalOWL, one can publish information in OWL on the Web, along with automatically produced corresponding texts in multiple languages, making the information accessible not only to computer programs and domain experts, but also to end-users. We discuss the processing stages of NaturalOWL, the optional domain-dependent linguistic resources that the system can use at each stage, and why they are useful. 
We also present trials showing that when the domain-dependent linguistic resources are available, NaturalOWL produces significantly better texts compared to a simpler verbalizer, and that the resources can be created with relatively light effort.|
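Retrieval with centroids of word embeddings, as in the BioASQ abstract above, reduces to averaging word vectors and ranking by cosine similarity. The tiny two-dimensional "embeddings" below are hypothetical placeholders for real pre-trained vectors, and the Word Mover's Distance reranking step is omitted.

```python
import math

# Hypothetical 2-D word vectors (real systems use pre-trained embeddings).
EMB = {
    "heart": [1.0, 0.0], "cardiac": [0.9, 0.1],
    "gene": [0.0, 1.0], "dna": [0.1, 0.9],
}

def centroid(text):
    """Average of the embeddings of the known words in the text."""
    vectors = [EMB[w] for w in text.split() if w in EMB]
    n = len(vectors) or 1
    return [sum(v[i] for v in vectors) / n for i in range(2)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.hypot(*a), math.hypot(*b)
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents):
    """Rank documents by cosine similarity between centroids; a relaxed
    Word Mover's Distance would then rerank the top hits."""
    q = centroid(question)
    return max(documents, key=lambda d: cosine(q, centroid(d)))

docs = ["cardiac heart disease", "gene dna sequence"]
print(retrieve("heart attack", docs))  # prints: cardiac heart disease
```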
|Ioannina||Computer Science & Engineering||Information Processing and Analysis Research Group (IPAN)||http://ipan.cs.uoi.gr/||Machine Learning and Data Mining||Citation: Konstantinos Blekas, Aristidis Likas: Sparse regression mixture modeling with the multi-kernel relevance vector machine. Knowl. Inf. Syst. 39(2): 241-264 (2014)
|Robotics, Perception and Vision||Citation: Vasileios Karavasilis, Christophoros Nikou, Aristidis Likas: Visual tracking using spatially weighted likelihood of Gaussian mixtures. Computer Vision and Image Understanding 140: 43-57 (2015)
|Abstract: A regression mixture model is proposed where each mixture component is a multi-kernel version of the Relevance Vector Machine (RVM). This mixture model exploits the enhanced modeling capability of RVMs, due to their embedded sparsity enforcing properties. In order to deal with the selection problem of kernel parameters, a weighted multi-kernel scheme is employed, where the weights are estimated during training. The mixture model is trained using the maximum a posteriori approach, where the Expectation Maximization (EM) algorithm is applied, offering closed form update equations for the model parameters. Moreover, an incremental learning methodology is also presented that tackles the parameter initialization problem of the EM algorithm, along with a BIC-based model selection methodology to estimate the proper number of mixture components. We provide comparative experimental results using various artificial and real benchmark datasets that empirically illustrate the efficiency of the proposed mixture model.||Abstract: A probabilistic real time tracking algorithm is proposed where the target’s feature distribution is represented by a Gaussian mixture model (GMM). The target localization is achieved by maximizing its weighted likelihood in the image sequence. The role of the weight in the likelihood definition is important as it allows gradient based optimization to be performed, which would not be feasible in a context of standard likelihood representations. Moreover, the algorithm handles scale and rotation changes of the target, as well as appearance changes, which modify the components of the GMM. The real time performance is experimentally confirmed, while the algorithm has comparable performance with other state-of-the-art tracking algorithms.|
|Piraeus||Digital Systems||Ai LAB||http://ai-group.ds.unipi.gr/ai-group||Autonomous Agents and Multi-agent Systems||Citation: G. A. Vouros, Decentralized semantic coordination of interconnected entities via belief propagation. AI Communications, vol. 28, no. 4, pp. 617-634, 2015||Knowledge Representation, Reasoning, and Logic||Citation: G. Santipantakis, G. A. Vouros. Distributed reasoning with coupled ontologies: the E-SHIQ representation framework, Knowl Inf Syst (2015) 45: 491. doi:10.1007/s10115-014-0807-2||Machine Learning and Data Mining||Citation: A. Skarlatidis et al. Probabilistic Event Calculus for Event Recognition. ACM Transactions on Computational Logic (TOCL), Volume 16, Issue 2, March 2015
|1||2||5||0||Yes/No|
|Abstract: Agents in inherently distributed and open settings cannot be assumed to share an agreed ontology of their common task environment. To interact effectively, these agents need to establish semantic correspondences between their ontology elements. However, the correspondences computed by two agents may differ due to (a) differences in their ontologies, (b) different alignment methods used, and (c) different information one makes available to the other. Although semantic coordination methods have already been proposed for the computation of subjective correspondences between agents (i.e. correspondences from the viewpoint of a specific agent), this paper proposes a decentralized method for communities, groups and arbitrarily formed networks of interconnected agents to reach semantic agreements on subjective ontology elements’ correspondences, via belief propagation: Agents detect disagreements on correspondences via feedback they receive from others, and they revise their decisions with respect to their preferences on correspondences and the semantics of ontological specifications. This work addresses this problem by means of a distributed extension of the max-plus algorithm. Experimental results from a large number of networks of varying complexity show the strengths of the proposed approach.||Abstract: Combining ontologies in expressive fragments of Description Logics in inherently distributed peer-to-peer settings with autonomous peers is still a challenge in the general case. Although several modular ontology representation frameworks have been proposed for combining Description Logics knowledge bases, each of them has its own strengths and limitations. In this paper, we consider networks of peers, where each peer holds its own ontology within the SHIQ fragment of Description Logics, and subjective beliefs on how its knowledge can be coupled with the knowledge of others. 
To allow peers to reason jointly with their coupled knowledge, while preserving their autonomy on evolving their knowledge, data, and subjective beliefs, we propose the E-SHIQ representation framework. The article motivates the need for E-SHIQ and compares it to existing representation frameworks for modular Description Logics. It discusses the implementation of the E-SHIQ distributed reasoner and presents experimental results on the efficiency of this reasoner.||Abstract: Symbolic event recognition systems have been successfully applied to a variety of application domains, extracting useful information in the form of events, allowing experts or other systems to monitor and respond when significant events are recognised. In a typical event recognition application, however, these systems often have to deal with a significant amount of uncertainty. In this article, we address the issue of uncertainty in logic-based event recognition by extending the Event Calculus with probabilistic reasoning. Markov logic networks are a natural candidate for our logic-based formalism. However, the temporal semantics of the Event Calculus introduce a number of challenges for the proposed model. We show how and under what assumptions we can overcome these problems. Additionally, we study how probabilistic modelling changes the behaviour of the formalism, affecting its key property—the inertia of fluents. Furthermore, we demonstrate the advantages of the probabilistic Event Calculus through examples and experiments in the domain of activity recognition, using a publicly available dataset for video surveillance.|
|University College London||Information Studies||http://www.ucl.ac.uk/dis/people/antonis||Knowledge Representation, Reasoning, and Logic||Citation: Antonis Bikakis, Grigoris Antoniou, Panayiotis Hassapis:
Strategies for contextual reasoning with conflicts in ambient intelligence. Knowledge and Information Systems 27(1): 45-84 (2011)
|Web and Knowledge-based Information Systems||Citation: Theodore Patkos, Antonis Bikakis, Giorgos Flouris:
A Multi-Aspect Evaluation Framework for Comments on the Social Web. KR 2016: 593-596 (2016)
|Agent-based and integrated systems||Citation: Antonis Bikakis, Patrice Caire:
Computing Coalitions in Multiagent Systems: A Contextual Reasoning Approach. EUMAS 2014: 85-100 (2014)
|Abstract: Ambient Intelligence environments host various agents that collect, process, change and share the available context information. The imperfect nature of context, the open and dynamic nature of such environments and the special characteristics of ambient agents have introduced new research challenges in the study of Distributed Artificial Intelligence. This paper proposes a solution based on the Multi-Context Systems paradigm, according to which local knowledge of ambient agents is encoded in rule theories (contexts), and information flow between agents is achieved through mapping rules that associate concepts used by different contexts. To resolve potential inconsistencies that may arise from the interaction of contexts through their mappings (global conflicts), we use a preference ordering on the system contexts, which may express the confidence that an agent has in the knowledge imported by other agents. On top of this model, we have developed four alternative strategies for global conflicts resolution, which mainly differ in the type and extent of context and preference information that is used to resolve potential conflicts. The four strategies have been respectively implemented in four versions of a distributed algorithm for query evaluation and evaluated in a simulated P2P system.||Abstract: Users' reviews, comments and votes on the Social Web form the modern version of word-of-mouth communication, which has a huge impact on people's habits and businesses. Nonetheless, there are only few attempts to formally model and analyze them using Computational Models of Argument, which achieved a first significant step in bringing these two fields closer. 
In this paper, we attempt their further integration by formalizing standard features of the Social Web, such as commentary and social voting, and by proposing methods for the evaluation of the comments' quality and acceptance.||Abstract: In multiagent systems, agents often have to rely on other agents to reach their goals, for example when they lack a needed resource or do not have the capability to perform a required action. Agents therefore need to cooperate. Some of the questions then raised, such as, which agent to cooperate with, are addressed in the field of coalition formation. In this paper we go further and first, address the question of how to compute the solution space for the formation of coalitions using a contextual reasoning approach. We model agents as contexts in Multi-Context Systems (MCS) and dependence relations among agents as bridge rules. We then systematically compute all potential coalitions using algorithms for MCS equilibria. Finally, given a set of functional and non-functional requirements, we propose ways to select the best solutions. We illustrate our approach with an example from robotics.|
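The coalition-formation abstract above computes potential coalitions from dependence relations among agents. A toy sketch under a simplified capability-coverage model (agent names and capabilities are invented; the paper itself encodes agents as MCS contexts and computes coalitions via equilibria algorithms):

```python
from itertools import combinations

def potential_coalitions(agents, needs, offers):
    """A coalition is potential if every member's required capabilities
    appear in the coalition's combined pool of offered capabilities."""
    result = []
    for size in range(1, len(agents) + 1):
        for coalition in combinations(agents, size):
            pool = set().union(*(offers[a] for a in coalition))
            if all(needs[a] <= pool for a in coalition):
                result.append(coalition)
    return result

# hypothetical robots: r1 needs lifting help, r2 needs vision, r3 is self-sufficient
agents = ["r1", "r2", "r3"]
needs = {"r1": {"lift"}, "r2": {"vision"}, "r3": set()}
offers = {"r1": {"vision"}, "r2": {"lift"}, "r3": {"lift", "vision"}}
```

Here `("r1", "r2")` is a potential coalition (they cover each other's needs) while `("r1",)` alone is not; selecting among the feasible coalitions by non-functional requirements is the paper's final step.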
|Crete||Computer Science||Mens Ex Machina||www.mensxmachina.org||Machine Learning and Data Mining||Sofia Triantafillou, Ioannis Tsamardinos (2014): Constraint-based Causal Discovery from Multiple Interventions over Overlapping Variable Sets, Journal of Machine Learning Research 16, p. 1-47||Multidisciplinary topics (Bioinformatics)||Vincenzo Lagani, Argyro D. Karozou, David Gomez-Cabrero, Gilad Silberberg, Ioannis Tsamardinos (2016): A comparative evaluation of data-merging and meta-analysis methods for reconstructing gene-gene interactions, BMC Bioinformatics 17(S5), p. 194, BioMed Central Ltd, doi:10.1186/s12859-016-1038-1||Uncertainty in AI||Vincenzo Lagani, Sofia Triantafillou, Gordon Ball, Jesper Tegner, Ioannis Tsamardinos (2016): Probabilistic computational causal discovery for systems biology, Uncertainty in Biology 17, p. 33-73||1||7||0||5||Yes/No (This is compulsory.)|
|Scientific practice typically involves repeatedly studying a system, each time trying to unravel a different perspective. In each study, the scientist may take measurements under different experimental conditions (interventions, manipulations, perturbations) and measure different sets of quantities (variables). The result is a collection of heterogeneous data sets coming from different data distributions. In this work, we present algorithm COmbINE, which accepts a collection of data sets over overlapping variable sets under different experimental conditions; COmbINE then outputs a summary of all causal models indicating the invariant and variant structural characteristics of all models that simultaneously fit all of the input data sets. COmbINE converts estimated dependencies and independencies in the data into path constraints on the data-generating causal model and encodes them as a SAT instance. The algorithm is sound and complete in the sample limit. To account for conflicting constraints arising from statistical errors, we introduce a general method for sorting constraints in order of confidence, computed as a function of their corresponding p-values. In our empirical evaluation, COmbINE outperforms in terms of efficiency the only pre-existing similar algorithm; the latter additionally admits feedback cycles, but does not admit conflicting constraints which hinders the applicability on real data. As a proof-of-concept, COmbINE is employed to co-analyze 4 real, mass-cytometry data sets measuring phosphorylated protein concentrations of overlapping protein sets under 3 different interventions.||We address the problem of integratively analyzing multiple gene expression, microarray datasets in order to reconstruct gene-gene interaction networks. Integrating multiple datasets is generally believed to provide increased statistical power and to lead to a better characterization of the system under study. 
However, the presence of systematic variation across different studies makes network reverse-engineering tasks particularly challenging. We contrast two approaches that have been frequently used in the literature for addressing systematic biases: meta-analysis methods, which first calculate opportune statistics on single datasets and successively summarize them, and data-merging methods, which directly analyze the pooled data after removing eventual biases. This comparative evaluation is performed on both synthetic and real data, the latter consisting of two manually curated microarray compendia comprising several E. coli and Yeast studies, respectively. Furthermore, the reconstruction of the regulatory network of the transcription factor Ikaros in human Peripheral Blood Mononuclear Cells (PBMCs) is presented as a case-study.||Discovering the causal mechanisms of biological systems is necessary to design new drugs and therapies. Computational Causal Discovery (CD) is a field that offers the potential to discover causal relations and causal models under certain conditions with a limited set of interventions / manipulations. This chapter reviews the basic concepts and principles of CD, the nature of the assumptions to enable it, potential pitfalls in its application, and recent advances and directions. Importantly, several success stories in molecular and systems biology are discussed in detail.|
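The contrast above between meta-analysis (summarize per-study statistics) and data-merging (analyze the pooled data) can be illustrated with Fisher's method, a classic meta-analytic combiner of per-study p-values; this is an assumed textbook example, not necessarily one of the specific methods evaluated in the paper:

```python
import math

def fisher_combined_pvalue(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k d.f. under H0.
    For even degrees of freedom the survival function has a closed form:
    P(Chi2_{2k} > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!"""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)        # x/2
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
```

A single study passes through unchanged (`fisher_combined_pvalue([0.03]) == 0.03`), while two independent studies at p = 0.05 combine to roughly 0.017, which is the statistical-power argument for integrating multiple datasets in the first place.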
|Piraeus||Digital Systems||CBM (Computational Biomedicine Lab) / Ilias Maglogiannis||Biomedical Informatics||Citation: I. Valavanis, I. Maglogiannis, A. Chatziioannou, “Exploring robust diagnostic signatures for Cutaneous Melanoma utilizing Genetic and Imaging Data” IEEE Journal of Biomedical and Health Informatics JBHI 19 (1), 6849924, pp. 190-198 (2015)||Computer Vision||Citation: K. K. Delibasis, V. P. Plagianakos, I. Maglogiannis, “Refinement of human silhouette segmentation in omni-directional indoor videos”, Computer Vision and Image Understanding Elsevier 128: 65-83 (2014)||Pervasive Health Systems||Citation: I. Maglogiannis, C. Ioannou, P. Tsanakas, “Fall detection and activity identification using wearable and hand-held devices” Integrated Computer-Aided Engineering IOS Press 23(2): 161-172 (2016)||1||2||2||4|
|Multi-modal data combined in an integrated dataset can be used to aid the identification of instrumental biological actions that trigger the development of a disease. In this work, we use an integrated dataset related to cutaneous melanoma that fuses two separate sets providing complementary information (gene expression profiling and imaging). Our first goal is to select a subset of genes that comprise candidate genetic biomarkers. The derived gene signature is then utilized in order to select imaging features, which characterize disease at a macroscopic level, presenting the highest mutual information content to the selected genes. Using information gain ratio measurements and exploration of the Gene Ontology tree, we identified a set of thirty-two uncorrelated genes with a pivotal role as regards molecular regulation of melanoma, whose expression across samples correlates highly with the different pathological states. These genes steered the selection of a subset of uncorrelated imaging features based on their ranking according to mutual information measurements to the selected gene expression values. Selected genes and imaging features were used to train various classifiers that could generalize well when discriminating malignant from benign melanoma samples. Results on the selection of imaging features and classification were compared to feature selection based on a straightforward statistical selection and a stochastic-based methodology. Genes in the backstage of low-level biological processes were shown to carry higher information content than the macroscopic imaging features.||In this paper, we present a methodology for refining the segmentation of human silhouettes in indoor videos acquired by fisheye cameras. This methodology is based on a fisheye camera model that employs a spherical optical element and central projection. 
The parameters of the camera model are determined only once (during calibration), using the correspondence of a number of user-defined landmarks, both in real world coordinates and on a captured video frame. Subsequently, each pixel of the video frame is inversely mapped to the direction of view in the real world and the relevant data are stored in look-up tables for fast utilization in real-time video processing. The proposed fisheye camera model enables the inference of possible real world positions and conditionally the height and width of a segmented cluster of pixels in the video frame. In this work we utilize the proposed calibrated camera model to achieve a simple geometric reasoning that corrects gaps and mistakes of the human figure segmentation, detects segmented human silhouettes inside and outside the room and rejects segmentation that corresponds to non-human activity. Unique labels are assigned to each refined silhouette, according to their estimated real world position and appearance and the trajectory of each silhouette in real world coordinates is estimated. Experimental results are presented for a number of video sequences, in which the number of false positive pixels (regarding human silhouette segmentation) is substantially reduced as a result of the application of the proposed geometry-based segmentation refinement.||Human motion data captured from wearable devices such as smart watches can be utilized for activity recognition and emergency event detection, especially in the case of elderly or disabled people living independently in their homes. The output of such sensors is streams of physical activity data that require real-time recognition, especially in emergency situations. This paper presents a novel application that utilizes the low-cost Pebble Smart Watch together with an Android device (i.e. a smart phone) and allows the efficient capturing, transmission, storage and processing of such motion data. 
The paper includes the technical details of the stream data capturing and processing methodology, along with a comparison of the major algorithms used for the classification of physical activity type (i.e. Mild, Moderate, Intense and Sleep).|
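The melanoma study above ranks imaging features by their mutual information with the selected gene expression values. A minimal sketch of discrete mutual information estimated from paired observations (the data below are invented, not the paper's):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired discrete observations of equal length."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))           # joint counts
    px, py = Counter(xs), Counter(ys)    # marginal counts
    # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

Perfectly dependent binary sequences give 1 bit; independent ones give 0, which is the ordering such a feature-ranking step relies on.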
|University of Patras||Electrical and Computer Engineering||AI Group||http://www.wcl.ece.upatras.gr/en/ai||Machine Learning and Data Mining||Citation: Self-trained LMT for semisupervised learning
N Fazakis, S Karlos, S Kotsiantis, K Sgarbas
Computational intelligence and neuroscience 2016, 10
|Natural Language Processing||Citation: Automatic Sound Recognition of Urban Environment Events
T Theodorou, I Mporas, N Fakotakis
International Conference on Speech and Computer, 129-136, 2015
|Other (Specify in the cell below)||Citation: A Quantum Probability Splitter and its Application to Qubit Preparation
Quantum Information Processing 12 (1), 601-610, 2013
|3||1||15||0||Yes (This is compulsory.)|
|Abstract: The most important asset of semisupervised classification methods is the use of available unlabeled data combined with a clearly smaller set of labeled examples, so as to increase the classification accuracy compared with the default procedure of supervised methods, which on the other hand use only the labeled data during the training phase. Both the absence of automated mechanisms that produce labeled data and the high cost of the human effort needed for completing the labeling procedure in several scientific domains raise the need for semisupervised methods which counterbalance this phenomenon. In this work, a self-trained Logistic Model Trees (LMT) algorithm is presented, which combines the characteristics of Logistic Trees under the scenario of poor available labeled data. We performed an in-depth comparison with other well-known semisupervised classification methods on standard benchmark datasets and we finally concluded that the presented technique had better accuracy in most cases.||Abstract: The audio analysis of a speaker's surroundings has been a first step for several processing systems that enable the speaker's mobility through his daily life. These algorithms usually operate in a short-time analysis decomposing the incoming events in the time and frequency domains. In this paper, an automatic sound recognizer is studied, which investigates audio events of interest from the urban environment. Our experiments were conducted using a close set of audio events from which well known and commonly used ...||Quantum AI/IP||Abstract: This paper presents a quantum probability splitter, i.e. a quantum circuit that modifies the probability amplitudes of a qubit so that the probability on the selected basis state is halved. 
It also presents a potential application of this circuit related to qubit preparation: it shows how a qubit is prepared to a superposition of any valid pair of basis states probabilities, by repeated application of the quantum probability splitter.|
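The self-trained LMT abstract above follows the generic self-training recipe: train on the labeled pool, pseudo-label the unlabeled examples the model is confident about, move them into the training set, and retrain. A sketch of that loop with a stand-in one-dimensional nearest-centroid base learner (the paper's base learner is LMT; the classifier and data here are invented for illustration):

```python
import math

class NearestCentroid:
    """Toy base learner: one centroid per class over 1-D features."""
    def fit(self, X, y):
        self.cents = {c: sum(x for x, yy in zip(X, y) if yy == c) /
                         sum(1 for yy in y if yy == c)
                      for c in set(y)}
        return self
    def predict_proba(self, X):
        out = []
        for x in X:
            w = {c: math.exp(-abs(x - m)) for c, m in self.cents.items()}
            tot = sum(w.values())
            out.append({c: v / tot for c, v in w.items()})
        return out

def self_train(clf, X_lab, y_lab, X_unlab, threshold=0.9, max_iter=10):
    X_lab, y_lab, X_unlab = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(max_iter):
        clf.fit(X_lab, y_lab)
        remaining, moved = [], False
        for x, probs in zip(X_unlab, clf.predict_proba(X_unlab)):
            label, p = max(probs.items(), key=lambda kv: kv[1])
            if p >= threshold:               # pseudo-label confident examples
                X_lab.append(x); y_lab.append(label); moved = True
            else:
                remaining.append(x)
        X_unlab = remaining
        if not moved or not X_unlab:
            break
    clf.fit(X_lab, y_lab)
    return clf
```

Examples the model never becomes confident about are simply left unlabeled, which is the usual safeguard against reinforcing early mistakes.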
|Technical University of Crete (TUC)||School of Electrical and Computer Engineering (ECE)||InteLLigence (G. Chalkiadakis' Group)||http://www.intelligence.tuc.gr||Agent-based and integrated systems||Hliaoutakis A., Chalkiadakis G.: Agent-Based Modeling of Ancient Societies and their Organization Structure, Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS). In press. (Published online 25 Jan 2016)||Autonomous Agents and Multi-agent Systems||Akasiadis C., Chalkiadakis G.: Decentralized Large-Scale Electricity Consumption Shifting by Prosumer Cooperatives, In Proc. of the 22nd European Conference on Artificial Intelligence, The Hague, The Netherlands, September 2016||Uncertainty in AI||Angelidakis A., Chalkiadakis G.: Factored MDPs for Optimal Prosumer Decision-Making, Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-2015), Istanbul, Turkey, May 2015. Best student paper award finalist||1||0||5||5||Yes (This is compulsory.)|
|Some of the most interesting questions one can ask about early societies, are about people and their relations, and the nature and scale of their organization. In this work, we attempt to answer such questions with approaches introduced by multiagent systems. Specifically, we developed a generic agent-based model (ABM) for simulating ancient societies. Unlike most existing ABMs used in archaeology, our model includes agents that are autonomous and utility-based. Our model can (and does) also incorporate different social organization paradigms and technologies used in ancient societies. Equipped with such paradigms, our model allows us to explore the transition from a simple to a more complex society by focusing on the historical social dynamics—i.e., the flexibility and evolution of power relationships depending on social context and time. As a case study, we employ our model to evaluate the impact of the implemented social and technological paradigms on an artificial Early Bronze Age “Minoan” society located at a particular region of the island of Crete. Model parameter choices are based on archaeological evidence and studies, but are not biased towards any specific assumption. Results over a number of different simulation scenarios demonstrate an impressive sustainability for settlements consisting of and adopting a socio-economic organization model based on self-organization, and which was inspired by a recent framework for modern self-organizing agent organizations. This is the first time a self-organization approach is incorporated in an archaeology ABM system.||In this work we address the problem of coordinated consumption shifting for electricity prosumers. We show that individual optimization with respect to electricity prices does not always lead to minimized costs, thus necessitating a cooperative approach. 
A prosumer cooperative employs an internal cryptocurrency mechanism for coordinating members decisions and distributing the collectively generated profits. The mechanism generates cryptocoins in a distributed fashion, and awards them to participants according to various criteria, such as contribution impact and accuracy between stated and final shifting actions. In particular, when a scoring rules-based distribution method is employed, participants are incentivized to be accurate. When tested on a large dataset with real-world production and consumption data, our approach is shown to provide incentives for accurate statements and increased economic profits for the cooperative.||Tackling the decision-making problem faced by a prosumer (i.e., a producer that is simultaneously a consumer) when selling and buying energy in the emerging smart electricity grid, is of utmost importance for the economic profitability of such a business entity. In this paper, we model, for the first time, this problem as a factored Markov Decision Process. By so doing, we are able to represent the problem compactly, and provide an exact optimal solution via dynamic programming—notwithstanding its large size. Our model successfully captures the main aspects of the business decisions of a prosumer corresponding to a community microgrid of any size. Moreover, it includes appropriate sub-models for prosumer production and consumption prediction. Experimental simulations verify the effectiveness of our approach; and show that our exact value iteration solution matches that of a state-of-the-art method for stochastic planning in very large environments, while outperforming it in terms of computation time.|
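The factored-MDP abstract above solves the prosumer decision problem exactly via dynamic programming. A minimal flat (non-factored) value-iteration sketch over an invented two-state battery example, just to show the dynamic-programming core; the paper's contribution is the compact factored representation, which this sketch does not attempt:

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-6):
    """P[(s, a)]: list of (next_state, probability); R[(s, a)]: immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# hypothetical prosumer battery: sell when charged, or wait for recharge
states, actions = ["charged", "empty"], ["sell", "store"]
P = {("charged", "sell"): [("empty", 1.0)],
     ("charged", "store"): [("charged", 1.0)],
     ("empty", "sell"): [("empty", 1.0)],
     ("empty", "store"): [("charged", 0.8), ("empty", 0.2)]}
R = {("charged", "sell"): 1.0, ("charged", "store"): 0.0,
     ("empty", "sell"): 0.0, ("empty", "store"): 0.0}
```

The fixed point assigns a higher value to the charged state, and a positive value even to the empty state because recharging and selling later still pays off under discounting.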
|Technical University of Crete||School of Electrical and Computer Engineering||Intelligent Systems Lab||http://www.intelligence.tuc.gr/~lagoudakis||Machine Learning and Data Mining||Citation: Rexakis I., Lagoudakis M.: Directed Policy Search for Decision Making Using Relevance Vector Machines, International Journal on Artificial Intelligence Tools, Volume 23, Issue 04, August 2014.||Robotics, Perception and Vision||Citation: Piperakis S., Orfanoudakis E., Lagoudakis M.: Predictive Control for Dynamic Locomotion of Real Humanoid Robots, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, USA, September 2014, pp. 4036-4043.||Constraints, Satisfiability, and Search||Citation: Vasilikos V., Lagoudakis M.: Optimization of Heuristic Search using Recursive Algorithm Selection and Reinforcement Learning, Annals of Mathematics and Artificial Intelligence, 60 (1-2), 2010, pp. 119-151.
|Abstract: Several recent learning approaches in decision making under uncertainty suggest the use of classifiers for representing policies compactly. The space of possible policies, even under such structured representations, is huge and must be searched carefully to avoid computationally expensive policy simulations (rollouts). In our recent work, we proposed a method for directed exploration of policy space using support vector classifiers, whereby rollouts are directed to states around the boundaries between different action choices indicated by the separating hyperplanes in the represented policies. While effective, this method suffers from the growing number of support vectors in the underlying classifiers as the number of training examples increases. In this paper, we propose an alternative method for directed policy search based on relevance vector machines. Relevance vector machines are used both for classification (to represent a policy) and regression (to approximate the corresponding relative action advantage function). Classification is enhanced by anomaly detection for accurate policy representation. Exploiting the internal structure of the regressor, we guide the probing of the state space only to critical areas corresponding to changes of action dominance in the underlying policy. This directed focus on critical parts of the state space iteratively leads to refinement and improvement of the underlying policy and delivers excellent control policies in only a few iterations, while the small number of relevance vectors yields significant computational time savings. We demonstrate the proposed approach and compare it with our previous method on standard reinforcement learning domains (inverted pendulum and mountain car).||Abstract: This article presents a complete formulation of the challenging task of stable humanoid robot omnidirectional walk based on the Cart and Table model for approximating the robot dynamics. 
For the control task, we propose two novel approaches: preview control augmented with the inverse system for negotiating strong disturbances and uneven terrain and linear model-predictive control approximated by an orthonormal basis for computational efficiency coupled with constraints for improved stability. For the generation of smooth feet trajectories, we present a new approach based on rigid body interpolation, enhanced by adaptive step correction. Finally, we present a sensor fusion approach for sensor-based state estimation and an effective solution to sensors' noise, delay, and bias issues, as well as to errors induced by the simplified dynamics and actuation imperfections. Our formulation is applied on a real NAO humanoid robot, where it achieves real-time onboard execution and yields smooth and stable gaits.||Abstract: The traditional approach to computational problem solving is to use one of the available algorithms to obtain solutions for all given instances of a problem. However, typically not all instances are the same, nor a single algorithm performs best on all instances. Our work investigates a more sophisticated approach to problem solving, called Recursive Algorithm Selection, whereby several algorithms for a problem (including some recursive ones) are available to an agent that makes an informed decision on which algorithm to select for handling each sub-instance of a problem at each recursive call made while solving an instance. Reinforcement learning methods are used for learning decision policies that optimize any given performance criterion (time, memory, or a combination thereof) from actual execution and profiling experience. This paper focuses on the well-known problem of state-space heuristic search and combines the A* and RBFS algorithms to yield a hybrid search algorithm, whose decision policy is learned using the Least-Squares Policy Iteration (LSPI) algorithm. 
Our benchmark problem domain involves shortest path finding problems in a real-world dataset encoding the entire street network of the District of Columbia (DC), USA. The derived hybrid algorithm exhibits better performance results than the individual algorithms in the majority of cases according to a variety of performance criteria balancing time and memory. It is noted that the proposed methodology is generic, can be applied to a variety of other problems, and requires no prior knowledge about the individual algorithms used or the properties of the underlying problem instances being solved.|
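Recursive Algorithm Selection, as described in the abstract above, lets a policy pick which algorithm handles each sub-instance at every recursive call. A toy instance of the idea with a fixed, hand-written policy choosing between a quicksort step and insertion sort (the paper instead learns the policy with LSPI and combines A* with RBFS):

```python
def hybrid_sort(xs, policy=lambda xs: "insertion" if len(xs) <= 8 else "quick"):
    """At every recursive call, the policy selects the algorithm
    for the current sub-instance."""
    if policy(xs) == "insertion":
        out = []
        for x in xs:                  # insertion sort for small sub-instances
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out
    pivot = xs[len(xs) // 2]          # one quicksort partition step otherwise
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    more = [x for x in xs if x > pivot]
    return hybrid_sort(less, policy) + equal + hybrid_sort(more, policy)
```

Replacing the fixed size threshold with a learned mapping from sub-instance features to algorithm choice is exactly the step the paper's reinforcement-learning formulation automates.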
|Technical University of Crete (TUC)||School of Electrical and Computer Engineering (ECE)||InteLLigence (Euripides G.M Petrakis' Group)||http://www.intelligence.tuc.gr/~petrakis||Knowledge Representation, Reasoning, and Logic||Konstantinos Stravoskoufos, Euripides G.M. Petrakis, Nikolaos Mainas, Sotirios Batsakis, Vasilis Samoladas, "SOWL QL: Querying Spatio-Temporal Ontologies in OWL", Journal on Data Semantics, 2016 (to appear)||Knowledge Representation, Reasoning, and Logic||Sotirios Batsakis, Euripides G.M. Petrakis, Ilias Tachmazidis, Grigoris Antoniou, “Temporal Representation and Reasoning in OWL 2”, Semantic Web Journal (SWJ), July 2016 (to appear).||Web and Knowledge-based Information Systems||Alexandros Preventis, Euripides G.M. Petrakis, Sotirios Batsakis, "Chronos Ed: A Tool for Handling Temporal Ontologies in Protege", International Journal of Artificial Intelligence Tools (IJAIT), 2014.||1||1||3||3||Yes (This is compulsory.)|
|NCSR Demokritos||Institute of Informatics & Telecommunications, Software & Knowledge Engineering Lab (SKEL)||CER(Complex Event Recognition)||http://cer.iit.demokritos.gr||Knowledge Representation, Reasoning, and Logic||Artikis A., Sergot M. and Paliouras G. An Event Calculus for Event Recognition. IEEE Transactions on Knowledge and Data Engineering (TKDE), 27(4):895-908, 2015.||Machine Learning and Data Mining||Micheloudakis V., Skarlatidis A., Paliouras G. and Artikis A. OSLa: Online Structure Learning using Background Knowledge Axiomatization. Proceedings of European Conference on Machine Learning (ECML), 2016.||Uncertainty in AI||A. Skarlatidis et al. Probabilistic Event Calculus for Event Recognition. ACM Transactions on Computational Logic (TOCL)
Volume 16 Issue 2, March 2015
|NCSR "Demokritos"||Institute of Informatics & Telecommunications, Software & Knowledge Engineering Lab (SKEL)||Data Engineering Group||http://deg.iit.demokritos.gr||Planning and Scheduling||Citation: A. Charalambidis, A. Troumpoukis and S. Konstantopoulos, "SemaGrow: Optimizing federated SPARQL queries". In Proceedings of the 11th International Conference on Semantic Systems (SEMANTiCS 2015), Vienna, 15-17 September 2015. http://dx.doi.org/10.1145/2814864.2814886||Knowledge Representation, Reasoning, and Logic||Citation: K. Zamani, A. Charalambidis, S. Konstantopoulos, N. Zoulis, and E. Mavroudi, "Workload-Aware Self-Tuning Histograms for the Semantic Web". Transactions on Large Scale Data and Knowledge Centered Systems XXVIII, Sep 2016. http://dx.doi.org/10.1007/978-3-662-53455-7_6||Natural Language Processing||Citation: K. Papantoniou and S. Konstantopoulos, "Unravelling names of fictional characters". In Proceedings of the 54th Annual Meeting of the ACL (ACL 2016), Berlin, 7-12 August 2016.||1||2||2||0||Yes|
|Abstract: Processing SPARQL queries involves the construction of an efficient query plan to guide query execution. Alternative plans can vary in the resources and the amount of time that they need by orders of magnitude, making planning crucial for efficiency. On the other hand, the construction of optimal plans can become computationally intensive and it also operates upon detailed, difficult to obtain, metadata. In this paper we present Semagrow, a federated SPARQL querying system that uses metadata about the federated data sources in order to optimize query execution. We balance between a query optimizer that introduces little overhead, has appropriate fall backs in the absence of metadata, but at the same time produces optimal plans in as many situations as possible. Semagrow also exploits non-blocking and asynchronous stream processing technologies to achieve query execution efficiency and robustness. We also present and analyse empirical results using the FedBench benchmark to compare Semagrow against FedX and SPLENDID. Semagrow clearly outperforms SPLENDID and it is either on a par or much faster than FedX.||Abstract: Query processing systems typically rely on histograms, data structures that approximate data distribution, in order to optimize query execution. Histograms can be constructed by scanning the database tables and aggregating the values of the attributes in the table, or, more efficiently, progressively refined by analysing query results. Most of the relevant literature focuses on histograms of numerical data, exploiting the natural concept of a numerical range as an estimator of the volume of data that falls within the range. This, however, leaves Semantic Web data outside the scope of the histograms literature, as its most prominent datatype, the URI, does not offer itself to defining such ranges. 
This article first establishes a framework that formalises histograms over arbitrary data types and provides a formalism for specifying value ranges for different datatypes. This makes explicit the properties that ranges are required to have, so that histogram refinement algorithms are applicable. We demonstrate that our framework subsumes histograms over numerical data as a special case by using to formulate the state-of-the-art in numerical histograms. We then proceed to use the Jaro-Winkler metric to define URI ranges by exploiting the hierarchical nature of URI strings. This greatly extends the state of the art, where strings are treated as categorical data that can only be described by enumeration. We then present the open-source STRHist system that implements these ideas. We finally present empirical evaluation results using STRHist over a real dataset and query workload extracted from AGRIS, the most popular and widely used bibliographic database on agricultural research and technology.||Abstract: In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with
different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.|
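The histograms article above uses the Jaro-Winkler metric to define ranges over URI strings, exploiting their hierarchical prefixes. A plain-Python implementation of the standard metric (independent of the STRHist code; the URI examples are invented):

```python
def jaro(s, t):
    """Jaro similarity: matches within a sliding window, penalized by transpositions."""
    if s == t:
        return 1.0
    ls, lt = len(s), len(t)
    window = max(ls, lt) // 2 - 1
    s_m, t_m = [False] * ls, [False] * lt
    matches = 0
    for i, c in enumerate(s):
        for j in range(max(0, i - window), min(lt, i + window + 1)):
            if not t_m[j] and t[j] == c:
                s_m[i] = t_m[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    k = trans = 0
    for i in range(ls):
        if s_m[i]:
            while not t_m[k]:
                k += 1
            if s[i] != t[k]:
                trans += 1
            k += 1
    trans //= 2
    return (matches / ls + matches / lt + (matches - trans) / matches) / 3

def jaro_winkler(s, t, p=0.1):
    """Boost the Jaro score by up to 4 characters of common prefix."""
    j = jaro(s, t)
    prefix = 0
    for a, b in zip(s, t):
        if a == b and prefix < 4:
            prefix += 1
        else:
            break
    return j + prefix * p * (1 - j)
```

The prefix boost is what makes the metric suitable for URIs: two URIs under the same namespace score higher than URIs from different hosts, so contiguous "ranges" of similar URIs can be carved out where plain categorical treatment would only allow enumeration.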
|NCSR "Demokritos"||Institute of Informatics & Telecommunications, Software & Knowledge Engineering Lab (SKEL)||Roboskel||http://roboskel.iit.demokritos.gr||Other (Specify in the cell below)||Citation: M. Dagioglou, A. Lydakis, F. Kirstein, S. Dogruoz, S. Konstantopoulos, "Interacting with and via mobile devices and mobile robots in an assisted living setting". EAI Endorsed Transactions on Pervasive Health and Technology 1(1), May 2015. http://dx.doi.org/10.4108/phat.1.1.e3||Planning and Scheduling||Citation: Charalambos Rossides and Stasinos Konstantopoulos, "Simultaneous Localisation and Mapping to Reach Linguistically-Defined Targets." In Proceedings of the 9th Hellenic Conference on Artificial Intelligence (SETN 2016), Thessaloniki, 28-20 May 2016. http://dx.doi.org/10.1145/2903220.2903232||Robotics, Perception and Vision||Citation: T. Varvadoukas, I. Giotis, S. Konstantopoulos, "Detecting Human Patterns in Laser Range Data". In Proceedings of the 20th European Conference on AI (ECAI 2012). Montpellier, France, 27-31 August 2012. http://dx.doi.org/10.3233/978-1-61499-098-7-804||1||2||2||0||Yes|
|Human-robot interaction||Abstract: Using robotic home assistants as a platform for remote health monitoring offers several advantages, but also presents considerable challenges related both to the technical immaturity of home robotics and to user acceptance issues. In this paper we explore tablets and similar mobile devices as the medium of communication between robots and their users, presenting relevant current and planned research in human-robot interaction that can help the telehealth community circumvent technical shortcomings, improve user acceptance, and maximize the quality of the data collected by robotic home assistants.||Abstract: This paper introduces a framework that allows humans to give highly abstract navigation instructions to mobile robots. It uses simultaneous localisation and mapping (SLAM) to navigate a mobile robot through an unknown environment and combines it with a structured, pseudo-human language for describing the navigation instructions. This combination allows for the design of mobile robot navigation systems that may use any SLAM algorithm and any language that satisfies certain requirements. The described framework delineates the requirements that the SLAM system and the language must meet for this combination to work. It operates on the basis of having the robot carry out a task other than creating a map, so the robot can start moving towards its target before even mapping the environment. The core innovation is the definition of a method for driving robot movement in a way that balances between reducing map uncertainty and reaching the requested target.
Our method generalises the state of the art in that it falls back to conventional explore-then-navigate paths in the absence of a specific target, but focuses on following the navigation instructions when they are provided.||Abstract: In this paper we present a novel method for detecting humans from laser range scans, where the core idea is to treat neither individual frames, which hold too little information for the task, nor motion patterns, as tracking methods do. Rather, we map short time series of planar scans to 3D objects with time as the depth dimension; we then cluster and classify these 3D objects using unsupervised and off-line training, circumventing the need for predefining and parametrizing motion models.|
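The SLAM abstract above describes driving robot movement by balancing map-uncertainty reduction against progress toward the target. The paper's actual method is not reproduced here; purely as a generic illustration of such a trade-off, a weighted-sum waypoint scorer might look like the following (the function names, the candidate format, and the `alpha` parameter are illustrative assumptions, not the authors' design).

```python
def waypoint_utility(info_gain, dist_to_target, alpha=0.5):
    """Score a candidate waypoint by trading off expected map-uncertainty
    reduction (info_gain, higher is better) against proximity to the
    target (dist_to_target, lower is better).

    alpha=1.0 reduces to pure exploration (the conventional
    explore-then-navigate behaviour when no target is given);
    alpha=0.0 reduces to pure goal-seeking."""
    return alpha * info_gain + (1 - alpha) * (1 / (1 + dist_to_target))


def pick_waypoint(candidates, alpha=0.5):
    """candidates: list of (name, info_gain, dist_to_target) tuples.
    Returns the name of the highest-utility candidate."""
    return max(candidates, key=lambda c: waypoint_utility(c[1], c[2], alpha))[0]
```

With `alpha=1.0` the scorer sends the robot to the most informative frontier; with `alpha=0.0` it heads straight for the nearest point to the target, matching the abstract's description of falling back to exploration only when no target is specified.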
|NCSR “Demokritos”||Institute of Informatics & Telecommunications, Software & Knowledge Engineering Lab (SKEL)||Content Analysis and Knowledge Technologies Group (CAKT)||https://www.iit.demokritos.gr/group/cakt||Web and Knowledge-based Information Systems||Papadakis, George, George Giannakopoulos, and Georgios Paliouras, 2015. "Graph vs. Bag Representation Models for the Topic Classification of Web Documents". World Wide Web: 1–34.||Machine Learning and Data Mining||Bougiatiotis, K. & Krithara, A., "Author Profiling using Complementary Second Order Attributes and Stylometric Features. Notebook for PAN at CLEF 2016", 7th Conference & Labs of the Evaluation Forum, Colégio do Espírito Santo, University of Évora, Evora, Portugal, 05/09/2016||Natural Language Processing||Petasis, G. & Karkaletsis, V., "Identifying Argument Components through TextRank", Association for Computational Linguistics, Humboldt University, Berlin, Germany, 07/08/2016||2||3||3||6||Yes|
|NCSR “Demokritos”||Institute of Informatics & Telecommunications, Software & Knowledge Engineering Lab (SKEL)||Personalisation and Social Network Analysis group (PERSONA)||http://persona.iit.demokritos.gr/||Personalisation of information||George Paliouras, Symeon Papadopoulos, Dimitrios Vogiatzis, Y. Kompatsiaris, 2015. User Community Discovery. Springer||Social Network Analysis||G. Katsimpras, D. Vogiatzis, and G. Paliouras, 2015. "Determining Influential Users with Supervised Random Walks", 24th International World Wide Web Conference (6th International Workshop on Modeling Social Media).||1||3||3||Yes|
|Topics (DO NOT EDIT)|
|Autonomous Agents and Multi-agent Systems|
|Constraints, Satisfiability, and Search|
|Knowledge Representation, Reasoning, and Logic|
|Machine Learning and Data Mining|
|Natural Language Processing|
|Planning and Scheduling|
|Robotics, Perception and Vision|
|Uncertainty in AI|
|Web and Knowledge-based Information Systems|
|Cognitive Modeling and Cognitive Architectures|
|Agent-based and integrated systems|
|Other (Specify in the cell below)|