KEOD 2015 Abstracts


Full Papers
Paper Nr: 12
Title:

Visual Syntax of UML Class and Package Diagram Constructs as an Ontology

Authors:

Anitta Thomas, Aurona J. Gerber and Alta van der Merwe

Abstract: Diagrams are often studied as visual languages with an abstract and a concrete syntax (the concrete syntax is often referred to as visual syntax), where the latter contains the visual representations of the concepts in the former. A formal specification of the concrete syntax is useful in diagram processing applications as well as for achieving an unambiguous understanding of diagrams. The Unified Modeling Language (UML) is a commonly used modeling language whose diagrams represent software models; class and package diagrams are two of its diagrams. The motivation for this work is twofold: UML lacks a formal visual syntax specification, and ontologies are under-explored for visual syntax specifications. The work in this paper therefore explores using ontologies for visual syntax specifications by specifying the visual syntax of a set of UML class and package diagram constructs as an ontology in the Web Ontology Language, OWL. The reasoning features of ontology reasoners are then used to verify the visual syntax specification. Besides formally encoding the visual syntax of numerous UML constructs, the work also demonstrates the general value of using OWL for visual syntax specifications.

Paper Nr: 16
Title:

Exploiting Visual Similarities for Ontology Alignment

Authors:

Charalampos Doulaverakis, Stefanos Vrochidis and Ioannis Kompatsiaris

Abstract: Ontology alignment is the process whereby two different ontologies that usually describe similar domains are ’aligned’, i.e. a set of correspondences between their entities, regarding semantic equivalence, is determined. In order to identify these correspondences, several methods and metrics that measure semantic equivalence have been proposed in the literature. The most common features that these metrics employ are string-, lexical-, structure- and semantic-based similarities, for which several approaches have been developed. However, what has not been investigated is the use of visual features for determining entity similarity in cases where images are associated with concepts. Nowadays, the existence of several resources (e.g. ImageNet) that map lexical concepts onto images allows visual similarities to be exploited for this purpose. In this paper, a novel approach for ontology matching based on visual similarity is presented. Each ontological entity is associated with sets of images, retrieved through ImageNet or web-based search, and state-of-the-art visual feature extraction, clustering and indexing are employed to compute the similarity between entities. An adaptation of a popular WordNet-based matching algorithm to exploit the visual similarity is also proposed. Our method is compared with traditional metrics on a standard ontology alignment benchmark dataset and demonstrates promising results.

Paper Nr: 28
Title:

A General Schema for Solving Model-Intersection Problems on a Specialization System by Equivalent Transformation

Authors:

Kiyoshi Akama and Ekawit Nantajeewarawat

Abstract: A model-intersection problem (MI problem) is a pair of a set of clauses and an exit mapping. We define MI problems on specialization systems, which include many useful classes of logical problems, such as proof problems on first-order logic and query-answering (QA) problems in pure Prolog and deductive databases. The theory presented in this paper clarifies the central and fundamental structure of representation and computation for many classes of logical problems by means of (i) axiomatization and (ii) equivalent transformation. Clauses in this theory are constructed from abstract atoms and abstract operations on them, which can be used to represent many specific subclasses of problems with concrete syntax. Various computations can be realized by the repeated application of equivalent transformation rules, allowing many possible computation procedures, for instance, procedures based on resolution and unfolding. This theory can also be useful for inventing solutions for new classes of logical problems.

Paper Nr: 30
Title:

Linguistic Modifiers with Unbalanced Term Sets in Multi-valued Logic

Authors:

Nouha Chaoued, Amel Borgi and Anne Laurent

Abstract: The modeling of human knowledge by machines should be as faithful to reality as possible. It is therefore imperative to take account of the inaccuracies and uncertainties in this knowledge. This problem has been dealt with through different approaches, the most common being fuzzy logic and multi-valued logic. Both logics propose a modeling based on linguistic terms. Generally, problems modeling a qualitative aspect use linguistic variables assessed with linguistic terms that are uniformly distributed on the scale. However, in many cases, linguistic information needs to be defined by unbalanced term sets, whose terms are not uniformly and/or not symmetrically distributed. The literature shows that many researchers have dealt with such term sets in the context of fuzzy logic. In our work, we therefore introduce a new approach to represent and treat such term sets in the context of multi-valued logic. First, we propose an algorithm for representing terms within an unbalanced set. Then, we describe a second algorithm that permits the use of linguistic modifiers within unbalanced multi-sets.

Paper Nr: 34
Title:

Predicting the Empirical Robustness of the Ontology Reasoners based on Machine Learning Techniques

Authors:

Nourhène Alaya, Sadok Ben Yahia and Myriam Lamolle

Abstract: Reasoning with ontologies is one of the core tasks of research in Description Logics. A variety of reasoners with highly optimized algorithms have been developed to allow inference tasks on expressive ontology languages such as OWL (DL). However, unexpected behaviours of reasoner engines are often observed in practice. Both reasoner time efficiency and result correctness vary across input ontologies, in ways that are hardly predictable even for experienced reasoner designers. Seeking a better understanding of reasoners’ empirical behaviour, we propose to use supervised machine learning techniques to automatically predict reasoner robustness from its previous runs. For this purpose, we introduce a comprehensive set of ontology features. We conducted a large body of experiments covering 6 well-known reasoners and over 1000 ontologies from the ORE’2014 corpus. Our learning results show that we can build highly accurate predictive models of reasoner robustness. Moreover, by interpreting these models, it is possible to gain insights into the particular ontology features likely to degrade reasoner robustness.

Paper Nr: 37
Title:

An Approach to Refine Translation Candidates for Emotion Estimation in Japanese-English Language

Authors:

Kazuyuki Matsumoto, Minoru Yoshida, Kenji Kita and Fuji Ren

Abstract: Research on emotion estimation from text mostly uses machine learning methods. Because machine learning requires a large amount of example corpora, how to acquire high-quality training data has been discussed as one of its major problems. Existing language resources include emotion corpora; however, they are not usable if the language is different, and constructing a bilingual corpus manually is also financially difficult. We propose a method to convert training data into a different language using an existing Japanese-English parallel emotion corpus. With a bilingual dictionary, translation candidates are extracted for every word of each sentence included in the corpus. The extracted translation candidates are then narrowed down to a set of words that contribute strongly to emotion estimation, and we use this set of words as training data. In an evaluation experiment using the training data created by the proposed method, the accuracy of emotion estimation increased to 66.7% with Naive Bayes.

Paper Nr: 49
Title:

Time Evolving Expert Systems Design and Implementation: The KAFKA Approach

Authors:

Fabio Sartori and Riccardo Melen

Abstract: Expert system design and implementation has always been conceived as a centralized activity, characterized by the relationship between users, domain experts and knowledge engineers. The growing diffusion of sophisticated PDAs and mobile operating systems opens up new and dynamic application environments and requires this assumption to be rethought. New frameworks for expert system design, in particular for rule-based systems, should be developed to allow users and domain experts to interact directly, minimizing the role of the knowledge engineer and promoting the real-time updating of knowledge bases when needed. This paper presents the KAFKA approach to this challenge, based on an implementation of the Knowledge Artifact conceptual model supported by Android OS devices.

Paper Nr: 50
Title:

A Conceptual Model of the Research Methodology Domain - With a Focus on Computing Fields of Study

Authors:

Colin Pilkington and Laurette Pretorius

Abstract: Given the need for the development of research capacity and changing learning paradigms that include online and collaborative approaches, an ontology of research methodology needs to be developed to allow for the shared creation of knowledge in this domain. An ontology engineering approach is followed in developing a conceptual model of the domain using UML, with a focus on studies in the computing disciplines. A research scheme made up of a philosophical world view, a research design, and research methods is proposed. Appropriate relations between these are identified, as well as attributes of the various concepts in the conceptual model. A focus group consisting of senior researchers in the field of computing was utilised to validate the model.

Paper Nr: 58
Title:

A Formal Modeling Method to Enrich the Arabic Treebank ATB with Syntactic Properties

Authors:

Raja Bensalem Bahloul, Kais Haddar and Philippe Blache

Abstract: Enriching an Arabic treebank with syntactic properties can facilitate many types of parsing processes. This enrichment also increases the treebank's usefulness in different NLP applications, enables the acquisition of new linguistic resources, and eases probabilistic parsing by using statistics to restrict the properties to those that are satisfied or most frequent. In this context, our proposed enrichment method is based on a formalization phase, a Property Grammar induction phase from a source treebank, and a treebank regeneration phase with a new syntactic property-based representation. Starting with a formalization phase may ease the resolution of the enrichment problem: it limits the specification of the data sets, and the interactions between them, to those actually used, which avoids duplication. The formalization also makes it possible to anticipate the constraints to be respected in the problem. The implementation of this enrichment method was evaluated mainly on the Arabic treebank ATB. This experiment provides encouraging results and various properties of different types.

Paper Nr: 64
Title:

Open Data for Improving Youth Policies

Authors:

Domenico Beneventano, Sonia Bergamaschi, Luca Gagliardelli and Laura Po

Abstract: The Open Data philosophy is based on the idea that certain data should be made available to all citizens, in an open form, without any copyright restrictions, patents or other mechanisms of control. Various governments have started to publish open data, first of all the USA and the UK in 2009; in 2015, the Open Data Barometer project (www.opendatabarometer.org) reported that, of 77 diverse states across the world, over 55 percent had developed some form of Open Government Data initiative. We claim that Public Administrations, which are the main producers and among the main consumers of Open Data, can effectively extract important information by integrating their own data with open data sources. This paper reports the activities carried out during a one-year research project on Open Data for Youth Policies. The project was mainly devoted to exploring the youth situation in the municipalities and provinces of the Emilia Romagna region (Italy), in particular examining data on population, education and work. The project goals were: to identify interesting data sources related to youth policies, both from the open data community and from the private repositories of local governments of the Emilia Romagna region; to integrate them and to present the result of the integration by means of a navigator tool; and, finally, to publish new information on the web as Linked Open Data. This paper also reports the main issues encountered, which may seriously affect the entire process from the consumption and integration of open data to its publication.

Short Papers
Paper Nr: 2
Title:

An Automated Semantic Annotation Tool Supported by an Ontology in the Computer Science Domain

Authors:

Rodrigo Espinoza and Andrés Melgar

Abstract: The annotation of documents can be performed manually, semi-assisted or fully automatically, and it can draw on different knowledge resources, such as a set of rules or an ontology. In this paper, we present the design of a semantic annotation tool that works automatically in order to efficiently manage computer-science-related academic documents in Spanish produced at the university. The tool uses ontology annotations to provide a corpus of documents with the attributes necessary for them to be managed by other tools that consume annotations, such as search engines or indexers. This is done by relating the concepts found in the documents to concepts in the ontology through semantic and syntactic comparisons, using open-source tools for natural language processing and knowledge management.

Paper Nr: 3
Title:

The Web Integration & Interoperability Layer (WIIL) - Turning Web Content into Learning Content using a Lightweight Integration and Interoperability Technique

Authors:

Sokratis Karkalas, Manolis Mavrikis and Patricia Charlton

Abstract: This paper presents a technique that enables integration and interoperability of web components with learning platforms. This technique is proposed as a lightweight alternative to IMS LTI and OpenAjax and is especially suited to simple client-side widgets that have no back-end dependencies and potential security risks. The technique has already been used successfully in an experimental learning platform to provide data generated by various heterogeneous components for intelligent support and learning analytics.

Paper Nr: 20
Title:

Mapping Ontology with Probabilistic Relational Models - An Application to Transformation Processes

Authors:

Cristina Manfredotti, Cedric Baudrit, Juliette Dibie-Barthélemy and Pierre-Henri Wuillemin

Abstract: Motivated by the necessity of reasoning about transformation experiments and their results, we propose a mapping between an ontology representing transformation processes and probabilistic relational models. The latter extend Bayesian networks with the notions of class and relation from relational databases and are, for this reason, well suited to representing the concepts and properties of ontologies. To ease the presentation, we use a cooking recipe as an example of a transformation process and present our approach for an ontology in the cooking domain that extends the Suggested Upper Merged Ontology (SUMO).

Paper Nr: 29
Title:

Function-variable Elimination and Its Limitations

Authors:

Kiyoshi Akama and Ekawit Nantajeewarawat

Abstract: The well-known proof method based on conventional Skolemization and resolution has a serious limitation: it does not guarantee the correctness of proving theorems in the presence of built-in constraints. In order to understand this difficulty, we use meaning-preserving Skolemization (MPS) and equivalent transformation (ET), which together provide a general framework for solving query-answering (QA) problems on first-order logic. We introduce a rule for function-variable elimination (FVE), by which we regard conventional Skolemization as a kind of composition of MPS and FVE. We prove that the FVE rule preserves the answers to a class of QA problems consisting only of user-defined atoms, whereas this cannot be proved in the presence of built-in constraints. By avoiding the application of the FVE rule in MPS & ET computation, we obtain a more general solution for proof problems, which guarantees the correctness of computation even in the presence of built-in constraints.

Paper Nr: 38
Title:

Finding the Key Concepts of Students’ Knowledge - A Network Analysis of Coherence and Contingency of Knowledge Structures

Authors:

Ismo T. Koponen and Maija Nousiainen

Abstract: The desired outcome of learning science is students’ expert-like subject knowledge, which is expected to be at the same time well-organized, coherent and contingent. However, it has proved difficult to find ways to represent these features and to identify the key conceptual elements or concepts that are responsible for them. In this study, concept networks constructed by physics students to represent their views of the relatedness of physics concepts are analyzed in order to clarify how coherence and contingency can be captured and measured. The data consist of concept networks (N=12) constructed by physics students, representing relationships between physics concepts of electricity and magnetism. The networks are first analyzed qualitatively for their epistemic acceptability. The structure of the concept networks is then analyzed quantitatively using a network graph theoretical approach. The analysis picks out a handful of key concepts, all of which play a central role in all of the concept networks examined. From the physics point of view these key concepts are relevant ones (most of them having to do with fields), which indicates the relevance and power of the method in describing knowledge structures.

Paper Nr: 39
Title:

Towards Ontology Exploration based on Path Structure Richness

Authors:

Ondřej Zamazal

Abstract: This paper presents an approach to ontology exploration based on path structure richness. We focus on global richness as a way of characterizing the path richness of an ontology, in addition to using local richness to locate typical rich path structures in a given ontology. Ontology exploration is performed by extracting the shortest paths as a simplified ontology excerpt or summary. The proposed path structure richness metrics are based on shortest paths, their relationship diversity and their occurrences. We describe our general motivation, basic concepts, preliminary experimentation and future work on ontology exploration based on path structure richness.

Paper Nr: 41
Title:

Application of an Automatic Data Alignment & Structuring System for Intercultural Consumer Segmentation Analysis

Authors:

Fumiko Kano Glückstad

Abstract: This position paper introduces a conceptual framework for our ambitious international research project, whose aim is the extraction and alignment of heterogeneous consumer segment structures across a multiplicity of markets and cultures. We argue that an automatic data alignment and structuring system employing non-parametric Bayesian relational modelling is an ideal approach for addressing the challenges of conventional cross-cultural data analysis. The paper presents an example of our preliminary work that applies this approach to the analysis of opinion survey responses given by male populations in Sweden and Japan. The framework successfully extracts groups of males from the two selected countries who express similar but also dissimilar response patterns. Based on these preliminary studies, the paper discusses potential contributions and future challenges of the international consumer analysis project.

Paper Nr: 42
Title:

Semantic Spatial Reasoning - Developing a Conceptual Framework for Reasoning with Semantic, Qualitative-Quantitative Spatial Information

Authors:

Roman Katerinenko

Abstract: In recent years, significant achievements have been made in handling qualitative spatial relations in the field of qualitative spatial reasoning. These achievements can be utilized to bridge the gap between the geometries and the semantics of real-world objects. To this end, we introduce the Semantic Spatial Reasoning conceptual framework for reasoning with information of mixed types: qualitative-quantitative spatial information and information described with Semantic Web technologies. The objective of this framework is not to be a particular reasoning algorithm but a conceptual decomposition suitable for showing the benefits of the combined reasoning approach, forecasting practical applications and giving a clue for an implementation. The modular structure of the framework makes it suitable for modelling various tasks in areas such as GIS, cognitive vision, computer-aided design and data integration.

Paper Nr: 44
Title:

LODLI: Linked Open Data Back-End System for Libraries Interoperability

Authors:

Miriam Allalouf, Alex Artyomov and Dalia Mendelsson

Abstract: Linked Data principles offer significant advantages over current practices when publishing data. Linked Data allows library interoperability by linking to data from other organizations with authoritative data, which enriches library catalog-user search results. This paper describes LODLI, a Linked Open Data back-end system that we designed and developed to enhance library catalog searches. We integrated our system with the Hebrew University library catalog, HUfind. While our platform can be used as is, it can also be customized by Linked Open Data providers that wish to convert their MARC records into Linked Data information library systems, making their data far more accessible. This research project faced the following challenges: finding the most efficient way to translate binary MARC into MARC records; mapping the MARC records into a variety of information models, such as Dublin Core, FRBR, RDA, OWL and FOAF, while selecting the most appropriate MARC field combinations; and providing links to resources in external datasets using a distance algorithm to identify string similarity. LODLI is a generic system to which additional ontologies can easily be added. We have demonstrated the system with two types of clients: a FRBR visualization client and a VIAF-extension client.

Paper Nr: 52
Title:

Position Paper: Ontology in the Rail Domain - The Railway Core Ontologies

Authors:

Christopher Morris, John Easton and Clive Roberts

Abstract: This paper presents the railway core ontologies, a group of related ontologies designed to model the rail domain in detail. The purpose of these ontologies is to enable improved data integration in the rail domain, which will deliver business benefits in the form of improved customer perceptions and more efficient use of the rail network. The modularity of the ontologies allows both detailed modelling of the domain at a high level and the storing of instance data at lower levels. The paper concludes that the benefits of improved rail data integration are best realised through the use of the railway core ontologies.

Paper Nr: 55
Title:

Knowledge-based Engineering of Automation Systems using Ontologies and Engineering Data

Authors:

Matthias Glawe, Christopher Tebbe, Alexander Fay and Karl-Heinz Niemann

Abstract: Ontologies provide an effective way of describing and using the knowledge of a specific domain. In engineering workflows, the reusability and quick adoption of knowledge are needed to solve several tasks efficiently. Engineering data, however, is mostly structured in hierarchical documents and exchange formats and is not represented in ontologies. A connection between engineering data and the knowledge in ontologies is therefore needed. In this article we present a bridge concept for connecting engineering data with an OWL-based ontology. For this we use an example ontology containing security knowledge of automation systems.

Paper Nr: 57
Title:

Identifying Pairs of Terms with Strong Semantic Connections in a Textbook Index

Authors:

James Geller, Shmuel T. Klein and Yuriy Polyakov

Abstract: Semantic relationships are important components of ontologies. Specifying these relationships is work-intensive and error-prone when done by experts. Discovering domain concepts and strongly related pairs of concepts in a completely automated way from English text is an unresolved problem. This paper uses index terms from a textbook as domain concepts and suggests pairs of concepts that are likely to be connected by strong semantic relationships. Two textbooks on Cyber Security were used as testbeds. To show the generality of the approach, the index terms from one of the books were used to generate suggestions for where to place semantic relationships using the bodies of both textbooks. A good overlap was found.

Paper Nr: 60
Title:

Debate Formed by Internet Comments - Towards the Automatic Analysis

Authors:

Mare Koit

Abstract: With the increasing role of online media in human communication, it becomes necessary to perform automatic analysis of online texts. In this paper, we study dialogues formed by opinion articles and their comments on the Internet. Such a dialogue can be considered as a debate between two teams. One team comprises the commentators with positive comments about the initial opinion and the other those with negative comments, i.e. the commentators who, respectively, support or reject the opinion presented in the source text. The members of both teams can take the floor at any time, which differs from conventional spoken debate. Internet users who spontaneously give marks of +1 or -1 to the comments act as a board of ‘judges’. The winner is the team with the larger total sum of marks. For every comment, we also assign a point in a mental space which we call communicative space. The values +1, 0 or -1 of the coordinates of communicative space make it possible to classify the comments not only as positive and negative but also as polite and impolite, friendly and hostile, etc. The set of comments forms a collective opinion about the main agent of the source text, which introduces a social aspect into the text analysis. The further aim of this preliminary study is the automatic analysis of such debates.

Paper Nr: 61
Title:

Towards Vocabulary Development by Convention

Authors:

Irlán Grangel-González, Lavdim Halilaj, Gökhan Coskun and Sören Auer

Abstract: A major bottleneck for a wider deployment and use of ontologies and knowledge engineering techniques is the lack of established conventions, along with cumbersome and inefficient support for vocabulary and ontology authoring. We argue that the pragmatic development-by-convention paradigm, well accepted within software engineering, can be successfully applied to ontology engineering, too. However, the definition of a valid set of conventions requires broadly accepted best practices. In this regard, we empirically analyzed a number of popular vocabularies and ontology development efforts with respect to their use of guidelines and common practices. Based on this analysis, we identified the following main aspects of common practices: documentation, internationalization, naming, structure, reuse, validation and authoring. In this paper, these aspects are presented and discussed in detail. We propose a set of practices for each aspect and evaluate their relevance in a study with vocabulary developers. The overall goal is to pave the way for a new paradigm of vocabulary development similar to Software Development by Convention, which we name Vocabulary Development by Convention.

Paper Nr: 63
Title:

Domain Ontology to Support Open Data Analytics for Aquaculture

Authors:

Pedro Oliveira, Ruben Costa, José Lima, João Sarraipa and Ricardo Jardim-Gonçalves

Abstract: The aquaculture industry, which comprises mainly SMEs, represents a significant source of protein for people. From an IT perspective, aquaculture is characterized by high volumes of heterogeneous data and by a lack of interoperability both within and between organisations. Each organization uses different data representations, relying on its native language and legacy classification systems to manage and organize information. This lack of semantic interoperability can be minimized if innovative semantic techniques for representing, indexing and searching sources of non-structured information are applied. The work presented here describes the achievements of the AQUASMART EU project, which aims to accelerate innovation in Europe’s aquaculture through technology transfer, deploying an open data solution with multilingual data collection and analytics services that turns the large volumes of heterogeneous aquaculture data distributed across the value chain into an open cloud of semantically interoperable data assets and knowledge. The results achieved so far do not constitute the final conclusions of the project, but they form the basis for the formalization of the AQUASMART semantic referential.

Paper Nr: 65
Title:

Towards Constructive Abduction - Solving Abductive Problems with Constraint Programming

Authors:

Antoni Ligęza

Abstract: Abduction can be considered as a principal way of reasoning for problem solving. Abductive inference consists in the generation of hypotheses which explain, or logically imply, the phenomenon under investigation in view of the accessible background knowledge, and which are consistent with all other observations. Looking for such hypotheses is typically performed with a spectrum of trial-and-error or search methods and tools. In the case of purely logical statements, the hypotheses take the form of a set of facts, both positive and negative. For example, in the case of model-based diagnostic reasoning, diagnostic hypotheses can be generated by consistency-based reasoning with minimal search effort. In more complex cases, where the values of certain variables are to be found, pure backtracking search becomes inefficient. In this paper we attempt to put such abductive inference into the formal framework of Constraint Programming in order to enable the use of constraint propagation techniques. The main idea behind this approach is to make abduction more constructive. The discussion is illustrated with a diagnostic example of a multiplier-adder system.

Paper Nr: 68
Title:

A View on Advanced Standby Control in Industry from a Knowledge Engineering Perspective

Authors:

Andreas W. Mueller and Joern Peschke

Abstract: In improving the energy efficiency of industrial plants, advanced standby control represents a strategic instrument, subject to various influences and with impacts on models and software tools along the entire plant lifecycle. In order to effectively switch plants and components to different levels of energy consumption, the required knowledge needs to be properly engineered and applied. This paper presents an overview of this knowledge and its interrelations in the industrial domain, with a special focus on the requirements for the automated generation of energy state models and switching paths. For these tasks, integrative approaches are proposed.

Paper Nr: 71
Title:

Opinion Extraction from Editorial Articles based on Context Information

Authors:

Yoshimi Suzuki and Fumiyo Fukumoto

Abstract: Opinion extraction supports various tasks such as sentiment analysis in user reviews for recommendations and editorial summarization. In this paper, we address the problem of opinion extraction from newspaper editorials. To extract the author's opinion, we use context information in addition to the features within a single sentence. The context information consists of the location of the target sentence and its preceding and succeeding sentences. We define the opinion extraction task as a sequence labeling problem, using conditional random fields (CRF). We used Japanese newspaper editorials in the experiments and tested multiple combinations of CRF features to reveal which features are effective for opinion extraction. The experimental results show the effectiveness of the method; in particular, predicate expressions, sentence location, and the preceding sentence are effective for opinion extraction.

Paper Nr: 73
Title:

KREM: A Generic Knowledge-based Framework for Problem Solving in Engineering - Proposal and Case Studies

Authors:

Cecilia Zanni-Merk

Abstract: This article presents a generic knowledge-based framework for problem solving in Engineering, in a broad sense. After a discussion of the drawbacks of the traditional architecture used for deploying knowledge-based systems (KBS), the KREM (Knowledge, Rules, Experience, Meta-Knowledge) architecture is presented. The novelty of the proposal comes from the inclusion of experience capitalization and of meta-knowledge use in the previously discussed traditional architecture. KREM improves the efficiency of classic KBSs, as it makes it possible to deal with incomplete expert knowledge models by progressively completing them, learning with experience. Also, the use of meta-knowledge can steer their execution more efficiently. This framework has been successfully used in different projects. Here, the architecture of the KREM model is presented along with some implementation issues, and three case studies are discussed.

Paper Nr: 74
Title:

An Ontology-based Collaboration Recommender System using Patents

Authors:

Sandra Geisler, Rihan Hai and Christoph Quix

Abstract: Successful research and development projects start with finding the right partners for the venture. Especially for interdisciplinary projects, this is a difficult task as experts from foreign domains are not known. Furthermore, the transfer of knowledge from research into practice is becoming more important in research projects to enable the quick application of research results. This is in particular relevant for projects in medical engineering. Patents and publications contain technical knowledge which can be exploited to find suitable experts. Patents are usually more product-oriented as the inventors have to describe an application area and products might be protected by patents. On the other hand, scientific publications represent the state-of-the-art in research. The challenge is finding the right mixture of research- or application-oriented experts from different domains. Hence, we propose a recommender system for experts for a certain topic based on patent topic clustering, ontologies, and ontology matching, which maps patents to corresponding innovation fields. The medical engineering domain serves as a first test bed, since projects in this area are highly interdisciplinary.

Paper Nr: 77
Title:

ArabRelat: Arabic Relation Extraction using Distant Supervision

Authors:

Reham Mohamed, Nagwa M. El-Makky and Khaled Nagi

Abstract: Relation Extraction is an important preprocessing task for a number of text mining applications, including Information Retrieval, Question Answering, and Ontology building, among others. In this paper, we propose a novel Arabic relation extraction method that leverages linguistic features of the Arabic language in Web data to infer relations between entities. Due to the lack of labeled Arabic corpora, we adopt the idea of distant supervision, where DBpedia, a large database of semantic relations extracted from Wikipedia, is used along with a large unlabeled text corpus to build the training data. We extract the sentences from the unlabeled text corpus and tag them using the corresponding DBpedia relations. Finally, we build a relation classifier on this data, which predicts the relation type of new instances. Our experimental results show that the system achieves an F-measure of 70% in detecting relations.

Paper Nr: 85
Title:

Facilitating Ontology Co-evolution with Ontology Instance Migration

Authors:

Mark Fischer and Juergen Dingle

Abstract: An ontology is typically defined in terms of some vocabulary that is defined using other ontologies. When the vocabulary changes, it is important for a dependent ontology to evolve in a consistent manner. Automating the migration of ontologies based on changes to their vocabulary is requisite to proper adoption of ontologies for varying fields of use. Oital is a transformation language capable of automatically migrating dependent ontologies. It helps make ontologies easier to maintain as they evolve.

Paper Nr: 86
Title:

Innovations in Organisational Knowledge Management - Typology, Methodology and Recommendations

Authors:

Tatiana Gavrilova, Dmitry Kudryavtsev and Anna Menshikova

Abstract: The INNOVARRA project is focused on the research and development of new models and methods of knowledge management in enterprises. The project aims to identify and develop knowledge management methods and tools that are most appropriate for a particular knowledge type and company domain, and that have the greatest impact on the final results of Russian companies. Special attention is paid to the development of a knowledge typology, which helps to differentiate and select knowledge management tools and methods. The research methodology is interdisciplinary and includes both behaviourist methods of empirical study (surveys, statistical analysis) and design-oriented methods such as ontology engineering, system analysis and enterprise architecture management.

Paper Nr: 87
Title:

Towards Automated Integrity Constraints Modelling and Validation - A Survey and Approach

Authors:

Sandra Bergner

Abstract: Semantic web techniques based on ontologies are a possible means for modelling and validating complex, safety-critical products like airplanes or automobiles. For validation purposes, checks based on the Open World Assumption (OWA) as well as checks based on the Closed World Assumption (CWA) are both valuable. Based on a survey of existing semantic-based approaches, we present a novel approach that provides hybrid OWA/CWA checks and thereby reduces the maintenance burden for managing two different kinds of checks.

Posters
Paper Nr: 11
Title:

PersonLink: A Multilingual and Multicultural Ontology Representing Family Relationships

Authors:

Noura Herradi, Fayçal Hamdi, Elisabeth Métais and Assia Soukane

Abstract: Many existing open linked datasets include descriptions of real-world persons, with the relationships between them. For some traditional and/or emerging relationships, existing ontologies do not provide adequate links. This paper presents PersonLink, an ontology that rigorously and precisely defines family relationships and takes into account the differences that may exist between cultures, including new relationships emerging in our societies nowadays. Moreover, the transition from one culture/language to another cannot be solved with a simple translation of terms, especially when concepts do not intersect across languages; thus our solution relies on a multicultural meta-ontology of concepts and associated mechanisms. A validation has been performed on two linked datasets, DBpedia and Freebase.

Paper Nr: 13
Title:

Developing an Ontology to Capture Documents’ Semantics

Authors:

Elena Lloret, Yoan Gutiérrez and José M. Gómez

Abstract: Ontologies have been shown to be one of the best mechanisms for representing knowledge within a domain for later reasoning and for inferring new knowledge that may not be explicitly stated initially. Most research works focus on the representation of a particular domain, thus designing and building domain ontologies (e.g. tourism, medical, etc.). However, the development of task-oriented ontologies may be more appropriate, since they can be applied to different domains, avoiding the limitations of ad-hoc ontologies. Therefore, the goal of this paper is to present a task-oriented ontology with the purpose of capturing the semantics of a document, to be used in Natural Language Processing applications and, more specifically, for the automatic generation of personalized information. The preliminary evaluation and validation of our ontology through a wide range of competency questions clearly shows its potential for extracting information according to specific information needs.

Paper Nr: 14
Title:

Ontology Reuse - Experiences from Ontology Design Pattern Selection and Integration

Authors:

Birger Lantow, Kurt Sandkuhl and Vladimir Tarasov

Abstract: While the main purpose of Ontology Design Patterns (ODPs) is to support the process of ontology engineering, they can also be used to improve existing ontologies. This paper focuses on ODP selection and integration for ontology improvement. Based on the case of the ExpertFinder ontology, which allows for the competency description of researchers, the selection and integration of ODPs is investigated from an explorative view. The current state of ODP selection strategies is discussed and problems arising during the integration of ODPs are shown. On this basis, suggestions for improvements are made. Although this study deals with integration into an existing ontology, most of the assumptions and suggestions are also valid for the general case of ODP usage.

Paper Nr: 22
Title:

A Survey on Ontology Evaluation Methods

Authors:

Joe Raad and Christophe Cruz

Abstract: Ontologies have nowadays become widely used for knowledge representation and are considered a foundation of the Semantic Web. However, with their widespread usage, the question of their evaluation has become even more pressing. This paper addresses the issue of finding an efficient ontology evaluation method by presenting the existing ontology evaluation techniques and discussing their advantages and drawbacks. The presented ontology evaluation techniques can be grouped into four categories: gold standard-based, corpus-based, task-based and criteria-based approaches.

Paper Nr: 23
Title:

An Aspect-Oriented Extension to the OWL API - Specifying and Composing Views of OWL Ontologies using Ontology Aspects and Java Annotations

Authors:

Ralph Schäfermeier, Lidia Krus and Adrian Paschke

Abstract: Aspect-Oriented Programming (AOP) is a technology for the decomposition of software systems based on cross-cutting concerns. As shown in our previous work, cross-cutting concerns are also present in ontologies, and Aspect-Oriented Ontology Development (AOOD) can be used for flexible and dynamic ontology modularization based on functional and non-functional requirements. When ontologies are used in applications, application and ontology-related requirements often coincide. In this paper, we show that aspects in ontologies can be expressed as software aspects and directly referred to from software code using the well-known AspectJ language and Java annotations. We present an extension of the well-known OWL API with aspect-oriented means that allow transparent access to and manipulation of ontology modules that are based on requirements.

Paper Nr: 24
Title:

A Semantic Web Service Description of Learning Object

Authors:

Chaker Ben Mahmoud, Ikbel Azaiez, Fathia Bettahar and Faïez Gargouri

Abstract: How to find and identify learning objects in accordance with the learner profile is a main concern for the quality of the learning process. Using the paradigm of Semantic Web Services ensures the independence and reusability of learning objects in different contexts. In this paper, we propose an extension of OWL-S that encompasses the description of the learning intention and the context of use that characterize a learning object. We also describe the generic scenario of the publication and discovery process.

Paper Nr: 27
Title:

An Ontology-based Methodology for Reusing Data Cleaning Knowledge

Authors:

Ricardo Almeida, Paulo Maio, Paulo Oliveira and João Barroso

Abstract: The organizations’ demand to integrate several heterogeneous data sources and an ever-increasing volume of data is revealing the presence of quality problems in data. Currently, most of the data cleaning approaches (for detection and correction of data quality problems) are tailored for data sources with the same schema and sharing the same data model (e.g., relational model). On the other hand, these approaches are highly dependent on a domain expert to specify the data cleaning operations. This paper extends a previously proposed data cleaning methodology that reuses cleaning knowledge specified for other data sources. The methodology is further detailed/refined by specifying the requirements that a data cleaning operations vocabulary must satisfy. Ontologies in RDF/OWL are proposed as the data model for an abstract representation of the data schemas, no matter which data model is used (e.g., relational; graph). Existing approaches, methods and techniques that support the implementation of the proposed methodology, in general, and specifically of the data cleaning operations vocabulary are also presented and discussed in this paper.

Paper Nr: 31
Title:

Designing a Mediation Vocabulary for Authoring Learning Analytics

Authors:

Patricia Charlton, Sokratis Karkalas and Manolis Mavrikis

Abstract: This paper provides a knowledge representation process for the authoring of learning experiences that capture feedback designed in the context of learning environments. The paper reports on a year-long study with designers who are creating mathematical teaching and learning resources as part of an EU project (M C Squared). In this paper we examine the knowledge representation process we used in the design and creation of a mediation vocabulary. The model to be designed has to provide different layers of ‘knowledge integration’ and thus offers insights into the importance of knowledge mediation in the emergence of new learning environments and experiences. Hence, designs and feedback are authored through the use of ontologies, which form part of the annotation of the learning activities. The annotations form part of the context to be used in the learning analytics.

Paper Nr: 36
Title:

A Proposal for a Method of Graph Ontology by Automatically Extracting Relationships between Captions and X- and Y-axis Titles

Authors:

Sarunya Kanjanawattana and Masaomi Kimura

Abstract: A two-dimensional graph is a powerful method for representing a set of objects and frequently appears in many sources of literature. Numerous efforts have been made to discover image semantics based on the contents of literature. However, conventional methods have not been fully able to satisfy users, even though a wide variety of techniques is being developed, each useful for enhancing system capabilities in its own way. In this paper, we have developed a method to automatically extract relationships from graphs on the basis of their captions and image content, particularly from graph titles. Furthermore, we improved our idea by applying several technologies such as ontology and a dependency parser. The relationships discovered in a graph are presented in the form of a triple (subject, predicate, object). Our objectives are to find implicit and explicit information in the graph and reduce the semantic gap between an image and the literature context. Accuracy was manually estimated to identify the most reliable triples. Based on our results, we concluded that the accuracy of our method was acceptable. Therefore, our method is dependable and worthy of future development.

Paper Nr: 43
Title:

Exploiting Users’ Feedbacks - Towards a Task-based Evaluation of Application Ontologies Throughout Their Lifecycle

Authors:

Perrine Pittet and Jérôme Barthélémy

Abstract: This paper presents the basis of our approach for evaluation of application ontologies. Adapting an existing task-based evaluation, this approach explains how crowdsourcing, involving application users, can efficiently help in the improvement of an application ontology all along the ontology lifecycle. A real case experiment on an application ontology designed for the semantic annotation of geobusiness user data illustrates the proposal.

Paper Nr: 51
Title:

Hashtag of Instagram: From Folksonomy to Complex Network

Authors:

Simona Ibba, Matteo Orrù, Filippo Eros Pani and Simone Porru

Abstract: Instagram is a social network for smartphones created in 2010 and acquired by Facebook in 2012. It currently has more than 300 million registered users and allows for the immediate upload of images (square, inspired by Polaroid), to which users can associate hashtags and comments. Moreover, connections can be created between users who share the same interests. In our work, we intend to analyze the hashtags entered by users: the use of such hashtags, as happens in other social networks like Twitter, generates a folksonomy, that is, a user-driven classification of information. We intend to map that folksonomy as a complex network, to which we can apply all the analyses and evaluations typical of such a mathematical model. Our purpose is to use the resulting complex network as a marketing tool, in order to improve brand or product awareness.

Paper Nr: 56
Title:

Ontology Modelling of Malaysian Food Exchange List

Authors:

Norlia Yusof, Shahrul Azman Noah and Samirah Taufiq Wahid

Abstract: Dietary menu planning is a complex design and problem-solving task. It involves several constraints and extensive common sense. Case-based reasoning (CBR) tackles this complexity by storing expert common sense in the case base. Case adaptation (CA) is important for design tasks using CBR, since old cases are only partially similar to the current one. Automatic CA has mainly focused on the processing level rather than the data level. On the other hand, semantic technology (ST) introduces intelligence features by shifting the focus from the application code to the data. This can alleviate the burden on the logical processing of the adaptation engine. An ontology is a prerequisite for ST. Thus, this research proposes a computational model of design CA using an ontological approach. This paper discusses the experience we gained during the process of ontology modelling based on the OD101 method. The Malaysian food ontology was successfully developed to make the domain assumptions explicit in supporting the reasoning process of case adaptation for dietary menu planning recommendation.

Paper Nr: 59
Title:

Using Ontologies to Support Model-based Exploration of the Dependencies between Causes and Consequences of Hazards

Authors:

Abigail Parisaca Vargas and Robin Bloomfield

Abstract: Hazard identification and hazard analysis are difficult and essential parts of safety engineering. These activities are very demanding and mostly manual. There is an increasing need for improved analysis tools and techniques. In this paper we report research that focuses on supporting the early stages of hazard identification. A state-based hazard analysis process is presented to explore dependencies between causes and consequences of hazards. The process can be used to automate the analysis of preliminary hazard worksheets with the aims of making them more precise, disambiguating causal relationships, and supporting the proper definition of system boundaries. An application example is presented for a railway system.

Paper Nr: 66
Title:

Multi-modal Mu-calculus Semantics for Knowledge Construction

Authors:

Susumu Yamasaki and Mariko Sasakura

Abstract: This position paper aims at setting a new semantics for multi-modal mu-calculus to represent interactive states to which abstract actions may be applied. A least fixed point formula may be available to denote states allowing interaction. A simple algebraic representation for interactive states can be defined. For communication between human and machinery, a modality is reserved. In applicative task domains, knowledge construction is focused on with respect to interactive action applications through communications. Panel touch behaviour on iDevices in practice, URL references as functions, and grammatical rule applications for sequential effects are studied as knowledge construction technologies. These views, coherent with abstract state machines, are finally related to recent trends such as semirings in algebraic structure and coalgebras for streams as sequential knowledge structures. A refinement of interactive techniques is positioned within a formal approach to multi-modal logic, applicable to some practices.

Paper Nr: 69
Title:

UML Activity Diagrams for OWL Ontology Building

Authors:

Joanna Isabelle Olszewska

Abstract: Efficiently building an ontology is a crucial task for most applications involving knowledge representation. In particular, applications dealing with dynamic processes that directly shape the ontological domain need the conceptualization of complex activities within this domain. For this purpose, we propose to develop an OWL ontology based on UML activity diagrams. Indeed, the Unified Modeling Language (UML) is a well-known visual language widely adopted for software specification and documentation. UML consists of structure as well as behaviour notations, such as activity diagrams, which describe the flow of control and data through the various stages of a procedure. Our approach has been successfully validated in a case study of an ontology for a publication repository domain.

Paper Nr: 75
Title:

A Formal Specification for Conceptualizations in Computer Aided Visual Design Processes

Authors:

Ewa Grabska

Abstract: This paper is the next attempt to formalize an ontology-based knowledge framework helpful for the CAD process. Our previous research has shown the need for a more expressive specification in order to capture the intended models corresponding to a certain design conceptualization. This paper presents a more general approach to the ontological framework, which is illustrated with examples of designing floor layouts. This formal framework can be useful for many different applications, for instance biological systems, cultural heritage, and economic aspects.

Paper Nr: 76
Title:

Preference Dissemination by Sharing Viewpoints - Simulating Serendipity

Authors:

Guillaume Surroca, Philippe Lemoisson, Clément Jonquet and Stefano Cerri

Abstract: The Web currently stores two types of content: linked data from the semantic Web and user contributions from the social Web. Our aim is to represent simplified aspects of these contents within a unified topological model and to harvest the benefits of integrating both content types in order to prompt collective learning and knowledge discovery. In particular, we wish to capture the phenomenon of Serendipity (i.e., incidental learning) using a subjective knowledge representation formalism, in which several “viewpoints” are individually interpretable from a knowledge graph. We validate our Viewpoints approach by evidencing the collective learning capacity it enables. To that effect, we build a simulation that disseminates knowledge with linked data and user contributions, similar to the way the Web is formed. Using a behavioral model configured to represent various Web navigation strategies, we seek to optimize the distribution of preference systems. Our results outline the most appropriate strategies for incidental learning, bringing us closer to understanding and modeling the processes involved in Serendipity. An implementation of the Viewpoints formalism kernel is available. The underlying Viewpoints model allows us to abstract and generalize our current proof of concept to the indexing of any type of data set.

Paper Nr: 79
Title:

A Construction of Knowledge Base for Personality Estimation based on Submitted Text Data in Twitter or Blogs

Authors:

Noriyuki Okumura and Manabu Okumura

Abstract: The personality estimated from blog documents or tweets on Twitter may not agree with the sender’s real personality. It is important to recognize the difference between these estimated and real personalities. This paper constructs a knowledge base for extracting the sender’s virtual personality in consumer-generated media. We focus on the emotions included in the sender’s posts for automatic personality estimation. We examined the correlation between the ratio of each emotion term (anger, sadness, fear, disappointment, regret, guilt, shame, pleasure, and ease) in all sentences of each participant and the values of the NEO-FFI (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness to Experience), based on an experiment in which human subjects who adopted each sender’s character answered the NEO-FFI. As an evaluation result, we find that the sender’s virtual personality is potentially correlated with the emotions in the sender’s posts.

Paper Nr: 80
Title:

Using Semantic Technologies for More Intelligent Steel Manufacturing

Authors:

Nikolaos Matskanis, Stephane Mouton, Alexander Ebel and Francesca Marchiori

Abstract: In recent years, the steel industry has significantly raised its demands regarding product quality, optimization of production cost, environmental issues and lead-time. The demand for improved production performance has in turn increased the demand on information systems, in particular highlighting the need for improved factory- and company-wide collaboration and information exchange. The heterogeneity in structure, technology and architecture of the information systems deployed in manufacturing plants presents further challenges to the design and implementation of a data exchange system for process optimization.

Paper Nr: 81
Title:

Place Name Ambiguities in Urban Planning Domain Ontology

Authors:

Yi-Pei Liao and Feng-Tyan Lin

Abstract: Integrating spatial information is a crucial step in the construction of an Urban Planning Domain Ontology (UPDO), and taking spatial information from the web as the input of a self-learning method is a common approach to constructing ontologies. In this context, place names can be important indicators for understanding spatial information on the web. However, place names expressed in natural language carry diverse ambiguities, which pose great challenges to several research fields such as Geographic Information Systems (GIS) and Geographic Information Retrieval (GIR). GIR has contributed more to handling place name ambiguities than GIS. Nevertheless, from the perspective of the urban planning domain, such work is still lacking in application. This position paper aims to raise the issue of place name ambiguities in UPDO and introduces two kinds of ambiguity that frequently appear in the urban planning domain. The paper also proposes a hierarchical structure of spatial ontology that allows constructors to deal with ambiguities. We believe that the ambiguity issue is critical for urban planning, and that the argument is worth discussing in all relevant domains.

Paper Nr: 83
Title:

A Multi-level Ontological Approach for Change Monitoring in Remotely Sensed Imagery

Authors:

Fethi Ghazouani, Wassim Messaoudi and Imed Riadh Farah

Abstract: Land-use/cover change, climate change, and sea level evolution are examples of applications associated with change detection. Satellite image time series are used to monitor change, where entities are often dynamic over time. Moreover, the knowledge associated with these spatio-temporal objects can evolve when changes occur. Thus, modeling this kind of knowledge requires dealing with four aspects: spectral, spatial, temporal and semantic. Such an approach can be modeled by ontologies at many levels. Thereby, a shared ontology can be a single ontology or a combination of several ontologies based on linking mechanisms. Such a linking process should maintain consistency between the represented knowledge. In this paper, we propose a multi-level ontological approach for monitoring dynamics in remote sensing images. The proposed methodology aims to link our domain ontology to an upper-level ontology, thus enabling the representation of existing change processes.