KEOD 2012 Abstracts


Full Papers
Paper Nr: 11
Title:

Enhancing the Results of Recommender Systems using Implicit Ontology Relations

Authors:

Lamiaa Abdelazziz and Khaled Nagi

Abstract: Sharing unstructured knowledge between peers is a must in virtual organizations. The huge number of documents available for sharing makes modern recommender systems indispensable. Recommender systems use several information retrieval techniques to enhance the quality of their results. Unfortunately, every peer has his/her own point of view when categorizing his/her own data. The problem arises when a user tries to search for some information in his/her peers’ exposed data: the seeker’s categories must be matched with the responders’ categories. In this work, we propose a way to enhance the recommendation process based on simple implicit ontology relations. This helps in recognizing better matched categories in the exposed data. We show that this approach improves the quality of the results with an acceptable increase in computation cost.

Paper Nr: 12
Title:

Java2OWL: A System for Synchronising Java and OWL

Authors:

Hans Jürgen Ohlbach

Abstract: Java2OWL is a Java software library for synchronising Java class hierarchies with OWL concept hierarchies. With a few extra annotations in Java class files, the Java2OWL library can automatically map Java class hierarchies to OWL ontologies. The instances of these Java classes are automatically mapped to OWL individuals and vice versa. OWL reasoners can be used as query processors to retrieve instances of OWL concepts, and these OWL individuals are mapped to the corresponding instances of the Java classes. Changes to a Java object’s attributes are automatically mapped to changes of the corresponding attributes of the OWL individuals, thus keeping Java and OWL synchronised. With minimal programming overhead, the library allows one to combine the power of programming (in Java) with the expressivity and reasoning power of OWL. This paper introduces the main ideas and techniques. The detailed documentation and the open source library itself can be downloaded from http://www.pms.ifi.lmu.de/Java2OWL.

Paper Nr: 34
Title:

A Product Development System using Knowledge-intensive Support Approach

Authors:

Janus S. Liang, Kuo-Ming Chao and Paul Ivey

Abstract: The authors are carrying out research studies to explore the applicability of knowledge-based system technologies to today’s competitive product design and development, with an emphasis on designing high-quality products at the design stage. A framework for a knowledge-intensive support approach for new product concepts is proposed in this paper. Based on the proposed approach and methodologies, a prototype system named KB@Pds was built to assist inexperienced users in carrying out the design and knowledge management process. KB@Pds integrates the intelligent design process and knowledge management. This paper presents the underlying concepts of the development and demonstrates a practical application of the prototype system through a case study.

Paper Nr: 38
Title:

Persona Modeling Process - From Microdata-based Templates to Specific Web Ontologies

Authors:

Stefan Negru and Sabin Buraga

Abstract: The use of the Personas method for communicating user requirements in Human-Computer Interaction is well established and widespread. However, little research has been conducted on linking persona data with other pieces of data concerning the product development and design process, or even on creating a personas model. We consider that using semantic web technologies such as microdata and RDFa to annotate persona data is an adequate step towards solving these issues. This paper aims to provide a conceptual model (an HTML5 microdata schema and an OWL specification) that includes the concepts and properties used to model personas. In order to exemplify our model and extract data, we made use of a semantically annotated persona template.

Paper Nr: 42
Title:

Creating an Educational Roadmap for Engineering Students via an Optimal and Iterative Yearly Regression Tree using Data Mining

Authors:

Marie Khair, Chady El Moucary and Walid Zakhem

Abstract: Targeting high academic standards is required in engineering studies. Advisors usually play an important role in helping students keep good records and transcripts along their educational path by helping them choose their courses and keeping track of their grades. However, performance in some courses in the curriculum has determining repercussions and might inadvertently jeopardize the student’s overall Grade Point Average (GPA) in an irreversible manner. The purpose of this paper is to draw an educational roadmap that helps advisors and students be aware of the turning points that decisively affect the overall cumulative GPA and act upon a current outcome. This roadmap is based on Classification and Regression Trees, where nodes and branches denote the aforementioned courses and students’ performance, respectively, with the ultimate outcome being the student’s overall GPA upon graduation. The tree is constructed from a relatively large number of records with 10-fold cross-validation and testing. Moreover, the tree is produced on a yearly basis with a twofold objective: the first is to secure a high level of precision by applying it over a short period of time, and the second is to allow injecting the GPA computed each year along with the remaining courses, so as to reflect the actual situation with maximum verisimilitude. This iterative and recursive tree achieves very close tracking of students’ performance and provides a powerful tool to rectify each student’s course track and grades individually while aiming at a predefined final GPA. Furthermore, the choice of the optimal tree was carefully examined in light of the relatively large number of attributes. In this context, diverse models were created, and performance and/or precision were computed for different values of the pruning levels and splitting criteria. The choice of the best tree to be adopted for advising is thoroughly explained. It is also shown, in this context, that the structure of the tree remains highly versatile in the sense that it can be revisited at any point for further assessment, adjustment, and expansion. Finally, yet importantly, simulations were carried out using Matlab CART and demonstrated high efficiency and reasonably precise results.
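As a rough illustration of the CART mechanism the abstract describes, the sketch below picks the single course-grade threshold that best predicts final GPA by variance reduction, as one tree node would. The records, course names, and grades are invented for illustration and are not from the paper.

```python
# Toy sketch of one CART node: choose the (course, threshold) split that
# minimises the weighted variance of the final GPA in the two children.
# All records and course names below are hypothetical.

def variance(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(records, courses, target="gpa"):
    """Return the (course, threshold) pair with lowest weighted child variance."""
    best = (None, None, float("inf"))
    for course in courses:
        for t in sorted({r[course] for r in records})[:-1]:
            left = [r[target] for r in records if r[course] <= t]
            right = [r[target] for r in records if r[course] > t]
            score = (len(left) * variance(left)
                     + len(right) * variance(right)) / len(records)
            if score < best[2]:
                best = (course, t, score)
    return best[0], best[1]

# Hypothetical first-year records: course grades (0-100) and final GPA.
records = [
    {"calculus": 55, "physics": 60, "gpa": 2.1},
    {"calculus": 62, "physics": 70, "gpa": 2.4},
    {"calculus": 85, "physics": 75, "gpa": 3.2},
    {"calculus": 90, "physics": 80, "gpa": 3.6},
]
print(best_split(records, ["calculus", "physics"]))
```

A full tree recurses this step on each child until a pruning criterion stops it; the paper's yearly re-injection of the computed GPA would simply add an attribute per year.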

Paper Nr: 43
Title:

Ontology Enrichment based on Generic Basis of Association Rules for Conceptual Document Indexing

Authors:

Lamia Ben Ghezaiel, Chiraz Latiri and Mohamed Ben Ahmed

Abstract: In this paper, we propose the use of a minimal generic basis of association rules (ARs) between terms in order to automatically enrich an initial domain ontology. For this purpose, three distance measures are defined to link the candidate terms identified by ARs to the initial concepts in the ontology. The final result is a proxemic conceptual network which contains additional implicit knowledge. To evaluate our ontology enrichment approach, we then propose a novel document indexing approach based on this proxemic network. The experiments carried out on the OHSUMED document collection of the TREC 9 filtering track and the MeSH ontology showed that our conceptual indexing approach can considerably enhance information retrieval effectiveness.
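To make the term-level association rules concrete, here is a minimal sketch of mining rules t1 → t2 under support and confidence thresholds. The toy "documents" and thresholds are invented, and the paper's generic basis imposes further minimality conditions not shown here.

```python
# Minimal association-rule mining between terms: a rule t1 -> t2 is kept
# when support(t1, t2) and confidence(t1 -> t2) exceed the thresholds.
# The tiny document collection below is invented for illustration.
from itertools import permutations

docs = [
    {"heart", "failure", "treatment"},
    {"heart", "failure", "diagnosis"},
    {"heart", "treatment"},
    {"liver", "diagnosis"},
]

def association_rules(docs, min_support=0.5, min_confidence=0.6):
    n = len(docs)
    terms = set().union(*docs)

    def count(*ts):  # number of documents containing all given terms
        return sum(1 for d in docs if set(ts) <= d)

    rules = []
    for t1, t2 in permutations(terms, 2):
        support = count(t1, t2) / n
        if support >= min_support:
            confidence = count(t1, t2) / count(t1)
            if confidence >= min_confidence:
                rules.append((t1, t2, support, confidence))
    return rules

for t1, t2, s, c in sorted(association_rules(docs)):
    print(f"{t1} -> {t2}  support={s:.2f} confidence={c:.2f}")
```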

Paper Nr: 51
Title:

Natural Language Query Processing in Multimedia Ontologies

Authors:

Filiz Alaca Aygul, Nihan Kesim Cicekli and Ilyas Cicekli

Abstract: In this paper a natural language query interface is developed for the semantic and spatio-temporal querying of MPEG-7 based domain ontologies. The underlying ontology is created by attaching domain ontologies to the core Rhizomik MPEG-7 ontology. The user can pose concept, complex concept, spatial, temporal, object trajectory and directional trajectory queries. Furthermore, the system handles negative meaning in the user query. When the user enters a natural language query, it is parsed with the link parser. According to the query type, the objects, attributes, spatial relation, temporal relation, trajectory relation, time filter and time information are extracted from the parser output by using predefined information extraction rules. After the information extraction, SPARQL queries are generated and executed against the ontology by using an RDF API. The results are used to calculate the spatial, temporal, and trajectory relations between objects. The results satisfying the required relations are displayed in tabular format, and the user can navigate through the multimedia content.

Paper Nr: 59
Title:

Bringing Order to Legal Documents - An Issue-based Recommendation System Via Cluster Association

Authors:

Qiang Lu and Jack G. Conrad

Abstract: The task of recommending content to professionals (such as attorneys or brokers) differs greatly from the task of recommending news to casual readers. A casual reader may be satisfied with a couple of good recommendations, whereas an attorney will demand precise and comprehensive recommendations from various content sources when conducting legal research. Legal documents are intrinsically complex and multi-topical, contain carefully crafted, professional, domain-specific language, and possess a broad and unevenly distributed coverage of issues. Consequently, a high-quality content recommendation system for legal documents requires the ability to detect significant topics in a document and recommend high-quality content accordingly. Moreover, a litigation attorney preparing for a case needs to be thoroughly familiar not only with the principal arguments associated with various supporting opinions, but also with the secondary and tertiary arguments. This paper introduces an issue-based content recommendation system with a built-in topic detection/segmentation algorithm for the legal domain. The system leverages existing legal document metadata, such as topical classifications, document citations, and click-stream data from user behavior databases, to produce an accurate topic detection algorithm. It then links each individual topic to a comprehensive pre-defined topic (cluster) repository via an association process. A cluster labeling algorithm is designed and applied to provide a precise, meaningful label for each of the clusters in the repository, where each cluster is also populated with member documents from across different content types. This system has been applied successfully to very large collections of legal documents, O(100M), which include judicial opinions, statutes, regulations, court briefs, and analytical documents. Extensive evaluations were conducted to determine the efficiency and effectiveness of the algorithms in topic detection, cluster association, and cluster labeling. Subsequent evaluations conducted by legal domain experts have demonstrated that the quality of the resulting recommendations across different content types is close to that of recommendations created by human experts.

Paper Nr: 62
Title:

Ontology Alignment for Classification of Low Level Sensor Data

Authors:

Marjan Alirezaie and Amy Loutfi

Abstract: In this work we show how alignment techniques can be used to align an ontology to a decision tree representing the features used in the classification of sensor signals. The sensor data represents time-series data from an electronic nose measuring bacteria in blood samples. The objective is to provide, from the classification of these signals, an estimate of the type of bacteria present in the sample. As these classifications are inherently uncertain, knowledge about standard laboratory tests is used together with the classification result in order to determine a subset of tests to conduct that should result in a fast identification of the bacteria. The information about the laboratory tests is contained in an ontology. The result of the alignment is a new classifier in which recommendations are given to a user (expert) based on the automatic interpretation of the sensor data.

Paper Nr: 75
Title:

FuzzyAlign - A Fuzzy Method for Ontology Alignment

Authors:

Susel Fernández, Juan R. Velasco, Ivan Marsa-Maestre and Miguel A. Lopez-Carmona

Abstract: The need to share information and services makes data integration one of the most pressing issues in the Semantic Web. Ontologies are crucial for formally specifying the vocabulary and the concepts within a domain, so, for better interoperability, it is important to translate data from one ontological framework to another. Ontology matching is the process of finding correspondences between the concepts of different ontologies. This problem has been addressed in many studies, but none has fully automated the matching process while considering the complete, complex structure of the ontologies. This paper aims to provide mechanisms to support experts in the ontology matching process by using fuzzy logic techniques to determine the similarity between entities from different ontologies. We propose FuzzyAlign, a multi-layer fuzzy rule-based system, which obtains the alignments by taking into account both the lexical and semantic elements of names and the relational and internal structures of the ontologies. The ideas presented in this work were validated using the OAEI evaluation tests for ontology alignment systems, in which we obtained good results.

Paper Nr: 82
Title:

Ontology Driven Approach for Intelligent Energy Management in Discrete Manufacturing

Authors:

Hendro Wicaksono, Sven Rogalski and Jivka Ovtcharova

Abstract: In recent years, ontologies have been used for knowledge representation in different domains, such as energy management and manufacturing. Researchers have developed approaches applying ontologies to intelligent energy management in households. In the manufacturing domain, ontologies have been used for knowledge management in order to provide a common formal understanding between stakeholders who have different background knowledge. Energy management in a manufacturing company involves different organizational entities and technical processes. This paper proposes an approach to applying an ontology to intelligent energy management in discrete manufacturing companies. The ontology provides a formal knowledge representation that is accessible to different human stakeholders as well as machines in the company. This paper also demonstrates the methods used to construct and process the ontology.

Paper Nr: 87
Title:

Building Application Ontologies through Knowledge System Goals

Authors:

Luis Eduardo Santos and Rosario Girardi

Abstract: This article presents a case study in building an application ontology for the development of a knowledge-based system in the field of Inheritance Law, developed with GAODT, a goal-oriented technique. GAODT proposes translating the goals needed to build a knowledge-based system, expressed in natural language, into rules in first-order logic, from which the elements that constitute the application ontology (classes, relations, properties and axioms) are extracted. A comparative evaluation between GAODT and other state-of-the-art techniques is also presented.

Paper Nr: 98
Title:

Recall-driven Product Tracing and Supply Chain Tracking using Answer Set Programming

Authors:

Monica L. Nogueira and Noel P. Greis

Abstract: Incomplete information and the inability to trace the movement of contaminated products across the food chain have hindered our ability to locate and remove contaminated products once a food recall has been announced. The FDA Food Safety Modernization Act (FSMA), signed into law in 2011, however, supports traceability both by expanding the registration requirements for companies involved in food production and, in the event of a food recall, by requiring companies to provide information about their immediate suppliers and customers—what is referred to as “one step forward” and “one step backward” traceability. In this paper we implement a logic-based approach called answer set programming that uses inference rules to determine the set of all companies that may be linked to a contaminated product. Unlike other approaches, we do not depend on the availability of common standards or unique identifiers. Rather, the proposed approach utilizes information about a company’s primary suppliers and customers along with their products—consistent with the “one step forward” and “one step backward” traceability required under FSMA, as noted above. We demonstrate this approach using the example of a food recall involving pork products.
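The paper encodes the "one step forward / one step backward" facts as answer set programming rules; the same downstream closure can be sketched in plain Python. The company names and products below are invented for illustration.

```python
# Supply-chain facts as (supplier, customer, product) triples, in the spirit
# of the ASP facts the paper uses. All names here are hypothetical.
supplies = [
    ("FarmA", "ProcessorB", "pork"),
    ("ProcessorB", "DistributorC", "sausage"),
    ("DistributorC", "RetailerD", "sausage"),
    ("FarmX", "ProcessorB", "beef"),
]

def implicated(recalled_company, supplies):
    """All companies reachable downstream ("forward") of a recalled source,
    computed as the transitive closure of the one-step-forward relation."""
    linked, frontier = set(), {recalled_company}
    while frontier:
        nxt = {c for s, c, _ in supplies if s in frontier} - linked
        linked |= nxt
        frontier = nxt
    return linked

print(sorted(implicated("FarmA", supplies)))
```

An ASP solver would express the same closure declaratively, e.g. `linked(Y) :- recalled(X), supplies(X, Y, _).` plus a recursive rule, and would additionally handle incomplete or disjunctive facts, which this sketch does not.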

Short Papers
Paper Nr: 6
Title:

iTree: Skill-building User-centered Clarification Consultation Interfaces

Authors:

Martina Freiberg and Frank Puppe

Abstract: Developing web-based, knowledge-based systems (wKBS) still challenges developers, mostly due to the inherent complexity of the overall task. The strong focus on knowledge-base development and evaluation, and the consequent neglect of UI/interaction design and usability evaluation, raises the need for a tailored wKBS development tool that eases the overall task while specifically supporting the latter activities. As an example of such a tool, we introduce the wKBS development tool ProKEt. With its help, we developed the novel UI concept interactive clarification tree (iTree) with skill-building ability, which is specifically suited to clarification consultation systems. We also report on a recent case study in which iTree was implemented for knowledge-based clarification consultation in the legal domain.

Paper Nr: 9
Title:

Modelling How Students Organize Knowledge

Authors:

Ismo Koponen

Abstract: We discuss here how students organize their knowledge (in physics) by connecting closely related concepts. Attention is paid to the relational structure of the ordering of concepts, so that the introduction of new concepts is justified on the basis of concepts which have already been learned. Consequently, there is a direction of progress in introducing new concepts - a "flux of information" in which what was learned before is the basis for learning new conceptual knowledge. Such an ordered and directed process of introducing concepts can be conveniently described and analysed in the framework of directed ordered graphs. We propose here a model of knowledge organization for such concept maps. The model is based on the assumption that students use simple procedures, connecting new concepts mostly to concepts introduced a few steps before. On the basis of the model results, we suggest that the most important properties of concept maps can be understood on the basis of such simple rules for organising knowledge.

Paper Nr: 14
Title:

Hontology: A Multilingual Ontology for the Accommodation Sector in the Tourism Industry

Authors:

Marcirio Silveira Chaves, Larissa Freitas and Renata Vieira

Abstract: Ontologies have been used to support both web agents’ reasoning and human decision making. However, ontology development is a young area, and for some knowledge domains ontologies are still rare. Although ontologies have been developed in the context of the Semantic Web, it is Web 2.0 content that is actually pervasive on the web. One of the properties of this content is that it is multilingual, which requires multilingual resources to deal with it. Online reviews are examples of multilingual texts provided by consumers of products and services. This paper presents a multilingual ontology for the accommodation sector. As a result, we deliver Hontology, a freely available domain-specific ontology. Hontology reuses concepts from other vocabularies such as Dbpedia.org and Schema.org. It is useful for a wide range of applications within the accommodation sector, including ontology-based information extraction, text annotation and information visualisation.

Paper Nr: 16
Title:

An Ontology-based Data Acquisition Infrastructure - Using Ontologies to Create Domain-independent Software Systems

Authors:

Dominic Girardi, Klaus Arthofer and Michael Giretzlehner

Abstract: We created an ontology-based data acquisition infrastructure which is able to store data of almost arbitrary structure and can be set up for a given application domain within hours. An ontology editor helps the domain expert define and maintain the domain-specific ontology. Based on the user-defined ontology, a web-based data acquisition system and an ETL data import interface are automatically created at runtime. Furthermore, rules for semantic data plausibility can be established in the ontology to ensure semantic data quality for the subsequent processing of the collected data. After a comprehensive requirements analysis, we decided to use a special meta-model instead of standard OWL ontologies. In this paper, we describe our meta-model and the reasons for not using OWL in our case in detail, and we present the infrastructure and the project in which it is currently used.

Paper Nr: 18
Title:

About Creating Intelligence Systems in Ternary Logic

Authors:

I. A. Bessmertny, S. V. Eliseev and A. V. Nevidimov

Abstract: Rule-based intelligent systems traditionally use binary or fuzzy logic. Binary logic with implications causes contradictions as the knowledge base grows, and the bottlenecks of fuzzy logic are the fuzzification/defuzzification processes and the computational complexity of inference. A common problem of all information systems is vulnerability to missing data, which can yield wrong results. The paper shows the opportunity and expediency of creating intelligent systems with a rule-based model of knowledge on a ternary logic basis, operating with the states “true”, “false” and “possible”. Intelligent systems based on ternary logic allow a contradiction to be recognized by the presence in the knowledge base of both a fact and its negation. Inference from ternary rules not only derives facts for a query but also reveals which facts are missing for the goal to be true. This feature could help solve the problem of basic-level facts that seem obvious to a person but are not present in the knowledge base. The paper contains examples of Prolog rules for converting binary knowledge bases to ternary ones and some rules for manipulating ternary facts.

Paper Nr: 20
Title:

Topic and Subject Detection in News Streams for Multi-document Summarization

Authors:

Fumiyo Fukumoto, Yoshimi Suzuki and Atsuhiro Takasu

Abstract: This paper focuses on continuous news streams and presents a method for detecting salient, key sentences from stories that discuss the same topic. Our hypothesis about key sentences in multiple stories is that they include words related to the target topic and the subject of a story. In addition to the TF-IDF term weighting method, we used the result of assigning domain-specific senses to each word in the story to identify a subject. A topic, on the other hand, is identified by using a model of “topic dynamics”. We defined a burst as a time interval of maximal length over which the rate of change shows positive acceleration. We adapted a stock market trend analysis technique, namely Moving Average Convergence Divergence (MACD), which shows the relationship between two moving averages of prices and is a popular indicator of trends in dynamic marketplaces. We utilized it to measure topic dynamics. The method was tested on the TDT corpora, and the results showed the effectiveness of the method.
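A minimal sketch of MACD-based burst detection on a toy word-frequency series, assuming standard exponential moving averages; the spans and frequencies below are invented, as the abstract does not give the paper's actual parameters.

```python
# MACD applied to a toy daily term-frequency series. A burst candidate is an
# interval where the MACD histogram (MACD minus its signal line) is positive.

def ema(series, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def macd_histogram(series, fast=3, slow=6, signal=2):
    # MACD line = fast EMA - slow EMA; histogram = MACD - signal-line EMA.
    macd = [f - s for f, s in zip(ema(series, fast), ema(series, slow))]
    return [m - sg for m, sg in zip(macd, ema(macd, signal))]

# Toy daily frequencies of a term: flat, then a burst, then decay.
freqs = [1, 1, 1, 1, 5, 9, 12, 12, 6, 2]
hist = macd_histogram(freqs)
burst_days = [i for i, h in enumerate(hist) if h > 0]
print(burst_days)
```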

Paper Nr: 26
Title:

Confidence Management for Learning Ontologies from Dynamic Web Sources

Authors:

Gerhard Wohlgenannt, Albert Weichselbraun, Arno Scharl and Marta Sabou

Abstract: Dynamic environments require effective update mechanisms for ontologies to incorporate new knowledge. In this position paper we present a dynamic framework for ontology learning which integrates automated learning methods with rapid user feedback mechanisms to build and extend lightweight domain ontologies at regular intervals. Automated methods collect evidence from a variety of heterogeneous sources and generate an ontology with spreading activation techniques, while crowdsourcing in the form of Games with a Purpose validates the new ontology elements. Special data structures support dynamic confidence management with regard to three major aspects of the ontology: (i) the incoming facts collected from evidence sources, (ii) the relations that constitute the extended ontology, and (iii) the observed quality of the evidence sources. Based on these data structures, we propose trend detection experiments to measure significant changes not only in the domain but also in the conceptualization suggested by user feedback.

Paper Nr: 28
Title:

An Approach to Ontology-based Intention Recognition using State Representations

Authors:

Craig Schlenoff, Sebti Foufou and Stephen Balakirsky

Abstract: In this paper, we present initial thoughts on an approach to ontology/logic-based intention recognition based on the recognition, representation, and ordering of states. This differs from traditional approaches to intention recognition, which use activity recognition and the ordering of activities. State recognition and representation offer numerous advantages, including the ability to infer the intention of multiple people working together and the fact that states are easier for a sensor system to recognize than actions. The focus of this work is on the domain of manufacturing assembly, with an emphasis on human/robot collaboration during the assembly process.

Paper Nr: 35
Title:

Knowledge Engineering Requirements for Generic Diagnostic Systems

Authors:

Andreas Mueller, Ingmar Hofmann, Heiner Oberkampf and Sonja Zillner

Abstract: Diagnostics is the process of determining the nature of malfunctions or faults of systems in various domains. With regard to the complexity of systems and their composition of different subsystems or subcomponents, for which different diagnostic approaches are optimal, no means exist for seamless and agile cooperation and information exchange between currently isolated diagnostic approaches. However, we consider this essential for an integrated diagnostic mechanism covering complex systems in their entirety. Hence, in this paper, we show the basic requirements for a generic diagnostic knowledge representation language (DKRL) by investigating typical diagnostic examples from different domains, namely industry and medicine. DKRL is intended to facilitate the generic representation, handling, and interchange of diagnostic knowledge required for performing diagnostics without regard to specific diagnostic approaches.

Paper Nr: 36
Title:

Conflict Resolution in Overlapping Information Fields for Context-based Activity Design

Authors:

Zuraini Zainol and Keiichi Nakata

Abstract: Norms are a set of rules that govern the behaviour of human agents and how they behave in response to given conditions. This paper investigates the overlapping of information fields (sets of shared norms) in the Context State Transition Model, and how these overlapping fields may affect the choices and actions of human agents. This paper also discusses the implementation of new conflict resolution strategies based on the situation specification. The reasoning about conflicting norms in multiple information fields is discussed in detail.

Paper Nr: 50
Title:

Towards Semantic Summaries over Ontologies

Authors:

Sebastian Wandelt and Ralf Möller

Abstract: Industry is increasingly dependent on the gathering and processing of data to support crucial product development activities. However, support systems for engineers or computer scientists may need to consider terabytes of data, making it very hard to automatically extract useful information. Querying data repositories in order to extract just the right information for the situation at hand remains a challenging problem. We propose a notion of semantic summaries on top of description logic knowledge bases that supports querying and summarizing information in large (ontological) data repositories. The idea of a semantic summary is to characterize the result set from a broader perspective, instead of describing each domain object. We show that our summarization approach scales for benchmark ontologies up to several million assertional axioms.

Paper Nr: 52
Title:

Modeling Genealogical Domain - An Open Problem

Authors:

Joan Campanyà Artés, Jordi Conesa Caralt and Enric Mayol

Abstract: The automated processing, storing and knowledge inference of genealogical data present several difficulties. Roughly eighteen years ago, the FamilySearch organization published GEDCOM, a new standard file format to allow genealogy software and tools to exchange genealogical data. Five years later, the GENTECH Data Modeling Project proposed a new genealogical logical data model to support research in genealogy and to allow data interchange between genealogy software. Despite being initial reference models, they still have some limitations in adapting to different cultural and social environments. Additionally, sharing genealogical data between systems is difficult since, even though they are syntactic reference models, they may have semantic mismatches. Today, there is no common and unified proposal for a standard, recognized genealogical model. In this paper, we therefore propose to use the ontology paradigm to extend the expressiveness of concepts and relationships in such standards.

Paper Nr: 54
Title:

From 9-IM Topological Operators to Qualitative Spatial Relations using 3D Selective Nef Complexes and Logic Rules for Bodies

Authors:

Helmi Ben Hmida, Christophe Cruz, Frank Boochs and Christophe Nicolle

Abstract: This paper presents a method to compute topological relations automatically using SWRL rules. The calculation of these rules is based on a Selective Nef Complexes (Nef polyhedra) structure generated from standard polyhedra. Selective Nef Complexes are a data model providing a set of binary Boolean operators, such as union, difference, intersection and symmetric difference, and unary operators, such as interior, closure and boundary. In this work, these operators are used to compute topological relations between objects defined by the constraints of the 9 Intersection Model (9-IM) of Egenhofer. With the help of these constraints, we define a procedure to compute the topological relations on Nef polyhedra. These topological relationships are Disjoint, Meets, Contains, Inside, Covers, CoveredBy, Equals and Overlaps, and they are defined in a top-level ontology with specific semantic properties on relations, such as transitive, symmetric, asymmetric, functional, reflexive, and irreflexive. The results of the computation of topological relationships are stored in an OWL-DL ontology, which subsequently allows inference on these new relationships between objects. In addition, logic rules based on the Semantic Web Rule Language allow the definition of logic programs that specify which topological relationships have to be computed on which kinds of objects with specific attributes. For instance, a “Building” that overlaps a “Railway” is a “RailStation”.
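The 9-IM decision logic can be illustrated in one dimension, where a body [a, b] has interior (a, b) and boundary {a, b}; the same interior/boundary intersection tests are what the Nef-polyhedra operators supply in 3-D. The interval encoding below is a simplification for illustration only and is not the paper's procedure.

```python
# Decide the named topological relation between two closed 1-D intervals,
# mirroring the 9-IM predicate pattern (interior/boundary intersections).

def relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1 or b2 < a1:
        return "Disjoint"
    if a == b:
        return "Equals"
    if a2 == b1 or b2 == a1:
        return "Meets"          # boundaries touch, interiors do not intersect
    if b1 < a1 and a2 < b2:
        return "Inside"         # a lies entirely in b's interior
    if a1 < b1 and b2 < a2:
        return "Contains"
    if (a1 == b1 and a2 < b2) or (a2 == b2 and b1 < a1):
        return "CoveredBy"      # a inside b, but boundaries share a point
    if (a1 == b1 and b2 < a2) or (a2 == b2 and a1 < b1):
        return "Covers"
    return "Overlaps"           # interiors intersect, neither contains the other

print(relation((0, 2), (2, 5)))   # boundaries meet at 2
print(relation((1, 3), (0, 5)))   # strictly inside
print(relation((0, 4), (2, 6)))   # interiors overlap both ways
```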

Paper Nr: 63
Title:

Clustering of Medical Terms based on Morpho-syntactic Features

Authors:

Agnieszka Mykowiecka and Malgorzata Marciniak

Abstract: The paper presents the first results of clustering terms extracted from hospital discharge documents written in Polish. The aim of the task is to prepare data for an ontology reflecting the domain of the documents. To begin, the characteristics of the language of the texts, which differs significantly from general Polish, are given. Then, we describe the method of term extraction. In the process of finding related terms, we use lexical and syntactic information. We define term similarity based on: term contexts; coordinated sequences of terms; and words that are parts of terms, e.g. their heads and modifiers. We then performed several experiments with hierarchical clustering of the 300 most frequent terms. Finally, we describe the results and present an evaluation that compares them with manually obtained groups.
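A toy sketch of the clustering step with one of the similarity ingredients, context overlap, using single-link agglomerative clustering. The Polish terms and context sets are invented, and the paper combines several similarity sources rather than plain Jaccard overlap.

```python
# Single-link agglomerative clustering of terms by context similarity.
# Terms and context profiles below are hypothetical.

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(terms, contexts, threshold=0.4):
    """Merge clusters while the closest pair exceeds the similarity threshold."""
    clusters = [{t} for t in terms]

    def sim(c1, c2):  # single linkage: best pair across the two clusters
        return max(jaccard(contexts[x], contexts[y]) for x in c1 for y in c2)

    while len(clusters) > 1:
        pairs = [(sim(c1, c2), i, j) for i, c1 in enumerate(clusters)
                 for j, c2 in enumerate(clusters) if i < j]
        best, i, j = max(pairs)
        if best < threshold:
            break
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

contexts = {
    "niewydolność serca": {"leczenie", "rozpoznanie", "objawy"},
    "zawał serca":        {"leczenie", "rozpoznanie", "ekg"},
    "cukrzyca":           {"insulina", "dieta"},
}
print(cluster(list(contexts), contexts))
```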

Paper Nr: 67
Title:

Towards a Reference Plant Trait Ontology for Modeling Knowledge of Plant Traits and Phenotypes

Authors:

Elizabeth Arnaud, Laurel Cooper, Rosemary Shrestha, Naama Menda, Rex T. Nelson, Luca Matteis, Milko Skofic, Ruth Bastow, Pankaj Jaiswal, Lukas Mueller and Graham McLaren

Abstract: Ontology engineering and knowledge modeling for the plant sciences is expected to contribute to the understanding of the basis of plant traits that determine phenotypic expression in a given environment. Several crop- or clade-specific plant trait ontologies have been developed to describe plant traits important for agriculture, in order to address major scientific challenges such as food security. We present three successful species- and/or clade-specific ontologies which address the need of crop scientists to quickly access a wide range of trait-related data, but whose scope limits their interoperability with one another. In this paper, we present our vision of a species-neutral, overarching Reference Plant Trait Ontology that would serve as the basis for linking these disparate knowledge domains and that will support data integration and data mining across species.

Paper Nr: 72
Title:

Prototypes Vs Exemplars in Concept Representation

Authors:

Marcello Frixione and Antonio Lieto

Abstract: Concept representation is still an open problem in the field of ontology engineering and, more generally, of knowledge representation. In particular, the problem of representing "non-classical" concepts, i.e. concepts that cannot be defined in terms of necessary and sufficient conditions, remains unsolved. In this paper we review empirical evidence from cognitive psychology which suggests that concept representation is not a unitary phenomenon. In particular, it seems that human beings employ both prototype- and exemplar-based representations in order to represent non-classical concepts. We suggest that a similar, hybrid prototype-exemplar approach could also be useful in the field of formal ontology technology.

Paper Nr: 73
Title:

Information Retrieval in Collaborative Engineering Projects - A Vector Space Model Approach

Authors:

Paulo Figueiras, Ruben Costa, Luis Paiva, Celson Lima and Ricardo Jardim-Gonçalves

Abstract: This work introduces a conceptual framework, and its current implementation, to support the classification and discovery of knowledge sources, where every knowledge source is represented through a vector (named Semantic Vector, SV). The novelty of this work lies in the enrichment of such knowledge representations, using the classical vector space model extended with ontological support, which means using ontological concepts and their relations to enrich each SV. Our approach takes into account three different but complementary processes using the following inputs: (1) the statistical relevance of keywords, (2) the ontological concepts, and (3) the ontological relations. SVs are compared against each other in order to obtain their similarity index and to better support end users with knowledge source search and retrieval capabilities. This paper presents the technical architecture (and its implementation) supporting the conceptual framework, emphasizing the SV creation process. Moreover, it provides examples detailing the indexation process of knowledge sources; the results achieved so far and the future goals pursued are also presented.
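
The abstract does not spell out the similarity index; a common choice for comparing such weighted vectors is cosine similarity, sketched here with SVs as hypothetical {term: weight} dicts (our illustration, not necessarily the authors' formula):

```python
import math

def cosine_similarity(sv_a, sv_b):
    """Similarity index between two sparse Semantic Vectors,
    each represented as a {term: weight} dict."""
    common = set(sv_a) & set(sv_b)
    dot = sum(sv_a[t] * sv_b[t] for t in common)
    norm_a = math.sqrt(sum(w * w for w in sv_a.values()))
    norm_b = math.sqrt(sum(w * w for w in sv_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

With ontology-enriched SVs, the weights of related concepts would simply be added to the dicts before comparison, leaving the similarity computation unchanged.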

Paper Nr: 81
Title:

Towards an Ontology for Ubiquitous User Modeling Interoperability

Authors:

María de Lourdes Martínez-Villaseñor and Miguel González-Mendoza

Abstract: In order to obtain a broader understanding of the user, some researchers in the user modeling community envision the need to share information from user models between applications. But gathering distributed user information from heterogeneous sources to achieve user model interoperability implies handling syntactic and semantic heterogeneity. It is also important to provide means for a ubiquitous user model to evolve over time. We present U2MIO, a dynamic ontology with a flexible structure for user modeling interoperability based on the SKOS ontology. U2MIO provides mediation-based user modeling for sharing and reusing information from heterogeneous user models. A two-tier matching strategy is proposed for the concept alignment process that permits interoperability between profile suppliers and consumers.

Paper Nr: 84
Title:

An Ontology for Social Networking Sites Interoperability

Authors:

Konstantinos Tserpes, George Papadakis, Magdalini Kardara, Athanasios Papaoikonomou, Fotis Aisopos, Emmanuel Sardis and Theodora Varvarigou

Abstract: Social Networking Sites (SNS) comprise a pool from which developers can pump functionality and data. Usable REST APIs provide access to two valuable business assets: user-generated content and the social graph. The lack of standards and the antagonistic nature of the SNSs have resulted in the use of proprietary API specifications and, in turn, data models. Each SNS uses a different access method and a different way to describe notions which are largely similar, e.g. “friends” or shared “multimedia items”. The conceptual similarity between entities “living” in the SNSs creates a remarkable opportunity: the aggregation of social functionality and data can provide the basis for a unique platform on top of which third parties can deploy new added-value services, seamlessly using the underlying SNS APIs. This paper presents an attempt to implement this concept by building a core ontology for supporting SNS interoperability.

Paper Nr: 86
Title:

Correlating and Cross-linking Knowledge Threads in Informledge System for Creating New Knowledge

Authors:

T. R. Gopalakrishnan Nair and Meenakshi Malhotra

Abstract: There has been considerable progress in computing toward mimicking the way in which the brain comprehends and structures information to retrieve meaningful knowledge. Neuronal entities are understood to hold the whole of the knowledge that a species makes use of. We intend to develop a modified knowledge-based system, termed the Informledge System (ILS), with autonomous nodes and intelligent links that integrate and structure pieces of knowledge. We conceive of every piece of knowledge as a cluster with a cross-linked and correlated structure. In this paper, we put forward a theory of nodes depicting concepts, referred to as Entity Concept States, which are in turn handled with Concept State Diagrams (CSD). This theory is based on an abstract framework provided by the concepts. The framework represents the ILS as a weighted graph, where the weights attached to the linked nodes help in knowledge retrieval by providing the direction of connectivity of the autonomous nodes during knowledge thread traversal. Here, for the first time in the development of Informledge, we apply tensor computation for creating intelligent combinatorial knowledge with cross mutation to create fresh knowledge, which appears to be fundamental to a typical thought process.

Paper Nr: 92
Title:

Conceptualization - A Novel Intensional-based Model

Authors:

Islam Ali and Hamada Ghenniwa

Abstract: A formal treatment of conceptualization is essential and a fundamental aspect of knowledge representation, ontologies and information engineering. Several approaches have been proposed based on extensional logic and the extensional reduction model. In this paper, however, we highlight several limitations of their applicability for modelling conceptualizations in dynamic and open environments, due to several strong assumptions that do not hold in such settings. To this end, we argue that an intension-based model is a natural and adequate model. We present a model based on the theory of Properties, Relations and Propositions. This description takes concepts and relations as primitives and, as such, irreducible. The proposed description is then extended to describe the world in more detail by capturing the properties of the domain concepts.

Paper Nr: 95
Title:

I3OM – An Iterative, Incremental and Interactive Approach for Ontology Navigation based on Ontology Modularization

Authors:

Ricardo Brandão, Paulo Maio and Nuno Silva

Abstract: Although ontologies are used to describe a specific domain of interest, they can grow exponentially in size, compromising their usage. Furthermore, current ontology engineering tools do not effectively support the visualization of and navigation through data/information described by large ontologies. To address these issues, we claim that the experience and results of navigating/browsing ontology-described data can profit from the modularization of the ontologies underlying the repositories. For that, we propose the I3OM process, which facilitates ontology-oriented navigation and contextualized information retrieval by combining different ontology modularization techniques into an iterative, incremental and interactive process.

Paper Nr: 96
Title:

Ontology Library - A New Approach for Storing, Searching and Discovering Ontologies

Authors:

Daniel Kotowski and Deborah A. Stacey

Abstract: The backbone of semantic web technologies is the ontology. This is a powerful structure which allows for the capture, reasoning over and storage of expert knowledge across various domains. Ideally, these structures should be developed and implemented by experts in a given domain and designed with reusability in mind. However, often due to the lack of availability and the difficulty of discovering ontologies, these structures are repeatedly recreated. Current methods for storing, discovering and sharing ontologies employ techniques similar to those used for software source code or static web pages. These are exposed to the limitations inherent in keyword-based searches, such as the ambiguity of the keywords themselves, and therefore the most relevant ontology may not be discovered. This paper examines some of the existing techniques used for storing and sharing ontologies. It offers a contrasting method, analogous to software libraries, to develop a standard to store, share, discover and distribute common ontologies.

Paper Nr: 105
Title:

UCASFUM: A Ubiquitous Context-aware Semantic Fuzzy User Modeling System

Authors:

Hilal Tarakci and Nihan Kesim Cicekli

Abstract: In this paper, we propose a ubiquitous user modeling system which illustrates different aspects of the individual’s interests and his/her current and future context. The user model is constructed by aggregating and semantically enhancing partial profiles obtained by regularly mining the user’s socially enhanced online traces. Those traces include actions performed and relationships established in social web accounts, in addition to local machine traces such as bookmarks and web history. The semantic enrichment process consists of two phases: constructing an overlay model using concepts and hierarchical information from external knowledge bases, and creating links from the constructed user model concepts to supported ontologies. The former phase outputs a semantically enhanced user model, whereas the latter enables interoperability between applications which use the proposed system for personalization. Moreover, fuzzy membership values are computed for each interest and context item in the user model. In order to model the semantically enhanced user profile and represent fuzziness values, a fuzzy hypergraph is used as the data structure. The fuzzy hypergraph representation enables the extraction of partial user profiles in requested domains, in addition to answering user modeling queries such as the degree of the user’s interest in given concepts. By extracting partial profiles for specified domains, the proposed system can be used for personalization purposes in multi-application environments.

Paper Nr: 106
Title:

Knowledge Engineering and Ontologies for Object Manipulation in Collaborative Virtual Reality

Authors:

Manolya Kavakli

Abstract: This paper describes an ontology for a collaborative engineering task. The task is to take apart an interactive 3D model in 3D space using virtual reality and to manipulate an object. The project examines a virtual environment in which two engineers can perform a number of object part manipulation tasks by controlling a Wiimote inside an immersive projection system. The interface recognizes the engineers' hand gestures, passes commands to a VR modelling package via a gesture recognition system, performs the actions on the 3D model of the object, and renders it on the immersive projection screen. We use retrospective protocol analysis for knowledge engineering and ontology building, analysing the engineers' cognitive processes.

Paper Nr: 107
Title:

Aspect-based Product Review Summarizer

Authors:

Hsiang Hui Lek and Danny C. C. Poo

Abstract: Consumers now rely on product review websites to help them decide which product to buy. These sites contain large numbers of reviews, and reading through them is tedious. In this work, we propose a product review summarizer which processes all the reviews for a product and presents them in an easy-to-read manner. The generated summaries show a list of product features, or aspects, and their corresponding ratings, allowing users to compare different products easily. Our system first uses an aspect/sentiment extractor to extract the list of aspects and their sentiment words. Sentiment classification is then performed to obtain the polarity of each aspect. Finally, these aspects are combined and assigned a rating to form the final summary. Experimental results on various domains show that our system is promising.
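
The abstract does not detail how polarities are combined into a rating; one simple scheme (our hedged illustration, not necessarily the authors' method) maps each aspect's fraction of positive sentiment mentions onto a star scale:

```python
from collections import defaultdict

def summarize(aspect_sentiments, scale=5):
    """aspect_sentiments: list of (aspect, polarity) pairs, where
    polarity is +1 (positive) or -1 (negative).
    Returns {aspect: rating} on a 1..scale star scale (hypothetical scheme)."""
    positive = defaultdict(int)
    total = defaultdict(int)
    for aspect, polarity in aspect_sentiments:
        total[aspect] += 1
        if polarity > 0:
            positive[aspect] += 1
    # Map the positive fraction in [0, 1] linearly onto [1, scale].
    return {a: round(1 + (scale - 1) * positive[a] / total[a], 1)
            for a in total}
```

For example, an aspect praised in both of its two mentions gets the maximum rating, while one criticized in its only mention gets the minimum.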

Paper Nr: 109
Title:

Datalog for Inconsistency-tolerant Knowledge Engineering

Authors:

Hendrik Decker

Abstract: Inconsistency tolerance is widely discussed and accepted in the scientific community of knowledge engineering. From a principled, theoretical point of view, however, the fundamental conflict between sound reasoning and unsound data has remained largely unresolved. The vast majority of applications that need inconsistency tolerance either do not care about a firm theoretical underpinning, recur to non-standard logics, or superficially refer to well-established classical foundations. We argue that hardly any of these paradigms will survive in the long run. We defend the position that datalog (Abiteboul et al., 1995), including integrity constraints, is a viable candidate for a sound and robust foundation of inconsistency-tolerant knowledge engineering. We underpin our argument with a propaedeutic glance at the history of issues related to inconsistency.

Paper Nr: 113
Title:

Optimization of Knowledge Availability in an Institutional Repository

Authors:

Filippo Eros Pani, Maria Ilaria Lunesu, Giulio Concas, Carlo Stara and Maria Pia Tilocca

Abstract: Institutional Repositories (IRs) based on Open Archives represent one of the main free-access tools for the results of scientific research, and their diffusion is continuously growing. In fact, more and more universities and research centers rely on IRs in order to optimize the management and diffusion of scientific work. During an industrial project aimed at the creation of the Analytic Sound Archive of Sardinia, the idea was born to create an institutional archive to solve the problems of organization and availability of information. The archive would contain a linguistically annotated corpus to preserve, enhance and support the oral traditions in the Sardinian language. The distinctive feature of this work is the proposal of a new approach for the formalization and management of knowledge using the tool DSpace (a Knowledge Management System typically used for the organization and management of text documents) to store, manage and query an electronic corpus made of audio clips. In this specific case, we worked with a group of audio recordings in a corpus and with linguistic information added to that corpus through annotations. The customization of the structures and interfaces of the chosen KMS, to ensure the availability and sharing of knowledge, is also closely tied to this research.

Paper Nr: 117
Title:

Geometric Knowledge Analysis based in Part Functional Descriptions

Authors:

João Carlos Linhares and Altamir Dias

Abstract: Parts have elementary function descriptions associated with each constructive detail. A first step toward understanding this relationship is to check whether a clear-cut syntactic standard exists for creating the function descriptions. Afterwards, it is necessary to establish a measurement method to evaluate how the functions are repeated across several mechanical parts under existing standards, and how to associate them with one or more elementary solid geometries, such as grooves, holes, rounds, chamfers and others. This correlation may initially lead to the likely feature operator in a CAD system, which is used to create solid geometries that accomplish the functions desired for the product. Thus, 3D geometric modeling in CAD can be performed from the function descriptions of the parts that make up the products' technical systems. When a group of functions with design meaning can, in some way, be associated with solid geometry, it may realize a new paradigm in mechanical design: product design through function-form transformation. This elementary function description is closely tied to the geometric structure that gives the parts their shapes. This article shows the process of identifying function descriptions and their phrasal structure, applied to the mechanical parts of an off-road Baja vehicle design.

Posters
Paper Nr: 4
Title:

Case Study: Ontology for Metadata in e-Learning

Authors:

Ana María Fermoso García, Salvador Sánchez Alonso, Encarnación Beato Gutierrez and Roberto Berjón Gallinas

Abstract: Learning objects are a fundamental concept in new educational paradigms based on e-learning. These resources are not only content but also include metadata descriptions organized into categories, often in the form of an application profile of the IEEE LOM standard. These metadata descriptions facilitate the design, search, exchange and reuse of educational contents reachable through learning object repositories. The aim of this paper is to present the case of the Organic.Edunet portal, where the semantic search capabilities implemented made it necessary to design an OWL mapping for the IEEE LOM standard. We call this effort LOM2OWL. LOM2OWL is in fact an ontology aimed at improving the search, reuse and use of learning objects stored in the Organic.Edunet Web portal and repository.

Paper Nr: 5
Title:

Lexicon based Algorithm for Domain Ontology Merging and Alignment

Authors:

Tomasz Boiński and Henryk Krawczyk

Abstract: More and more systems contain some kind of knowledge describing their field of operation. In many cases such knowledge is stored as an ontology. A need thus arises for the ability to quickly match ontologies so as to enable the interoperability of such systems. The paper presents a lexicon-based algorithm for merging and aligning OWL ontologies. The proposed similarity levels are presented and the algorithm is described. Results of tests showing the algorithm's quality are presented.
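
As a rough sketch of what one lexicon-based alignment step can look like (our illustration, not the paper's algorithm), concept labels can be paired by string similarity, with a threshold deciding which pairs count as matches:

```python
from difflib import SequenceMatcher

def label_similarity(a, b):
    """Case-insensitive string similarity between two concept labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(labels_a, labels_b, threshold=0.8):
    """Pair each label from ontology A with its most similar label
    from ontology B, keeping pairs above the threshold."""
    pairs = []
    for a in labels_a:
        best = max(labels_b, key=lambda b: label_similarity(a, b))
        score = label_similarity(a, best)
        if score >= threshold:
            pairs.append((a, best, round(score, 2)))
    return pairs
```

A real merging/alignment algorithm would additionally consult a lexicon for synonyms and use the ontology structure, not just label strings.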

Paper Nr: 15
Title:

MFO—The Federated Financial Ontology for the MONNET Project

Authors:

Hans-Ulrich Krieger, Thierry Declerck and Ashok Kumar Nedunchezhian

Abstract: This paper describes work carried out in the European project MONNET which deals in part with the extraction of company data from stock exchange pages and its representation in a semantic repository on which inferences and queries are carried out. The special focus of the paper lies on the construction of an integrated ontology MFO—the MONNET Financial Ontology—that has been constructed from several independent ontologies which are brought together by an interface specification, expressed in OWL.

Paper Nr: 25
Title:

Development of Intelligent Assistance System to Support Eco-efficient Planning

Authors:

Sarfraz Ul Haque Minhas and Ulrich Berger

Abstract: The automotive industry is facing challenges due to high mass customization and the consequent decentralization of manufacturing systems. Currently, the evaluation and optimization of the eco-efficiency of production processes is complicated due to time-consuming LCA simulations and the inexperience of production planners in making the respective decisions. This paper addresses this issue by developing an ontology-based intelligent assistance system to support planners both in the environmental assessment of the manufacturing of customized products in decentralized manufacturing networks and in decision making during production planning.

Paper Nr: 37
Title:

Modelling Agent Protocol Requirements

Authors:

Jason Heard and Rob Kremer

Abstract: Currently, there is no method for describing the requirements of a multi-agent protocol. Much work has been done on describing protocols themselves, and this work has continually shifted from low-level protocols, such as situation-action pairs, to higher-level protocols, such as conversation policies. Despite the wealth of work on methods for describing policies, there is no work on describing what a policy should do. A method to describe the what (and not the how) would be useful in the areas of automated protocol evaluation and even protocol generation. To address this gap, this paper outlines a situation model that contains all of the information needed to define the requirements of a new protocol in a declarative manner. The situation model describes the information needed to develop new protocols, try those protocols in an appropriate environment and then evaluate their performance once they have been executed. In addition, this model has a fairly simple textual representation that is designed to be easily parsed. This paper also outlines how this model could be used to generate possible protocols and to evaluate potential protocols.

Paper Nr: 39
Title:

Ontology Summarization through Simple Pruning Measures

Authors:

Isaac Lera, Carlos Juiz and Ramon Puigjaner

Abstract: This paper addresses the problem of summarizing an ontology by defining pruning measures based on OWL axioms. From a deep structural and axiomatic analysis of current ontologies, we have defined a set of basic measures for selecting important elements that has a linear computational cost.

Paper Nr: 47
Title:

A Context Sensitive Experience Feeder for Computer Aided Engineering

Authors:

Bo Song and Zuhua Jiang

Abstract: Computer-aided engineering (CAE) tries to map the properties and interactions of real-world entities to symbols and values readable by machines. Modern CAE software packages are powerful in function, but users usually need a great deal of knowledge and experience to operate them. As a kind of tacit knowledge, experiences require a gauged context in order to be fully understood and applied. To better exploit the freely written, hard-to-encode experiences available on the web, we propose in this paper a context-sensitive experience feeding mechanism which is able to recommend experiences matching the context of a given CAE task. Our method makes use of information extraction and natural language processing techniques to find experiences valuable for engineers' troubleshooting. Empirical evaluation of a prototype feeder suggests that our method is effective.

Paper Nr: 55
Title:

Machine Translation System on the Pair of Arabic / English

Authors:

Khaireddine Bacha and Mounir Zrigui

Abstract: Our work is part of the project entitled "TELA", an environment for computer-assisted learning of the Arabic language, which covers many issues related to the use of words in Arabic. This environment contains several subsystems whose purpose is to provide an important educational function by allowing the learner to discover information beyond the scope of the phrase at hand. Among these subsystems there are semantic analyzers with several multifunctional features (Arabic/English machine translation, derivation, conjugation, etc.). In this article, therefore, we focus on the design of a machine translation system for the Arabic/English language pair based on statistical models.

Paper Nr: 56
Title:

Designing a Model of Arabic Derivation, for Use in Computer Assisted Teaching

Authors:

Khaireddine Bacha and Mounir Zrigui

Abstract: This work falls within the general framework of natural language processing, particularly that of the Arabic language. The objective of this work was the construction of a multifunctional electronic dictionary, "ElMohit", oriented toward the computer-assisted learning environment "TELA". We begin by introducing the overall architecture of our generation system. We continue with a description of the derivation function and the methods used in the preparation of the pre-processing and automatic processing modules. We are interested in the generation of nominal and verbal forms: the derivation of verbs and of standard and non-standard nouns.

Paper Nr: 57
Title:

Multi-agent Systems and Ontologies Applied to New Industrial Domains - Case Study: Ornamental Plants

Authors:

Lorena Otero Cerdeira, Francisco J. Rodríguez Martínez, Tito Valencia Requejo and Loxo Lueiro Astray

Abstract: This paper describes a real solution applied to an ornamental plant selling and distribution enterprise. The platform that we propose uses intelligent agent technologies and ontologies to meet the special needs of an enterprise of this kind. We present the architecture defined, with the agents involved on both sides: the plant wholesaler and the plant producers. A description of the ontologies that these agents use to interact is also provided. In the final section, some relevant issues detected and our conclusions are presented.

Paper Nr: 66
Title:

RDF/XHTML: Ontology Editing in HTML

Authors:

Kun Ji and Lauri Carlson

Abstract: Although global, the web and its standards are not language independent. Semantic Web document standards remain skewed toward Western languages and alphabets. This causes problems for truly multilingual terminology work in the Semantic Web. Some of the problems are discussed and remedied by the proposals in this paper. In particular, we present a format for representing RDF triple sets in XHTML and software that supports multilingual editing of RDF/OWL ontologies in this format. The format was developed for and is used in TermFactory, an ontology based multilingual terminology management environment.

Paper Nr: 70
Title:

Software for Data and Knowledge Management in Winemaking Fermentations

Authors:

Pascal Neveu, Virginie Rossard, Anne Tireau, Evelyne Aguera, Marc Perez, Christian Picou and Jean-Marie Sablayrolles

Abstract: An increasing amount of data is generated by the on-line monitoring of biotechnological processes. Classical data management solutions, which have proved effective in many application domains, are not efficient at dealing with scientific data in the life sciences. We describe software for managing data from wine fermentations and the associated knowledge. The information handled in this framework relates to knowledge of real-time events occurring during fermentations. The data have been entered into a database, and we propose an organisation of this knowledge to improve efficiency, based on the use of methods and tools from the Semantic Web. A specific ontology of events (faults or enological operations) is used to automatically identify wrong on-line measurements, which clearly improves data quality and understanding.

Paper Nr: 71
Title:

A New Proposal for Learning Objects Quality Evaluation in Learning Strategies based on Ontology for Education

Authors:

Lilibeth M. González Ruiz, Jesús M. Hermida and Andrés Montoyo Guijarro

Abstract: In the world of computer science, several proposals have been developed for assessing the quality of digital objects, based on the facilities offered by current technologies. For years, researchers and specialists from both the educational and technological areas have been committed to the development of strategies that improve the quality of education. At present, another important aspect in the field of teaching and learning is the need to improve the way knowledge is gained in education, for which the use of learning strategies is a major advance in the teaching-learning process in institutions of higher education. This paper presents QEES, a proposal for evaluating the quality of the learning objects employed in learning strategies to support students during their education.

Paper Nr: 76
Title:

Combining Qualitative Spatial Reasoning and Ontological Reasoning for Supporting Robot Tasks

Authors:

Stephanie von Riegen, Lothar Hotz and Pascal Rost

Abstract: This paper presents an application of spatial and ontology reasoning technologies for mobile robot tasks. We provide a combination of the spatial reasoning calculi RCC-8 and CDC as well as their integration with OWL-based ontologies. An architecture that combines inference tools like Prolog, description logic reasoners, and complex-event processing implements our approach. We illustrate the results with a mobile robot scenario in a restaurant. In this paper, global path-finding demonstrates the use of qualitative spatial and ontological reasoning.

Paper Nr: 79
Title:

Segmentation of Review Texts by using Thesaurus and Corpus-based Word Similarity

Authors:

Yoshimi Suzuki and Fumiyo Fukumoto

Abstract: Nowadays we can consult user reviews on shopping and hotel reservation sites. However, with the exponential growth of information on the Internet, it is becoming increasingly difficult for a user to read and understand all of the potentially interesting material in a large set of reviews. In this paper, we propose a method for segmenting review texts by guests' criteria, such as service, location and facilities. Our system first extracts words which represent criteria from hotel review texts. We focus on topic markers, such as "ha" in Japanese, to extract guests' criteria. The extracted words are classified into classes of similar words. The classification is performed using the Japanese WordNet. Then, for each hotel, the text comprising all of the guest reviews is segmented into word sequences using the criteria classes. Review text segmentation is difficult because the texts are short. We therefore used the Japanese WordNet, extracted similar word pairs, and used indexes of Wikipedia. We performed text segmentation of hotel reviews. The results showed the effectiveness of our method and indicated that it can be used for review summarization by guests' criteria.

Paper Nr: 80
Title:

Ontological Framework for Integrating Environmental Issues within Sustainable Enterprise - Enhancing Enterprise Decision-making

Authors:

Edrisi Muñoz, Elisabet Capón, José Miguel Laínez, Antonio Espuña and Luis Puigjaner

Abstract: Ontologies are an excellent choice for building complex models while allowing a high level of flexibility, re-usability, usability and ease of maintenance. This work proposes the re-use of an ontological model of the integrated enterprise in order to include the environmental assessment function. Since enterprises are complex systems involving different functionalities, decision making becomes a highly challenging task, and the decision process is usually separated into several levels. Nevertheless, such levels are closely related through the sharing of data and information. Therefore, effective integration among the different hierarchical levels, by means of tools which improve information sharing and communication, may play a crucial role in enhanced enterprise operation and, consequently, in fulfilling the enterprise's goals. The ontological framework provides a common modeling framework which facilitates integration among the different decision levels and works as the mechanism for supporting information and knowledge sharing among multiple applications. The general semantic framework developed is applied to a case study comprising an enterprise supply chain network design-planning problem which considers environmental issues.

Paper Nr: 93
Title:

Developing and Maintaining an Ontology for Rehabilitation Robotics

Authors:

Zeynep Dogmus, Gizem Gezici, Volkan Patoglu and Esra Erdem

Abstract: Representing the available information about rehabilitation robots in a structured form, such as an ontology, facilitates access to various kinds of information about existing robots, and is thus important both from the point of view of rehabilitation robotics and from the point of view of physical medicine. Rehabilitation robotics researchers can learn various properties of existing robots and access the related publications to further improve the state of the art. Physical medicine experts can find information about rehabilitation robots and related publications (possibly including results of clinical studies) to better identify the right robot for a particular therapy or patient population. Therefore, considering also the advantages of ontologies and ontological reasoning, such as interoperability of various heterogeneous knowledge resources (e.g., patient databases or disease ontologies), such an ontology provides the underlying mechanisms for translational physical medicine, from bench to bed and back, and for personalized rehabilitation robotics. With these motivations, we have designed and developed the first formal rehabilitation robotics ontology, called REHABROBO-ONTO, in OWL, collaborating with experts in robotics and in physical medicine. We have also built a software system (called REHABROBO-QUERY) with an easy-to-use intelligent user interface that allows robot designers to add information about their rehabilitation robots to REHABROBO-ONTO and to modify it.

Paper Nr: 97
Title:

An Ontology-based Framework for Syndromic Surveillance Method Selection

Authors:

Gabriela Henriques and Deborah Stacey

Abstract: Syndromic surveillance is the detection of a disease outbreak or bioterrorist attack. The surveillance process includes several steps: data collection, data analysis and result interpretation. The goal of syndromic surveillance is to make a rapid and accurate diagnosis of a potential outbreak. Method types range from traditional statistical approaches to algorithms adapted from other fields. With such a variety of options, it can be difficult to select the method best suited for analysis of a given set of data. This paper focuses on developing an ontology-based framework for selecting the best-suited method(s) for data analysis, concentrating on the end-user’s perspective.

Paper Nr: 101
Title:

A Non-concept is Not a ¬Concept

Authors:

Iaakov Exman

Abstract: Objects with removed parts or functionality often also lose their identity. But there are situations in which this is not the case: identity is preserved. We refer to such objects by means of Non-concepts, where “Non-” implies partial negation and “concept” implies preserved identity. In this work, Non-concepts are defined and pluggable ontologies are proposed for their representation. These ontologies are made pluggable by sockets, a novel kind of class: abstract placeholders for removed/added parts, functionalities or identities. The space of Non-concepts has been extensively explored. Pragmatic implications of Non-concepts include the manageable design of products with a multitude of models. Non-concepts are also relevant to the formal controversy over whether composition is or is not identity. The resolution is not sharp. Identity is entangled with composition, such that identity is preserved to a certain extent, until further removal causes identity breakdown.

Paper Nr: 103
Title:

Towards an Ontology-driven Framework for Workflow Analysis

Authors:

Hlomani Hlomani and Deborah Stacey

Abstract: Workflow management and the whole field of business process management have seen a great deal of research interest. This interest has evolved from the initial quest to automate manufacturing processes to the formalization of process models. The reason for this interest can arguably be attributed to the fact that process models form the core of workflow management systems. A plethora of modelling languages and notations have been created over the years, albeit dominated by proprietary languages that have been argued to lack formal semantics. These informal languages have seen wider adoption at the expense of those termed “academic languages”, even though academic languages are believed to be more formal. This paper considers aspects of model transformation with the intention of bridging the gap between modelling and analysis. The paper proposes a semantic approach (using ontologies) to both the mapping and the transformation of business process models written in one language (source) to another (target).

Paper Nr: 111
Title:

A Linked Data Approach for Querying Heterogeneous Sources - Assisting Researchers in Finding Answers to Complex Clinical Questions

Authors:

Nikolaos Matskanis, Vassiliki Andronikou, Philippe Massonet, Kostas Mourtzoukos and Joseph Roumier

Abstract: Clinical trials for drug repositioning aim at evaluating the effectiveness and safety of existing drugs as new treatments. This involves managing and semantically correlating many interdependent parameters and details in order to clearly identify the research question of the clinical trial. This work, carried out within the PONTE (Efficient Patient Recruitment for Innovative Clinical Trials of Existing Drugs) project, aims to improve the trial design process by not only offering access to a variety of relevant data sources (including, but not limited to, drug profiles, diseases and their mechanisms, genes and past trial results) but also providing the ability to navigate through these sources, perform queries on them and intelligently fuse the available information through semantic reasoning. This article describes our approach to consuming and aggregating information from Linked Data sources in order to produce answers to the clinical researcher’s questions.

Paper Nr: 112
Title:

Most Popular Contents Requested by Users in Different Wikipedia Editions

Authors:

Antonio J. Reinoso, Juan Ortega-Valiente, Rocío Muñoz-Mansilla and Carlos León

Abstract: This paper analyzes how the most requested and most contributed contents in Wikipedia may vary significantly depending on the edition considered. The on-line encyclopedia has become a prolific research topic, mainly in aspects related to the assessment of its contents and the forecasting of its evolution. However, very little effort has been devoted to the kind of use its visitors, whether occasional or registered, make of Wikipedia. Thus, our work explores the utilization of Wikipedia through a classification of the most requested and most contributed contents in some of its editions. In this way, we will be in a position to determine which type of contents attracts the highest numbers of visits and contributions in these editions, which can be a good indicator of the use given to them by their respective communities of users. Apart from the subsequent comparison purposes, such an examination may reveal interesting phenomena, such as the transmission of tendencies across the different Wikipedia editions, as well as particular usage patterns exhibited by the corresponding communities of users.

Paper Nr: 114
Title:

ONTOTracED: A Framework to Capture and Trace Ontology Development Processes

Authors:

Marcela Vegetti, Luciana Roldán, Silvio Gonnet, Gabriela Henning and Horacio Leone

Abstract: In the last two decades, several methodologies to assist the ontology development process have been reported in the literature. However, despite important advances, there are still no computational tools supporting them. Thus, when an ontology development process ends, all that remains are the design products (e.g., competency questions, class diagrams, implementations, etc.), without an explicit representation of how they were obtained. This paper presents a framework meant to explicitly capture and trace ontology development processes (the activities carried out, the actors executing them, etc.), along with their associated products.

Paper Nr: 115
Title:

Improving Disaster Responsiveness using a Mix of Social Media and e-Government

Authors:

Asanee Kawtrakul, Intiraporn Mulasastra, Hutchatai Chanlekha, Sachit Rajbhandari, Kulapramote Prathumchai, Masahiko Nagai and Vasuthep Khunthong

Abstract: Data sharing is essential for government agencies during disaster management, as it requires highly collaborative efforts among various organizations. Recently, social media have been increasingly used during disasters for disseminating and receiving information to and from the public. By using social media for communications, the government can receive real-time data from the public and from organizations. The challenge lies in how to combine social media with government data, which is gathered from multiple sources, in multiple formats, using multiple terminologies. This paper focuses on how to manage, integrate, and verify data acquired from multiple sources. The proposed model was designed using frame-based data collection and ontology-based data integration, combined with the effective use of dynamic data from social media, with the aim of improving disaster assistance.