Logic List Mailing Archive

Seminar on Reasoning over ontologies and data

27 Feb 2013
Amsterdam, The Netherlands

We cordially invite you to a seminar on reasoning over ontologies and 
data, organized at the VU University Amsterdam. The seminar is held in the 
context of the PhD defence of Szymon Klarman from the Knowledge 
Representation and Reasoning group (http://krr.cs.vu.nl/).

Time: Wednesday, February 27, 10:15 - 13:00
Place: VU University Amsterdam, W&N building, room WN-S631
(De Boelelaan 1081a, http://alturl.com/jsb9n).

Schedule:
10:15  Welcome & Coffee

10:30 - 11:15 Shape and Evolve Living knowledge - a case on procedural and
ontological knowledge.
Chiara Ghidini, Data and Knowledge Management group, FBK Trento.

11:20 - 12:05  Non-Uniform Data Complexity in Ontology-Based Data Access with
Description Logics.
Carsten Lutz, Theory of Artificial Intelligence group, University of Bremen.

12:10 - 12:55 Probabilistic Reasoning for Web-Scale Information Extraction.
Heiner Stuckenschmidt, Data and Web Science group, University of Mannheim.

13:00 End


Abstracts:

Chiara Ghidini: Shape and Evolve Living knowledge - a case on procedural and
ontological knowledge

The ability to effectively manage business processes is fundamental to ensure
the efficiency of complex organizations, and a key step towards the achievement
of this ability is the explicit representation of static and dynamic aspects of
the organization in the form of conceptual models.
Shaping and maintaining these conceptual representations, and representing them
in appropriate logical formalisms, still present many open challenges. The aim
of the newly launched SHELL (Shape and Evolve Living knowledge) project is to
tackle key interdisciplinary challenges in the fields of (i) the shaping of
conceptual models of an organisation, (ii) their representation in appropriate
formalisms, and (iii) their co-evolution and adaptation w.r.t. data.
In this talk I will provide: (i) an overview of the SHELL project, (ii) an
illustration of our approach to representing and verifying structural
properties of integrated BPMN business processes and OWL ontologies, and
(iii) hints at ongoing work on connecting this representation with data
coming from real process executions.
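
To give a flavour of what checking a structural property of a concept-annotated
process can look like, here is a minimal, self-contained Python sketch. It is
not the SHELL implementation; the toy ontology, the process, and the property
checked are all assumptions made up for this illustration.

# Toy check of one structural property of an annotated process:
# every activity must be annotated with a declared concept, and no
# activity may carry two concepts the ontology declares disjoint.
# (Hypothetical example only, not the SHELL tooling.)

CONCEPTS = {"ReceiveOrder", "ApproveOrder", "RejectOrder", "ShipOrder"}
DISJOINT = [{"ApproveOrder", "RejectOrder"}]

# A flattened BPMN-like process: activity id -> annotating concepts.
process = {
    "a1": {"ReceiveOrder"},
    "a2": {"ApproveOrder"},
    "a3": {"ApproveOrder", "RejectOrder"},  # violates disjointness
}

def violations(process):
    """Yield human-readable descriptions of property violations."""
    for act, labels in sorted(process.items()):
        for c in sorted(labels - CONCEPTS):
            yield f"{act}: concept {c} is not declared in the ontology"
        for pair in DISJOINT:
            if pair <= labels:
                yield f"{act}: annotated with disjoint concepts {sorted(pair)}"

for v in violations(process):
    print(v)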


Carsten Lutz: Non-Uniform Data Complexity in Ontology-Based Data Access with
Description Logics

The idea of ontology-based data access (OBDA) is that an ontology (a logical
theory) gives a semantics to the predicates used in a database, thus allowing
more complete answers to queries, enriching the vocabulary available for
querying, and mediating between data sources with different vocabularies. In
this presentation, I will discuss OBDA with ontologies formulated in description
logics (DLs) and advocate a novel approach to studying the data complexity of
query answering in this context. The approach is non-uniform in the sense that
individual ontologies are considered instead of all ontologies that can be
formulated in a given DL. It allows us to ask rather fine-grained questions
about the data complexity of DLs, such as: given a DL L, how can one
characterize the ontologies for which query answering is in PTime or
FO-rewritable? Is there a dichotomy between being in PTime and being coNP-hard?
We provide several answers to such questions, some of which are based on a new
connection between query answering w.r.t. DL ontologies and constraint
satisfaction problems (CSPs) that allows us to transfer results from CSPs to
DLs. We also identify a class of ontologies within the expressive DL ALCFI that
enjoys PTime data complexity; the new class strictly extends the Horn fragment
of ALCFI, previously the largest fragment of ALCFI known to be tractable.
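
To make the OBDA idea concrete, here is a standard textbook-style example (not
taken from the talk itself; the concept names are made up for illustration).
Take the ontology and data

  \mathcal{T} = \{\, \mathit{Professor} \sqsubseteq \mathit{Teacher} \,\}, \qquad
  \mathcal{A} = \{\, \mathit{Professor}(\mathit{ann}),\ \mathit{Teacher}(\mathit{bob}) \,\}.

For the query q(x) = Teacher(x), the certain answers over (T, A) are both ann
and bob, although Teacher(ann) is not in the data: it is entailed by the
ontology. This particular ontology is FO-rewritable, since q can be rewritten
into the first-order query

  q_{\mathcal{T}}(x) \;=\; \mathit{Teacher}(x) \,\lor\, \mathit{Professor}(x)

evaluated directly over the data, so query answering here has the same data
complexity (in AC^0, hence in particular in PTime) as plain database querying.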


Heiner Stuckenschmidt: Probabilistic Reasoning for Web-Scale Information
Extraction

Extracted statements are inherently uncertain, so most current information
extraction systems provide a degree of confidence associated with each
extracted statement: the higher this numerical value, the more likely it is
a priori that the statement is indeed correct. While probabilistic knowledge
bases provide a natural representational
framework for this type of problem, probabilistic inference poses a
computationally challenging problem. In our work, we want to distribute a
sampling-based inference algorithm whose input is (a) a large set of statements
with confidence values and (b) existing background knowledge, and whose output
is a set of statements with a posteriori probabilities. We propose the
development and implementation of a distributed inference algorithm that has two
separate processes running on the Hadoop platform. The first process constructs
hypergraphs modeling the statements and their conflicts given known background
knowledge. The second process runs Markov chains that sample consistent sets of
statements from the conflict hypergraph, ultimately computing the a posteriori
probabilities of all extracted statements.
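
As a single-machine illustration of the second process, here is a minimal
Python sketch that samples consistent statement sets from a conflict
hypergraph with a Metropolis-style Markov chain and estimates a posteriori
probabilities. It is only a sketch under simplified assumptions: the
statements, confidences, and conflicts are invented, and the actual system is
meant to run distributed on the Hadoop platform.

import random

# Extracted statements with extractor confidences in (0, 1).
confidence = {
    "bornIn(A,Paris)": 0.9,
    "bornIn(A,Rome)": 0.6,
    "cityIn(Paris,France)": 0.8,
}

# Conflict hyperedges: sets of statements that cannot all hold at once
# (here, background knowledge that a person has a single birthplace).
conflicts = [{"bornIn(A,Paris)", "bornIn(A,Rome)"}]

def consistent(selected):
    """A statement set is consistent if no hyperedge is fully selected."""
    return all(not edge <= selected for edge in conflicts)

def sample_marginals(steps=100_000, seed=0):
    """Estimate a posteriori probabilities by MCMC over consistent sets."""
    rng = random.Random(seed)
    statements = list(confidence)
    state = set()                      # the empty set is always consistent
    counts = {s: 0 for s in statements}
    for _ in range(steps):
        s = rng.choice(statements)     # propose flipping one statement
        proposal = state ^ {s}
        if consistent(proposal):
            # Metropolis acceptance: the target weight of a consistent set
            # S is the product over s in S of conf(s) / (1 - conf(s)).
            odds = confidence[s] / (1.0 - confidence[s])
            ratio = odds if s in proposal else 1.0 / odds
            if rng.random() < min(1.0, ratio):
                state = proposal
        for t in state:
            counts[t] += 1
    return {s: counts[s] / steps for s in statements}

print(sample_marginals())

Note how the conflict structure enters only through the rejection of
inconsistent proposals, so the two birthplace statements suppress each other
in the resulting marginals.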

--
Frank.van.Harmelen@cs.vu.nl

http://www.cs.vu.nl/~frankh
Department of Computer Science &
The Network Institute
VU University, de Boelelaan 1081a, 1081HV Amsterdam, The Netherlands
tel (+31)-20-598 7731/7718