

Prof. Dr. S. Decker
RWTH Aachen
Informatik 5
Ahornstr. 55
D-52056 Aachen
Tel +49/241/8021501
Fax +49/241/8022321

FactRunner: Fact extraction over Wikipedia

Year 2013

Wikipedia's increasing role as a source of human-readable knowledge is evident: it contains an enormous amount of high-quality information written in natural language by human authors. Querying this information with traditional keyword-based approaches, however, often requires a time-consuming, iterative exploration of the document collection to find the information of interest. A structured representation of information and queries would therefore allow the relevant information to be queried directly. An important challenge in this context is the extraction of structured information from unstructured knowledge bases, which is addressed by Information Extraction (IE) systems. These systems, however, struggle with the complexity of natural language and frequently produce unsatisfactory results. In addition to plain natural language text, Wikipedia contains links between documents, which directly connect a term in one document to another document. In our approach to fact extraction from Wikipedia, we treat these links as an important indicator of the relevance of the linked information. Our proposed system, FactRunner, therefore focuses on extracting structured information from sentences containing such links. We show that a natural language parser combined with Wikipedia markup can be exploited to extract facts in the form of triple statements with high accuracy.
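The extraction idea described above, turning link-bearing sentences into triple statements, can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: FactRunner relies on a full natural language parse, whereas this sketch substitutes a naive regex pattern, and the `[[target|label]]` link syntax is assumed from standard Wikipedia markup.

```python
import re

# Matches a Wikipedia-style link [[target]] or [[target|label]],
# capturing only the link target.
LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def extract_triple(sentence):
    """Extract a (subject, predicate, object) triple from a sentence whose
    subject and object are wiki links; return None when the pattern fails."""
    matches = list(LINK.finditer(sentence))
    if len(matches) < 2:
        return None  # the approach targets sentences containing links
    # Treat the first and last linked terms as subject and object, and the
    # text between them as the predicate.
    subject = matches[0].group(1)
    obj = matches[-1].group(1)
    predicate = sentence[matches[0].end():matches[-1].start()].strip()
    return (subject, predicate, obj)
```

For a sentence such as `[[Albert Einstein]] was born in [[Ulm]].`, this yields the triple `("Albert Einstein", "was born in", "Ulm")`; a real parser would instead identify the grammatical subject, verb phrase, and object before mapping them onto the linked terms.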


Proceedings of the 9th International Conference on Web Information Systems and Technologies (WEBIST), Aachen, Germany


