ENQOIR 2009

International Workshop on Aspects in Evaluating Holistic Quality of Ontology-based Information Retrieval
Subevent of APWeb-WAIM 2009
Start: April 1, 2009
End: April 4, 2009
Location
City: Suzhou
Country: China
Important dates
Submissions due: January 11, 2009
Notification: February 2, 2009
Camera ready due: February 20, 2009
Event in series ENQOIR

The ENQOIR 2009 workshop will be held on April 1, 2009, in conjunction with the Joint International Conferences on Asia-Pacific Web Conference and Web-Age Information Management (APWeb-WAIM) in Suzhou, China. The joint APWeb-WAIM 2009 conferences take place immediately after IEEE ICDE 2009, held in Shanghai, China.

The ENQOIR workshop aims to deepen understanding of, and disseminate knowledge on, advances in the evaluation and application of ontology-based information retrieval (ObIR). The workshop focuses on the overlap between three evaluation aspects of ObIR: evaluation of information retrieval, evaluation of the impact of ontology quality on ObIR results, and evaluation of user interaction complexity. The main objective is to contribute to the optimization of ObIR by systematizing the existing body of knowledge on ObIR and defining a set of metrics for the evaluation of ontology-based search. The long-term goal of the workshop is to establish a forum to analyze and move towards a holistic method for evaluating ontology-based information retrieval systems.

Scope[edit]

In recent years, a significant research effort has been devoted to ontology-based information retrieval (ObIR). Progress in this area offers a promising prospect of improving the performance of current information retrieval (IR) systems, and the sparse evaluations of ObIR tools that exist report improvements over traditional IR systems. However, the results do not indicate whether these improvements are optimal, which makes it difficult to benchmark different ObIR systems. Moreover, the majority of IR evaluation methods are based mainly on the relevance of the retrieved information, while the additional sophistication of ObIR tools adds complexity to the user interaction needed to reach the improved results. Standard IR metrics such as recall and precision therefore do not suffice on their own to measure user satisfaction, because of the complexity and effort required to use ObIR systems. We need to investigate which ontology properties can further enhance IR, assess whether such improvement comes at the cost of interaction simplicity and user satisfaction, and so on.
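
For reference, precision and recall are simple set-based ratios over the retrieved and relevant document sets. The short Python sketch below is illustrative only; the function name and example data are ours, not part of any ObIR benchmark:

    # Minimal sketch of the standard IR metrics discussed above.
    def precision_recall_f1(retrieved, relevant):
        """Compute precision, recall and F1 for a single query.

        retrieved -- set of document ids returned by the system
        relevant  -- set of document ids judged relevant
        """
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Example: 10 documents retrieved, 8 relevant in total, 6 of them found.
    retrieved = {f"d{i}" for i in range(10)}
    relevant = {f"d{i}" for i in range(4, 12)}
    p, r, f = precision_recall_f1(retrieved, relevant)
    print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
    # precision=0.60 recall=0.75 f1=0.67

Note that nothing in these formulas reflects how many interaction steps a user needed to obtain the result set, which is precisely the gap identified above.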

Furthermore, evaluation methods based on recall and precision do not indicate the causes of variation between different retrieval results. Many other factors influence the performance of ontology-based information retrieval, such as query quality, ontology quality, the complexity of user interaction, and the difficulty of a search topic with respect to the retrieval, indexing, searching, and ranking methods used. A detailed analysis of how these factors and their interactions affect the retrieval process can help to dramatically improve retrieval methods and processes.
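
As a hypothetical illustration of such an analysis, one can record a per-query effectiveness score alongside candidate explanatory factors and compare group averages. In the Python sketch below, the factor (query length) and all numbers are invented:

    # Group per-query scores by a candidate factor to expose variation
    # that a single averaged figure would hide. All data are invented.
    from collections import defaultdict
    from statistics import mean

    # (query id, query length in terms, average precision)
    runs = [
        ("q1", 1, 0.35), ("q2", 1, 0.40), ("q3", 3, 0.62),
        ("q4", 3, 0.58), ("q5", 5, 0.71), ("q6", 5, 0.66),
    ]

    by_length = defaultdict(list)
    for _, length, ap in runs:
        by_length[length].append(ap)

    for length in sorted(by_length):
        print(f"query length {length}: mean AP = {mean(by_length[length]):.2f}")
    # In this toy data, mean AP rises with query length, flagging a
    # factor worth isolating before comparing whole systems.

The same grouping applies to the other factors listed above, such as ontology quality scores or counts of user interaction steps.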

On the other hand, an ontology's ability to capture the content of the universe of discourse at an appropriate level of granularity and precision, and to offer the application correct and understandable information, is also important. A substantial body of work already exists in the field of ontology quality assessment. However, most ontology evaluation methods are generic quality evaluation frameworks that do not take the ontology's application into account. There is therefore a need for task- and scenario-based quality assessment methods that, in this particular case, would target and optimize ontology quality for use in information retrieval systems.

In order to promote more efficient and effective use of ontologies in IR, there is a need to analyse ontology quality and value-added aspects for this domain, summarize use cases, and identify best practices. Current research has raised several issues, such as the workload required for annotation, scalability, and the balance between expressive power and reasoning capability. A holistic evaluation approach should assess performance from both technological and economic viewpoints. The aspect of value creation by semantics-based systems is important for demonstrating that the benefits of the new technology outweigh its cost.

The purpose of this workshop is to bring together researchers, developers, and practitioners to discuss experiences and lessons learned, identify problems solved and caused, synergize different views, analyse the interplay between ontology quality and IR performance, and brainstorm future research and development directions. In particular, we strongly encourage submissions dealing with ontology quality aspects and their impact on IR results, evaluation of the usability of ObIR systems, analysis of user behaviour, new evaluation methods enabling thorough and fine-grained analysis of the technological and financial performance of ObIR, and related topics.


Topics of interest include (but are not limited to)[edit]

Evaluation of Ontology-based Information Retrieval

  • Information retrieval evaluation
  • Assessment of annotation quality/labour-load
  • Evaluation and benchmarking techniques and datasets
  • Quantitative / qualitative evaluation methods
  • Cost/utility ratio

Ontology quality aspects in Information Retrieval

  • Ontology quality evaluation
  • Ontology utility
  • Ontology maintenance
  • Quantitative / qualitative evaluation methods

User acceptance of semantic technology

  • Usability evaluation
  • Quantitative / qualitative evaluation methods
  • Evaluation of human-computer interaction


Submission Guidelines & Publication[edit]

We invite submissions of two types: regular papers and research-in-progress papers. Papers are restricted to a maximum length of 12 pages (including figures, references, and appendices). Submissions must conform to Springer's LNCS format (see: http://www.springer.com/computer/lncs?SGWID=0-164-7-72376-0). All accepted papers will be published as post-proceedings in a combined APWeb-WAIM'09 workshops volume of the Lecture Notes in Computer Science series by Springer. Please submit papers in PDF format.

Papers will be subject to a double-blind review by the Program Committee: both reviewers and authors remain anonymous throughout the process, and the text of a submitted paper should not reveal the identity of its authors.

Submissions for the workshop are handled via EasyChair. Use the following link to access the submission system: http://www.easychair.org/conferences/?conf=enqoir2009

Extended versions of the best papers will be considered for publication in a regular issue of ACM JDIQ (ISSN: 1936-1955). Extended versions of other papers will be considered for publication in a special issue on Evaluation Aspects of Semantic Search Applications of the International Journal of Metadata, Semantics and Ontologies (Inderscience, ISSN: 1744-2621, see: http://www.inderscience.com/browse/index.php?journalCODE=ijmso). Please note that the extensions must be significant, amounting to at least 30% difference from the papers published in the workshop proceedings. The preliminary deadline for submission of extended papers is May 2009. More details will follow later.

Program Committee[edit]

Workshop Chairs


Program Committee

Contact[edit]

Contact organizers by email: enqoir09_AT_gmail.com