This page contains the additional material and results for the paper "OntView: What you See is What you Meant" by Carlos Bobed, Carlota Quintana, Eduardo Mena, Jorge Bobed and Fernando Bobillo.
OntView, OntoGraf, OWLGrEd, OWLViz and WebVOWL are all capable of loading each ontology listed in the Data Set section; however, OWLViz is unable to display any of these ontologies in their entirety.
DBpedia_3.8.owl
The DBpedia_3.8.owl ontology comprises approximately 450 classes and nearly 10,000 axioms, which qualifies it as an ontology of considerable size. The OntoGraf rendering could not be captured in its entirety as an image because the tool lacks both an export-to-image function and a zoom mechanism powerful enough to allow manual snapshotting of the full graph.
personasonto.owl
The personasonto.owl
ontology contains 53 classes and 530 axioms, making it a relatively compact ontology. Its
modest size ensures quick loading and smooth interaction across different visualization tools, and it serves as the
sole ontology supported by WebVOWL in our experiments due to its inclusion in WebVOWL's internal dataset.
pizza.owl
The pizza.owl
ontology contains 100 classes and 801 axioms, representing a mid-sized model that remains
responsive across visualization tools. Note that with OWLGrEd it was not possible to capture the ontology in its
entirety without losing the textual annotations; for this reason, the OWLGrEd rendering omits the text content in
the image.
koala.owl
The koala.owl
ontology is very small, containing only 21 classes and 71 axioms. Despite its compact size,
it clearly differentiates the various ontology elements and illustrates how each visualization tool renders those
elements.
proyectos.owl
The proyectos.owl
ontology is quite compact, with just 5 classes and 48 axioms, yet it offers a clear depiction of elements
across different visualizers, showing exactly how each tool represents them. Our aim is to compare the visual languages
used by various ontology viewers when presenting a small but semantically rich ontology, and to determine which one is
the most intuitive, concise, and clear. The underlying idea is that by examining the ontology—without needing in-depth
knowledge of the graphical conventions or color semantics—you should be able to infer the Description Logic definition
of each term.
As ontologies grow in size and complexity, understanding their full structure can become challenging. Large models often introduce an overload of information, making it easy to lose sight of the most critical concepts and relationships. To address this, OntView offers three automated summarization techniques—KCE, PageRank, and RDFRank—that extract the most relevant portions of an ontology. In addition, a manual selection function allows users to choose which nodes to display, although that feature is not covered on this page.
KCE (Key Concept Extraction) ranks concepts by combining cognitive, statistical, and topological measures, then selects the top n most relevant concepts. Currently, it only handles named concepts, so extending its metrics to incorporate anonymous classes would further enhance its coverage.
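As a rough illustration of this rank-and-select scheme (not KCE's actual measures), the sketch below combines three hypothetical per-concept scores with adjustable weights and keeps the top n; all concept names and measure values here are made up for the example:

```python
def kce_top_n(scores, n, weights=(1.0, 1.0, 1.0)):
    """Toy KCE-style selection.

    scores: {concept: (cognitive, statistical, topological)}, where the
    three numbers are stand-ins for KCE's real measures. Combines them
    with the given weights and returns the n best-ranked named concepts.
    """
    combined = {
        concept: sum(w * m for w, m in zip(weights, measures))
        for concept, measures in scores.items()
    }
    return sorted(combined, key=combined.get, reverse=True)[:n]

# Hypothetical measure values for three named concepts
scores = {
    "Pizza":     (0.9, 0.8, 0.7),
    "Food":      (0.7, 0.9, 0.9),
    "Spiciness": (0.4, 0.3, 0.5),
}
print(kce_top_n(scores, n=2))  # -> ['Food', 'Pizza']
```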
PageRank/RDFRank applies graph-centrality measures to an ontology's class taxonomy by running the PageRank algorithm over RDF triples. The two variants differ in how they treat edge direction: PageRank treats the graph as directed, while RDFRank treats it as undirected (bidirectional). Both techniques can also assess the importance of anonymous classes.
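A minimal sketch of this directed-versus-undirected distinction, assuming networkx and a toy triple list (neither taken from OntView's implementation), might look like:

```python
import networkx as nx

def rank_classes(triples, undirected=False):
    """Rank nodes by PageRank over a graph built from RDF triples.

    undirected=False -> PageRank variant (edges keep their direction).
    undirected=True  -> RDFRank variant (edges treated as bidirectional).
    """
    graph = nx.DiGraph()
    for s, p, o in triples:
        graph.add_edge(s, o, predicate=p)
    if undirected:
        graph = graph.to_undirected()
    ranks = nx.pagerank(graph)
    return sorted(ranks.items(), key=lambda kv: kv[1], reverse=True)

# Toy taxonomy: subClassOf edges point from subclass to superclass
triples = [
    ("Margherita", "subClassOf", "Pizza"),
    ("Americana",  "subClassOf", "Pizza"),
    ("Pizza",      "subClassOf", "Food"),
]
print(rank_classes(triples))                   # directed (PageRank) variant
print(rank_classes(triples, undirected=True))  # undirected (RDFRank) variant
```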
Summarization results are shown for the following ontologies: DBpedia_3.8.owl, personasonto.owl, pizza.owl, and koala.owl.
OntView lets users control how much of a node's subtree is expanded or collapsed at each step by specifying a percentage of its descendants to show or hide. That percentage is applied locally to each node's descendants, but it is set globally for the whole view. The selection logic is modular and currently supports three strategies for choosing which descendants to show or hide next.
The pizza.owl
ontology was used to demonstrate these techniques, with the Thing node
selected for applying the percentage.
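As a minimal sketch of the local-percentage rule (not OntView's actual code), the following Python snippet reveals a fixed fraction of a node's descendants per step; the ordering of the candidate list stands in for the pluggable selection strategies:

```python
import math

def next_to_show(descendants, visible, percentage):
    """Return the next batch of hidden descendants to reveal.

    The percentage is set globally for the view but applied locally:
    each step touches ceil(percentage * len(descendants)) nodes of this
    node's subtree. The order of `descendants` encodes the selection
    strategy (e.g., a relevance ranking).
    """
    hidden = [d for d in descendants if d not in visible]
    step = max(1, math.ceil(percentage * len(descendants)))
    return hidden[:step]

# 20% of a node's 8 descendants -> 2 nodes revealed per step
descendants = [f"C{i}" for i in range(8)]
visible = set()
while len(visible) < len(descendants):
    batch = next_to_show(descendants, visible, percentage=0.20)
    visible.update(batch)
    print(batch)  # ['C0', 'C1'], then ['C2', 'C3'], ...
```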