Training:PhD Course on Computational Ontologies @ University of Bologna 2011/Ontology Testing:Method-based testing

Method 1: Unit testing through SPARQL queries

Parts of the ontology requirements are usually expressed as competency questions (CQs), i.e. natural-language questions that the ontology should be able to answer. One way to let the ontology answer such questions in practice is to reformulate them as SPARQL queries that retrieve the appropriate facts from the knowledge base. This also introduces a way of testing your ontology module!

Assume that your module attempts to solve two CQs, CQ1 and CQ2. It should then be possible to formulate each of them as a SPARQL query, Q1 and Q2 respectively, over the ontology module, such that the retrieved instances and/or literals constitute the answers to the CQs. Q1 and Q2 can be considered unit tests for the module, since they verify that the requirements CQ1 and CQ2 are actually met.
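
For illustration, suppose (with invented names that are not part of any actual module) that CQ1 is "Which members does a given collection contain?" over a module that defines the class Collection and the property hasMember. Using Python with the rdflib library as an assumed stand-in for a SPARQL plugin, Q1 could then be sketched as:

```python
from rdflib import Graph

# Hypothetical CQ1, "Which members does a given collection contain?",
# reformulated as a SPARQL query. The namespace, class, and property
# are invented for this example.
CQ1_QUERY = """
PREFIX ex: <http://www.example.org/module#>
SELECT ?collection ?member
WHERE {
    ?collection a ex:Collection ;
                ex:hasMember ?member .
}
"""

g = Graph()
g.parse("module.owl", format="xml")   # RDF/XML serialization assumed
for collection, member in g.query(CQ1_QUERY):
    print(collection, member)
```

The set of rows returned by Q1 is exactly what step 4) of the procedure below asks you to predict in advance.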

This method of testing contains the following steps:

  1. Retrieve all the CQs that are realized by the module in question.
  2. For each CQ, reformulate it as a SPARQL query over the ontology module, using the classes, properties, and instances that the module provides.
  3. Add test data to your module, by creating a new "test case"-module that imports the module to be tested and adds facts to it. The test data can come from the initial customer story, it can be provided by a domain expert, or it has to be "invented" by the developer. Make sure the test data covers the classes and properties that are present in the SPARQL queries from step 2).
  4. For each SPARQL query developed in step 2), decide, based on the test data added in step 3), what a successful test should return, i.e. what instances and/or literal values each SPARQL query should retrieve.
  5. Run the SPARQL queries over your "test case"-module and record the results (a minimal sketch of steps 5 and 6 follows this list).
  6. Compare the actual results to the expected ones, and decide if the test was successful or not.
  7. For any unsuccessful tests, analyze the reason for the failure by inspecting the ontology. Document the problem, so that it can be solved by the ontology developers.
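
A minimal sketch of steps 5) and 6), again with rdflib and the invented CQ1 from above; the file name, namespace, and expected rows are assumptions for the example:

```python
from rdflib import Graph, URIRef

EX = "http://www.example.org/module#"            # invented namespace

CQ1_QUERY = """
PREFIX ex: <http://www.example.org/module#>
SELECT ?collection ?member
WHERE { ?collection a ex:Collection ; ex:hasMember ?member . }
"""

g = Graph()
g.parse("testcase-module.owl", format="xml")     # the "test case"-module

# Step 4): the rows the test data was designed to produce.
expected = {
    (URIRef(EX + "collection1"), URIRef(EX + "item1")),
    (URIRef(EX + "collection1"), URIRef(EX + "item2")),
}

# Step 5): run the query and record the actual results.
actual = {tuple(row) for row in g.query(CQ1_QUERY)}

# Step 6): compare actual and expected results.
if actual == expected:
    print("CQ1: test successful")
else:
    print("CQ1: test FAILED")
    print("  missing rows:   ", expected - actual)
    print("  unexpected rows:", actual - expected)
```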


Task:

Download the following ontology module xxx.owl, and put it in your NeOn Toolkit workspace folder. Access the ontology through the NeOn Toolkit. Start the SPARQL plugin. The requirements for this module are the following:

  • ...
  • ...
  • ...

Now, use the method described above to test the ontology module. While you are working, document what you are doing, e.g. whether you are following the steps exactly or not, how you are performing the task in practice, how long it takes, and what kinds of errors you are able to find. When you feel satisfied with the testing, please answer the questionnaire for method 1.

Method 2: Performing inferences

Although CQs express what information the ontology should be able to provide, they do not say how this information is produced, i.e. whether it is input into the ontology explicitly or produced through some inference mechanism. Therefore, an ontology can have additional requirements, explicit or implicit, describing desired inferences that the ontology should support. For instance, in an ontology about people we may define the class "parent" and say that any person who has at least one child is a parent. If we do not expect the information about being a parent to be entered explicitly into the ontology's knowledge base, we instead expect it to be derived from the presence of "hasChild" relations. This is a reasoning requirement: the ontology must include the appropriate axioms to make this inference.
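
A minimal sketch of the parent example, using the owlready2 Python library as an assumed stand-in for the course's reasoning tooling (the IRI and individual names are invented):

```python
from owlready2 import Thing, ObjectProperty, get_ontology, sync_reasoner

onto = get_ontology("http://www.example.org/people.owl")   # invented IRI

with onto:
    class Person(Thing):
        pass
    class hasChild(ObjectProperty):
        domain = [Person]
        range = [Person]
    # "Any person who has at least one child is a parent", expressed as an
    # equivalent-class axiom so that membership can be inferred rather
    # than asserted.
    class Parent(Person):
        equivalent_to = [Person & hasChild.some(Person)]

    anna = Person("anna")
    bob = Person("bob")
    anna.hasChild = [bob]

with onto:
    sync_reasoner()            # runs the bundled HermiT reasoner (needs Java)

print(list(onto.Parent.instances()))   # expected: [people.anna]
```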

To test the desired inferences of the ontology module, follow these steps:

  1. List the explicit reasoning requirements that you can find among the requirements of the module, if any.
  2. Determine if there are any implicit reasoning requirements, and list those as well. This may include some interaction with the customer or a domain expert, if the requirements are ambiguous.
  3. For each such requirement (in the joint list from steps 1) and 2) above), list the input and expected output, e.g. what facts need to be present before actually running the inference engine, and what facts should be produced by the inference procedure.
  4. Create a new "test case"-module, by importing the module to be tested and adding facts to it. For each reasoning requirement in the joint list from steps 1) and 2), add some facts that can be considered the "input" to the reasoning process (according to the result of step 3). Add both facts that will produce the inference, and facts that will not.
  5. List the expected inferences that should be made based on your test data.
  6. Run an OWL reasoner over the "test case"-module and record the results (a minimal sketch of steps 6 and 7 follows this list).
  7. Compare the actual results with the expected ones (from step 5), and decide if the test of each reasoning requirement was successful or not. A test is successful if it produces the expected inferences and no harmful side effects, i.e. no additional inferences that add undesirable or incorrect facts to the knowledge base.
  8. For any unsuccessful tests, analyze the reason for the failure by inspecting the ontology. Document the problem, so that it can be solved by the ontology developers.
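
A minimal sketch of steps 6) and 7), assuming owlready2, an invented file name, and the parent example above as the reasoning requirement under test:

```python
from owlready2 import get_ontology, sync_reasoner

# The test-case module is assumed to import the module under test and to add
# the input facts from step 4): anna hasChild bob (should make anna a Parent)
# and carol with no children (should not).
onto = get_ontology("file:///path/to/testcase-module.owl").load()

with onto:
    sync_reasoner()                              # step 6): run the reasoner

expected_parents = {onto.anna}                   # step 5): expected inferences
# Assumes Parent resolves via this module's namespace; adjust if the class
# lives in the imported module.
actual_parents = set(onto.Parent.instances())

# Step 7): the test succeeds only if exactly the expected facts were inferred;
# any extra members of Parent would be harmful side effects.
if actual_parents == expected_parents:
    print("Reasoning requirement: test successful")
else:
    print("Reasoning requirement: test FAILED")
    print("  missing inferences:   ", expected_parents - actual_parents)
    print("  unexpected inferences:", actual_parents - expected_parents)
```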


Task:

Download the following ontology module xxx.owl, and put it in your NeOn Toolkit workspace folder. Access the ontology through the NeOn Toolkit. Start the reasoning plugin. The requirements for this module are the following:

  • ...
  • ...
  • ...

Now, use the method described above to test the ontology module. While you are working, document what you are doing, e.g. whether you are following the steps exactly or not, how you are performing the task in practice, how long it takes, and what kinds of errors you are able to find. When you feel satisfied with the testing, please answer the questionnaire for method 2.

Method 3: Performing "stress tests"

Although the ability to perform correct inferences, and to provide the resulting information in response to queries, allows us to see that the ontology actually realizes its requirements, another important characteristic of an ontology is that it admits as few erroneous facts and/or inferences as possible. A high-quality ontology allows exactly the desired inferences and queries, while producing no irrelevant or erroneous side effects. It may also be desirable to be able to check input data against constraints and business rules.

This category of testing can be compared to software testing, where a system is typically fed random data, or boundary data, i.e. the extremes of value ranges, in order to check its robustness and its ability to handle unexpected input. When dealing with ontologies, we do not expect error messages and recovery strategies, as for software, but rather that erroneous facts and data are detected in the first place. One way to detect such problems is to use the consistency-checking facilities of a reasoning engine. A high-quality ontology makes it possible for the reasoner to detect inconsistencies when inappropriate or erroneous facts are entered.

For instance, consider a user model ontology with the classes "female user" and "male user", where this information is going to be collected from a form with a radio button. The classes should be disjoint: since no user can select both alternatives from the form, any given user can only be an instance of one class. After detecting such implicit requirements, or common-sense constraints, they can be tested by entering obviously inconsistent facts and checking that the reasoner is able to detect the inconsistency, as in the sketch below.
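
A sketch of this disjointness test, again with owlready2 and invented names:

```python
from owlready2 import (Thing, AllDisjoint, get_ontology, sync_reasoner,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("http://www.example.org/usermodel.owl")   # invented IRI

with onto:
    class User(Thing):
        pass
    class FemaleUser(User):
        pass
    class MaleUser(User):
        pass
    AllDisjoint([FemaleUser, MaleUser])   # no user may belong to both classes

    # Deliberately inconsistent test input: one individual in both classes.
    u = FemaleUser("user1")
    u.is_a.append(MaleUser)

try:
    with onto:
        sync_reasoner()
    print("Test FAILED: the inconsistency went undetected")
except OwlReadyInconsistentOntologyError:
    print("Test successful: the reasoner detected the inconsistency")
```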

To test this ability of the ontology, the following steps can be used:

  1. List the explicit requirements of the ontology, i.e. CQs, reasoning requirements and other constraints that have been explicitly expressed.
  2. Determine if there are any implicit requirements, in the form of additional constraints that can be derived from the context but were not previously expressed as requirements.
  3. For each of the requirements, determine if there is a range of values that constrains the acceptable input or output that should be allowed by the ontology. For each constraint, determine at least one case (literal value or fact) that lies outside of that range, and what response you would expect from the reasoner given that input.
  4. Create a new "test case"-module, by importing the module to be tested and adding facts to it (see the next steps).
  5. Add each of the cases listed in step 3) in turn to the "test case"-module, and for each one, run the reasoner and record the result (a minimal sketch of steps 5 and 6 follows this list).
  6. For each run in step 5), check the result against the expected result determined in step 3), and decide if the test was successful or not. A test is successful if it produces the expected inferences, e.g. an inconsistency, and no harmful side effects.
  7. For any unsuccessful tests, analyze the reason for the failure by inspecting the ontology. Document the problem, so that it can be solved by the ontology developers.
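
A minimal sketch of steps 5) and 6), assuming owlready2 and an invented test-case module; each case is injected into a fresh copy of the module so that the test runs do not interact:

```python
from owlready2 import World, sync_reasoner, OwlReadyInconsistentOntologyError

def both_genders(onto):
    # Invented boundary case: a user asserted in two disjoint classes.
    # Assumes FemaleUser/MaleUser resolve in the test-case module's namespace.
    u = onto.FemaleUser("stress_user")
    u.is_a.append(onto.MaleUser)

# (description, function asserting the bad fact, inconsistency expected?)
CASES = [
    ("user in two disjoint classes", both_genders, True),
]

for description, assert_bad_fact, expect_inconsistency in CASES:
    world = World()                       # fresh world, so cases don't interact
    onto = world.get_ontology("file:///path/to/testcase-module.owl").load()
    with onto:
        assert_bad_fact(onto)             # step 5): inject one case
    try:
        sync_reasoner(world)              # reason over this world only
        detected = False
    except OwlReadyInconsistentOntologyError:
        detected = True
    # Step 6): compare observed behaviour with the expectation from step 3).
    status = "successful" if detected == expect_inconsistency else "FAILED"
    print(f"{description}: test {status}")
```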

Task:

Download the following ontology module xxx.owl, and put it in your NeOn Toolkit workspace folder. Access the ontology through the NeOn Toolkit. Start the reasoning plugin. The requirements for this module are the following:

  • ...
  • ...
  • ...

Now, use the method described above to test the ontology module. While you are working, document what you are doing, e.g. whether you are following the steps exactly or not, how you are performing the task in practice, how long it takes, and what kinds of errors you are able to find. When you feel satisfied with the testing, please answer the questionnaire for method 3.
