Predicting the Understandability of OWL Inferences

Published on Jul 08, 2013 · 2671 views

In this paper, we describe a method for predicting the understandability level of inferences with OWL. Specifically, we present a probabilistic model for measuring the understandability of a multiple s…
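The chapter list below suggests the model assigns each deduction rule an empirical "facility index" (from a user study) and combines the indexes of the steps in a proof tree into an overall score. The following is a minimal sketch of that idea; the names, the sample index values, and the multiplicative combination are assumptions for illustration, not the paper's exact model.

```python
# Hypothetical sketch: combine per-rule "facility indexes" (assumed here to be
# the fraction of study participants who followed each deduction rule) into an
# understandability score for a whole proof tree. The multiplicative
# combination and all numbers are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ProofNode:
    rule: str                                   # deduction rule applied at this step
    children: list = field(default_factory=list)  # sub-proofs of the premises

# Assumed facility indexes for three rules named in the talk's chapter list.
FACILITY_INDEX = {
    "subclass-transitivity": 0.90,
    "incompatible-superclasses": 0.55,
    "trivial-satisfaction": 0.35,
}

def understandability(tree: ProofNode) -> float:
    """Multiply the facility indexes of every step in the proof tree."""
    score = FACILITY_INDEX[tree.rule]
    for child in tree.children:
        score *= understandability(child)
    return score

# A two-step proof: incompatible superclasses derived via subclass transitivity.
proof = ProofNode("subclass-transitivity",
                  [ProofNode("incompatible-superclasses")])
print(round(understandability(proof), 3))  # 0.90 * 0.55 = 0.495
```

Under this assumed scheme, longer proofs and proofs using harder rules both score lower, which matches the intuition that each extra inference step costs the reader some comprehension.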

Chapter list

Predicting the Understandability of OWL Inferences (00:00)
Semantic Web Authoring Tool (00:01)
Ontology editor (00:34)
Explanation (01:22)
Debugging task (01:53)
General approach (02:41)
Understandable justification? (03:19)
Justification not enough (03:38)
Proof tree (04:20)
Deduction rules (04:36)
Rule 12: Subclass transitivity (04:56)
Rule 35: Incompatible superclasses (05:14)
Multiple explanations (05:33)
Specific approach (05:49)
Architecture (06:18)
Collecting deduction rules (06:37)
Understandability of rules (07:13)
Sample question (07:38)
Results from study (08:09)
Rule 12: Subclass transitivity (08:35)
Rule 35: Incompatible superclasses (08:58)
Rule 51: Trivial satisfaction (09:07)
Generating proof trees (09:26)
Understandability of proof trees (10:05)
Combining facility indexes (10:54)
Survey design (11:33)
Participants (12:20)
Sample question (12:45)
Response bias (13:19)
Testing the main hypothesis (14:05)
Obtained vs Predicted (14:53)
Alternative models (15:58)
Explanation tool (17:08)
Reasoner checks (17:42)
Explanation editor (17:49)
Justification (18:01)
Explanation - 1 (18:14)
Explanation - 2 (18:22)
Explanation - 3 (18:26)
Proof tree (18:57)
Proof tree (expanded) (19:00)
Conclusion (19:09)
Questions? (19:55)