From Odp

Submissions:Gesture Interaction
ContentODPDescription: Ontology pattern to model concepts related to human gesture interactions.
GraphicallyRepresentedBy: Gesture-interaction-pattern.jpg
HasIntent: The Gesture Interaction Pattern aims to model the pose and movement of the human body used to interact with devices (particularly with device affordances). It describes a human gesture together with its relationships to certain device affordances, the related body parts, and the temporal components associated with them. This can be helpful in creating user-specific gesture profiles. The pattern is aimed at mapping the ubiquity of gesture vocabularies by linking them appropriately; it does not force designers and manufacturers to follow a standard.
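The core relations described in the intent (gesture, body part, temporal component, device affordance) can be sketched as plain triples. This is an illustrative sketch only: all names (gi:SwipeRight, gi:involvesBodyPart, and so on) are hypothetical, as the pattern does not prescribe concrete IRIs here.

```python
# A minimal sketch of the pattern's core relations as plain triples.
# All identifiers below are hypothetical examples, not part of the pattern.
triples = {
    ("gi:SwipeRight", "rdf:type",            "gi:Gesture"),
    ("gi:SwipeRight", "gi:involvesBodyPart", "gi:RightHand"),
    ("gi:SwipeRight", "gi:hasDuration",      "PT0.5S"),     # temporal component
    ("gi:SwipeRight", "gi:actsOn",           "gi:TurnOnTV"),
    ("gi:TurnOnTV",   "rdf:type",            "gi:Affordance"),
}

def objects(subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("gi:SwipeRight", "gi:involvesBodyPart"))  # {'gi:RightHand'}
```

In a concrete ontology these triples would be expressed in OWL/RDF; the set-of-tuples form is used here only to show how a gesture links body parts, durations, and affordances.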
Modification date: 4 September 2020 00:57:48
Name: Gesture Interaction
OWLBuildingBlock:
PatternDomain: Gesture Interaction, Gesture, Internet of Things (IoT)
Scenario: This pattern is applicable to a wide range of scenarios in gesture interaction systems. For example, consider a user who uses a certain gesture to turn on their personal air conditioner. If this user visits a hotel room with an air conditioner of a different model that carries different interactions, how can the system accommodate the user's preferred gesture and let the user continue as in their own room, without having to read instructions? This ontology pattern helps to model personalised gesture details. Further, online search engines currently do not provide sufficient information for gesture-related semantics. For example, a search query to retrieve 'gestures to turn on a TV' would not return the relevant gesture vocabularies supported by different vendors; designers and developers have to find individual studies separately and read/learn the necessary data manually. Being able to retrieve the semantics of gestures related to the affordance 'turn on a TV' would be convenient for designers and developers in such situations.
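The retrieval problem in the scenario (finding every vendor's gesture for one affordance, such as 'turn on a TV') can be sketched as a lookup over per-vendor vocabularies. The vendor names and gesture labels below are invented for illustration.

```python
# Hypothetical per-vendor gesture vocabularies, keyed by affordance.
vendor_vocabularies = {
    "VendorA": {"turn on a TV": "SwipeRight", "turn off a TV": "SwipeLeft"},
    "VendorB": {"turn on a TV": "PalmOpen"},
}

def gestures_for(affordance):
    """Collect every vendor's gesture for the given affordance."""
    return {vendor: vocab[affordance]
            for vendor, vocab in vendor_vocabularies.items()
            if affordance in vocab}

print(gestures_for("turn on a TV"))
# {'VendorA': 'SwipeRight', 'VendorB': 'PalmOpen'}
```

With the pattern's semantics in place, the same query could be answered over linked gesture vocabularies (e.g. via SPARQL) instead of hand-collected tables.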
SubmittedBy: MadhawaPerera, ArminHaller
Categories: ProposedContentOP

