Talking about the Moving Image: A Declarative Model for Image Schema Based Embodied Perception Grounding and Language Generation
Type of publication: Techreport
Citation: Talking-MovingImage-2015
Publication status: Published
Year: 2015
Month: August
Institution: arXiv, Cornell University Library
Note: Unpublished arXiv technical report, 19 pages (arXiv:1508.03276)
URL: http://arxiv.org/abs/1508.03276
Abstract: We present a general theory and corresponding declarative model for the embodied grounding and natural language based analytical summarisation of dynamic visuo-spatial imagery. The declarative model (encompassing spatio-linguistic abstractions, image schemas, and a spatio-temporal feature based language generator) is modularly implemented within Constraint Logic Programming (CLP). The implemented model is such that primitives of the theory, e.g., those pertaining to space and motion, and image schemata, are available as first-class objects with `deep semantics' suited for inference and query. We demonstrate the model with select examples broadly motivated by areas such as film, design, geography, and smart environments, where analytical natural language based externalisations of the moving image are central from the viewpoint of human interaction, evidence-based qualitative analysis, and sensemaking.
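To make the abstract's claim concrete, the following is a minimal, illustrative sketch (not the authors' code, which is not reproduced in this record) of the general kind of CLP-style modelling described: hypothetical scene facts, a qualitative motion primitive, and an image schema (here, SOURCE_PATH_GOAL) represented as a first-class, queryable Prolog term. All predicate names, facts, and numbers are assumptions introduced purely for illustration.

% Hypothetical scene facts: position(Object, Time, point(X, Y)).
position(person, 1, point(0, 4)).
position(person, 2, point(3, 4)).
position(person, 3, point(8, 4)).
position(door,   1, point(8, 4)).
position(door,   2, point(8, 4)).
position(door,   3, point(8, 4)).

% Euclidean distance between two points.
distance(point(X1, Y1), point(X2, Y2), D) :-
    D is sqrt((X1 - X2)**2 + (Y1 - Y2)**2).

% Qualitative motion primitive: Obj approaches Landmark between consecutive
% timepoints T1 and T2 if its distance to the landmark decreases.
approaches(Obj, Landmark, T1, T2) :-
    position(Obj, T1, P1),
    T2 is T1 + 1,
    position(Obj, T2, P2),
    position(Landmark, T2, L),
    Obj \== Landmark,
    distance(P1, L, D1),
    distance(P2, L, D2),
    D2 < D1.

% SOURCE_PATH_GOAL image schema as a first-class term, inferred from the
% motion primitive and available for further query or verbalisation.
schema(source_path_goal(Obj, Goal, T1, T3)) :-
    approaches(Obj, Goal, T1, T2),
    approaches(Obj, Goal, T2, T3).

% Example query (SWI-Prolog):
% ?- schema(source_path_goal(person, door, Start, End)).
% Start = 1, End = 3.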
Keywords: Artificial Intelligence (cs.AI), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), Human-Computer Interaction (cs.HC)
Authors: Suchan, Jakob; Bhatt, Mehul; Jhavar, Harshita