Multimodal Geo-tagging in Social Media Websites using Hierarchical Spatial Segmentation
Abstract
Sharing photographs and videos is very popular in social networks. Many social media websites such as Flickr, Facebook and YouTube allow users to manually label their uploaded videos with geo-information via an interface for dragging them onto a map. However, manually labelling a large set of social media items is tedious and error-prone. For this reason we present a hierarchical, multimodal approach for estimating GPS coordinates. Our approach makes use of external resources such as gazetteers to extract toponyms from the metadata, and of visual and textual features to identify similar content. First, a national border detection step recognizes the country and its spatial extent to speed up the estimation and to eliminate geographical ambiguity. Next, we use a database of more than 3.2 million Flickr images, grouping them into geographical regions to build a hierarchical model. A fusion of visual and textual methods at different granularities classifies each video's location into candidate regions. Each Flickr test video is then tagged with the geo-information of the most similar training image within the regions previously filtered by the probabilistic model. In comparison with existing GPS estimation and image retrieval approaches at the Placing Task 2011, we show the effectiveness and high accuracy of our method relative to state-of-the-art solutions.
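The pipeline described above can be summarised as: segment the geo-tagged training images into spatial regions, score the regions for a query video using textual (and visual) similarity, and finally assign the GPS coordinates of the most similar training item inside the winning region. The Python sketch below illustrates that flow under simplifying assumptions only: it uses a flat k-means clustering instead of the paper's hierarchical segmentation and a Jaccard tag overlap instead of the fused textual and visual models; all function names and data structures are illustrative and are not the authors' code.

```python
import random

def geo_distance_sq(a, b):
    """Squared planar distance between (lat, lon) pairs; sufficient for a toy sketch."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean_gps(items):
    """Centroid of a cluster's GPS coordinates, or None for an empty cluster."""
    if not items:
        return None
    lat = sum(it["gps"][0] for it in items) / len(items)
    lon = sum(it["gps"][1] for it in items) / len(items)
    return (lat, lon)

def segment_regions(train_items, k, iters=10):
    """Naive k-means over training coordinates, standing in for the hierarchical
    spatial segmentation of the geo-tagged Flickr training images."""
    centres = [it["gps"] for it in random.sample(train_items, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for it in train_items:
            nearest = min(range(k), key=lambda j: geo_distance_sq(it["gps"], centres[j]))
            clusters[nearest].append(it)
        centres = [mean_gps(c) or centres[i] for i, c in enumerate(clusters)]
    return clusters

def tag_similarity(query_tags, item_tags):
    """Jaccard overlap of user tags; a simple stand-in for the textual/visual models."""
    q, t = set(query_tags), set(item_tags)
    return len(q & t) / len(q | t) if (q or t) else 0.0

def estimate_location(query, clusters):
    """Score each region by aggregated tag similarity, then return the GPS of the
    most similar training item inside the winning region."""
    scores = [sum(tag_similarity(query["tags"], it["tags"]) for it in c) for c in clusters]
    best_region = clusters[max(range(len(clusters)), key=scores.__getitem__)]
    candidates = best_region or [it for c in clusters for it in c]
    best_item = max(candidates, key=lambda it: tag_similarity(query["tags"], it["tags"]))
    return best_item["gps"]

# Example usage with toy data (illustrative only):
train = [{"gps": (52.52, 13.40), "tags": ["berlin", "brandenburggate"]},
         {"gps": (48.86, 2.35),  "tags": ["paris", "eiffel"]},
         {"gps": (52.51, 13.38), "tags": ["berlin", "reichstag"]}]
regions = segment_regions(train, k=2)
print(estimate_location({"tags": ["berlin", "video"]}, regions))
```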
Paper
Multimodal Geo-tagging in Social Media Websites using Hierarchical Spatial Segmentation. In Proceedings of the 20th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 06.11.2012 - 09.11.2012, 8 pages. ISBN 978-1-4503-1698-9. Download PDF (615 kB)
People
Pascal Kelm, Sebastian Schmiedeke and Thomas Sikora
This demonstrator shows a random video from the Placing Task dataset and all textual and visual results on the map.
Funding
The research leading to these results has received funding from the European Community's FP7 under grant agreement number 261743 (NoE VideoSense). We would also like to thank the MediaEval organisers for providing this data set.
Comments and questions to Pascal Kelm