Title
Jointly exploiting visual and non-visual information for event-related social media retrieval
Abstract
In this contribution, we propose a watershed-based method, supported by external data sources and visual information, to detect social events in web multimedia. The idea builds on two main observations: (1) a person cannot be involved in more than one event at the same time, and (2) people tend to use similar annotations for all images associated with the same event. Based on these observations, the metadata are mapped to an image in which each row contains all records belonging to one user, sorted by time. Social event detection is thus reduced to watershed-based image segmentation: markers are generated from (keyword, location, visual) features with the support of external data sources, and the flooding process is driven by (tag set, time, visual) features. We evaluate our algorithm on the MediaEval 2012 dataset, both using only external data and additionally introducing visual information.
Year
2013
DOI
10.1145/2461466.2461494
Venue
ICMR
Keywords
flood progress,main observation,social event,event-related social media retrieval,social event detection,similar annotation,external data source,watershed-based image segmentation,watershed-based method,external data,visual information,non-visual information,relatedness,watershed
Field
Metadata,Social event detection,Social media,Information retrieval,Pattern recognition,Computer science,Image segmentation,Watershed,Artificial intelligence,Machine learning
DocType
Conference
Citations
3
PageRank
0.38
References
20
Authors
4
Name                        Order  Citations/PageRank
Minh-son Dao                1      9321.42
Giulia Boato                2      37340.80
Francesco G.B. De Natale    3      324.77
Truc-Vien Nguyen            4      141.26