Old news and announcements ...
Call for Papers for 2nd Workshop on Web-scale Vision and Social Media (VSM)
The world-wide web has become a large ecosystem that reaches billions of users through information processing and sharing, and much of this information resides in pixels. Web-based services like YouTube and Flickr, and social networks such as Facebook, have become more and more popular, allowing people to easily upload, share and annotate massive amounts of images and videos. Vision and social media has thus recently become a very active interdisciplinary area, involving computer vision, multimedia, machine learning, information retrieval, and data mining.
This workshop aims to bring together leading researchers in the related fields to advocate and promote new research directions for problems involving vision and social media, such as large-scale visual content analysis, search and mining. VSM will provide an interactive platform for academic and industry researchers to disseminate their most recent results, discuss potential new directions in vision and social media, and promote new interdisciplinary collaborations. The program will consist of invited talks, panels, discussions, and reviewed paper submissions.
Topics of interest include (but are not limited to):
- Content analysis for vision and social media
- Efficient learning and mining algorithms for large-scale vision and social media analysis
- Understanding social media content and dynamics
- Contextual models for computer vision and social media
- Machine learning and data mining for social media
- Indexing and retrieval for large-scale social media information
- Tagging, semantic annotation, and object recognition on massive multimedia collections
- Scalable and distributed machine learning and data mining methods for vision
- Interfaces for exploring, browsing and visualizing large visual collections
- Construction and evaluation of large-scale visual collections
- Crowdsourcing for vision problems
- Scene reconstruction and matching using large-scale web images
Call for Papers for ACM Multimedia 2013
The 21st ACM International Conference on Multimedia
http://www.acmmm13.org
October 21–25, 2013, Barcelona, Spain.
Since the founding of ACM SIGMM in 1993, ACM Multimedia has been the worldwide premier conference and a key world event to display scientific achievements and innovative industrial products in the multimedia field.
At ACM Multimedia 2013 we will celebrate the conference's twenty-first iteration with an extensive program of technical sessions covering all aspects of the multimedia field, in the form of oral and poster presentations, tutorials, panels, exhibits, demonstrations and workshops. The program brings into focus the principal subjects of investigation, hosts competitions among research teams on challenging problems, and includes an interactive art program that stimulates artists and computer scientists to meet and discover together the frontiers of artistic communication.
UPCOMING DEADLINES
- Abstracts for Papers Due: March 1, 2013
- Full/short Papers Due: March 8, 2013
  http://acmmm13.org/submissions/call-for-papers/
- Workshop Proposals Due: January 9, 2013
  http://www.icwsm.org/2013/submitting/workshops/
- Tutorials Due: January 15, 2013
  http://www.icwsm.org/2013/submitting/tutorials/
PAPER SUBMISSION GUIDELINES
Full paper format: Full paper submissions to ACM MM ‘13 should be at most 10 pages, including figures and citations. The final camera-ready length of each full paper in the proceedings will be at the discretion of the program chairs. All papers must follow the ACM formatting guidelines.
Anonymity: Paper submissions to ACM MM ‘13 must be anonymized.
TOPIC AREAS
- Art, Entertainment, and Culture
- Authoring and Collaboration
- Crowdsourcing
- Media Transport and Delivery
- Mobile & Multi-device
- Multimedia Analysis
- Multimedia HCI
- Music & Audio
- Search, Browsing, and Discovery
- Security and Forensics
- Social Media & Presence
- Systems and Middleware
ORGANISATION
General Chairs
Alejandro (Alex) Jaimes, Yahoo!, Spain
Nicu Sebe, Univ. of Trento, Italy
Nozha Boujemaa, INRIA, France
Program Co-Chairs
Daniel Gatica-Perez, IDIAP & EPFL, CH
David A. Shamma, Yahoo!, US
Marcel Worring, Univ. of Amsterdam, The Netherlands
Roger Zimmermann, Natl. Univ. of Singapore, SG
Author’s Advocate
Pablo Cesar (CWI, The Netherlands)
Call for Papers for ARTEMIS 2013 Workshop (in conjunction with ACM Multimedia 2013)
It can be argued that the intelligence behind many recent pattern recognition and computer vision systems rests on two main approaches: (i) extraction of smart features able to efficiently represent rich visual content, and (ii) adoption of non-linear, adaptable (semi-supervised) learning strategies able to bridge the gap between the extracted low-level features and the high-level concepts humans use to perceive the content. Feature extraction is a dimensionality-reduction strategy that addresses the fact that learning complexity grows exponentially with a linear increase in the dimensionality of the data. It is also clear that extracting representative features is a challenging and application-dependent process. Non-representative features significantly degrade recognition accuracy, especially in complex and dynamic environments, even when they are processed by highly non-linear feature transformation models.
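As a minimal illustration of feature extraction as dimensionality reduction (not part of the call itself; the sample counts and dimensions below are arbitrary example values), one can project high-dimensional raw data onto its top principal components via the SVD:

```python
import numpy as np

# Sketch: reduce 100-dimensional raw samples to a 5-dimensional
# feature representation by projecting onto the top principal
# components. All sizes here are arbitrary example values.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))   # 200 samples, 100 raw dimensions
Xc = X - X.mean(axis=0)               # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5                                 # target feature dimensionality
features = Xc @ Vt[:k].T              # low-dimensional representation
print(features.shape)                 # (200, 5)
```

The point of the sketch is only that a learner now operates in 5 dimensions instead of 100, which is exactly the complexity trade-off the paragraph above describes; whether those 5 dimensions are representative remains application-dependent.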
Emulating the efficiency and robustness with which the human brain represents information has been a core challenge in machine learning research. The human brain does not work by explicitly pre-processing sensory signals, but rather lets them propagate through complex hierarchies; over time, we learn to represent these observations using (structured or unstructured) regularities. Human information processing thus suggests "deep architectures" for learning, i.e., hierarchical, multi-layer models. This insight motivated the emergence of the subfield of deep machine learning, which focuses on computational models for information representation that exhibit characteristics similar to those of the human brain.
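A "deep architecture" in the sense used above is simply a hierarchy of non-linear layers, each re-representing the output of the layer below. The following minimal sketch (not part of the call; layer sizes and the ReLU non-linearity are illustrative choices) shows such a multi-layer forward pass:

```python
import numpy as np

def relu(x):
    # A common non-linearity; chosen here only for illustration.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
sizes = [100, 64, 32, 10]  # input -> two hidden layers -> output (example values)
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]

h = rng.standard_normal((1, sizes[0]))  # one raw input vector
for W in weights:                       # propagate through the hierarchy
    h = relu(h @ W)                     # each layer re-represents the one below
print(h.shape)                          # (1, 10)
```

Each pass through the loop is one level of the hierarchy; training such a stack (e.g., by backpropagation) is what the deep learning subfield studies.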
Such contemporary machine learning applications are important for cognitive video supervision and event analysis in video sequences, which are critical tasks in many multimedia applications. Methods, tools and algorithms that detect and recognize high-level concepts and their spatio-temporal and causal relations, in order to identify semantic video activities, actions and procedures, have been a focus of the research community in recent years.
This research area has strong impact on many real-life multimedia applications based on semantic characterization and annotation of video streams in various domains (e.g., sports, news, documentaries, movies and surveillance), whether broadcast or user-generated video. Although a first critical issue is the estimation of quantitative parameters describing where events are detected, recent work analyzes multimedia footage by applying image and video understanding techniques to the detected and tracked motion. The challenge is thus becoming the generation of qualitative descriptions of the meaning of motion, describing not only where, but also why, an event is being observed.
The goal of the 4th Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams is to solicit innovative contributions in the above fields, bringing together researchers from machine learning, image processing and computer vision. New research achievements should be demonstrated on real-world, complex application scenarios. Potential topics include, but are not limited to:
- Advanced machine learning strategies in computer vision;
- Transfer learning, deep learning, active learning, on-line learning;
- Methods for robust detection of semantic concepts in video streams;
- Object/human detection and tracking using advanced machine learning tools;
- Annotation of events and human motion and activity in large-scale multimedia content;
- Identification of spatio-temporal, causal and contextual relations of events;
- Semantic and event-based summarization, matching and retrieval of monitored video footage;
- Enhancement of event analysis based on attention models or multiscale/multisource data fusion;
- Event- and context-oriented relevance feedback algorithms;
- Strategies for context learning (background scene and its regions, objects and agents);
- Research projects in the respective fields (international standardization activities, national/international research projects);
- Real-life applications, e.g., industrial settings, traffic analysis, critical infrastructures, athletic events, etc.
Call For Papers for ECCV 2012 Workshop on Web-scale Vision and Social Media (VSM)
Workshop on Web-scale Vision and Social Media (VSM) - held in conjunction with European Conference on Computer Vision 2012, 7-13 October 2012, Firenze, Italy
The world-wide web has become a large ecosystem that reaches billions of users through information processing and sharing, and much of this information resides in pixels. Web-based services like YouTube and Flickr, and social networks such as Facebook, have become more and more popular, allowing people to easily upload, share and annotate massive amounts of images and videos all over the web. Although the so-called web 2.0 is an amazing source of information, in order to interpret this tremendous amount of visual content, online social platforms usually rely on user tags, which are known to be ambiguous, overly personalized, and limited. Hence, to effectively exploit social media at web scale, it is critical to design novel methods and algorithms able to jointly represent the visual content and the (noisy) user annotations of multimedia data. Vision and social media has thus recently become a very active interdisciplinary research area, involving computer vision, multimedia, machine learning, information retrieval, and data mining.
This workshop aims to bring together leading researchers in the related fields to advocate and promote new research directions for problems involving vision and social media, such as large-scale visual content analysis, search and mining. The workshop will provide an interactive platform for researchers to disseminate their most recent research results, discuss potential new directions and challenges towards vision and social media, and promote new collaborations among researchers. Topics of interest include (but are not limited to):
- Content analysis for vision and social media
- Efficient learning and mining algorithms for scalable vision and social media analysis
- Understanding social media content and dynamics
- Contextual models for vision and social media
- Machine learning and data mining for social media
- Indexing and retrieval for large-scale social media information
- Machine tagging, semantic annotation, and object recognition on massive multimedia collections
- Scalable/distributed machine learning and data mining methods for vision
- Interfaces for exploring, browsing and visualizing large visual collections
- Construction and evaluation of large-scale visual collections
- Crowdsourcing for vision problems
- Scene reconstruction and matching using large-scale web images
Important Dates
- Submission deadline: July 8, 2012
- Notification of acceptance: August 1, 2012
- Camera ready submission: August 8, 2012
- Workshop date: October 7, 2012
Keynote speakers
- Shih-Fu Chang, Columbia University, US
- Fei-Fei Li, Stanford University, US
Paper submission instructions
The maximum paper length is 10 pages.
The workshop paper format guidelines are the same as the Main Conference papers.
Latex/Word templates can be found at: http://eccv2012.unifi.it/submissions/call-for-paper/paper-submission/
Submission site: https://cmt.research.microsoft.com/ECCVWS2012/
Organizers
Lamberto Ballan, University of Florence, Italy
Alex C. Berg, Stony Brook University, US
Marco Bertini, University of Florence, Italy
Cees G. M. Snoek, University of Amsterdam, Netherlands
Contact
VSM Website: http://www.micc.unifi.it/vsm2012
For any questions or more information, please contact workshop co-chairs: Lamberto Ballan (lamberto.ballan@unifi.it), Alex C. Berg (aberg@cs.stonybrook.edu), Marco Bertini (marco.bertini@unifi.it), or Cees G. M. Snoek (cgmsnoek@uva.nl).