Selection of scholarly literature on the topic "Semantic multimedia representation"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Semantic multimedia representation".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Semantic multimedia representation"

1

Mylonas, Phivos, Thanos Athanasiadis, Manolis Wallace, Yannis Avrithis, and Stefanos Kollias. "Semantic representation of multimedia content: Knowledge representation and semantic indexing." Multimedia Tools and Applications 39, no. 3 (September 4, 2007): 293–327. http://dx.doi.org/10.1007/s11042-007-0161-4.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Zhang, Hong, Yu Huang, Xin Xu, Ziqi Zhu, and Chunhua Deng. "Latent semantic factorization for multimedia representation learning." Multimedia Tools and Applications 77, no. 3 (August 30, 2017): 3353–68. http://dx.doi.org/10.1007/s11042-017-5135-6.
3

Duan, Yiping, Qiyuan Du, Xin Fang, Zhipeng Xie, Zhijin Qin, Xiaoming Tao, Chengkang Pan, and Guangyi Liu. "Multimedia Semantic Communications: Representation, Encoding and Transmission." IEEE Network 37, no. 1 (January 2023): 44–50. http://dx.doi.org/10.1109/mnet.001.2200468.
4

Wagenpfeil, Stefan, Paul Mc Kevitt, and Matthias Hemmje. "Towards Automated Semantic Explainability of Multimedia Feature Graphs." Information 12, no. 12 (December 2, 2021): 502. http://dx.doi.org/10.3390/info12120502.

Abstract:
Multimedia feature graphs are employed to represent features of images, video, audio, or text. Various techniques exist to extract such features from multimedia objects. In this paper, we describe the extension of such a feature graph to represent the meaning of such multimedia features and introduce a formal context-free PS-grammar (Phrase Structure grammar) to automatically generate human-understandable natural language expressions based on such features. To achieve this, we define a semantic extension to syntactic multimedia feature graphs and introduce a set of production rules for phrases of natural language English expressions. This explainability, which is founded on a semantic model, provides the opportunity to represent any multimedia feature in a human-readable and human-understandable form, which largely closes the gap between the technical representation of such features and their semantics. We show how this explainability can be formally defined and demonstrate the corresponding implementation based on our generic multimedia analysis framework. Furthermore, we show how this semantic extension can be employed to increase effectiveness in precision and recall experiments.
5

Al-Khatib, W., Y. F. Day, A. Ghafoor, and P. B. Berra. "Semantic modeling and knowledge representation in multimedia databases." IEEE Transactions on Knowledge and Data Engineering 11, no. 1 (1999): 64–80. http://dx.doi.org/10.1109/69.755616.
6

Petridis, K., S. Bloehdorn, C. Saathoff, N. Simou, S. Dasiopoulou, V. Tzouvaras, S. Handschuh, Y. Avrithis, Y. Kompatsiaris, and S. Staab. "Knowledge representation and semantic annotation of multimedia content." IEE Proceedings - Vision, Image, and Signal Processing 153, no. 3 (2006): 255. http://dx.doi.org/10.1049/ip-vis:20050059.
7

Smith, Roger W., Dorota Kieronska, and Svetha Venkatesh. "Conceptual Representation for Multimedia Information." International Journal of Pattern Recognition and Artificial Intelligence 11, no. 02 (March 1997): 303–27. http://dx.doi.org/10.1142/s0218001497000147.

Abstract:
Multimedia information is now routinely available in the forms of text, pictures, animation and sound. Although text objects are relatively easy to deal with (in terms of information search and retrieval), other information-bearing objects (such as sound, images, animation) are more difficult to index. Our research is aimed at developing better ways of representing multimedia objects by using a conceptual representation based on Schank's conceptual dependencies. Moreover, the representation allows users' individual interpretations to be embedded in the system. This alleviates the problems associated with traditional semantic networks by allowing multiple views of the same information to coexist. The viability of the approach is tested, and preliminary results are reported.
8

Chang, Xiaojun, Zhigang Ma, Yi Yang, Zhiqiang Zeng, and Alexander G. Hauptmann. "Bi-Level Semantic Representation Analysis for Multimedia Event Detection." IEEE Transactions on Cybernetics 47, no. 5 (May 2017): 1180–97. http://dx.doi.org/10.1109/tcyb.2016.2539546.
9

Yang, Bo, and Ali R. Hurson. "Similarity-Based Clustering Strategy for Mobile Ad Hoc Multimedia Databases." Mobile Information Systems 1, no. 4 (2005): 253–73. http://dx.doi.org/10.1155/2005/317136.

Abstract:
Multimedia data are becoming popular in wireless ad hoc environments. However, traditional content-based retrieval techniques are inefficient in ad hoc networks due to multiple limitations such as node mobility, computation capability, memory space, network bandwidth, and data heterogeneity. To provide an efficient platform for multimedia retrieval, we propose to cluster ad hoc multimedia databases based on their semantic contents, and construct a virtual hierarchical indexing infrastructure overlaid on the mobile databases. This content-aware clustering scheme uses a semantic-aware framework as the theoretical foundation for data organization. Several novel techniques are presented to facilitate the representation and manipulation of multimedia data in ad hoc networks: 1) using concise distribution expressions to represent the semantic similarity of multimedia data, 2) constructing clusters based on the semantic relationships between multimedia entities, 3) reducing the cost of content-based multimedia retrieval through the restriction of semantic distances, and 4) employing a self-adaptive mechanism that dynamically adjusts to the content and topology changes of the ad hoc networks. The proposed scheme is scalable, fault-tolerant, and efficient in performing content-based multimedia retrieval, as demonstrated in our combination of theoretical analysis and extensive experimental studies.
10

Luan, Xi Dao, Yu Xiang Xie, Yi Hong Tan, Sai Hu, Zhi Ping Chen, and Jing Wang. "Description Logic Based Objects and Space Relations Representation." Applied Mechanics and Materials 48-49 (February 2011): 366–72. http://dx.doi.org/10.4028/www.scientific.net/amm.48-49.366.

Abstract:
This work focuses on representing and reasoning about high-level semantics based on concepts and their spatial relations. For multimedia data such as images and video, acquiring, representing, and retrieving high-level semantic information has long been a difficult problem. Without the support of a knowledge database, even simple synonym-based retrieval is impossible, let alone retrieval of abstract semantics. This paper proposes algorithms to translate stored concepts and their relations into a Concept Semantic Network, which is finally visualized with SVG. The paper also introduces a method for recording concept distributions with description logic, which provides users with concept and distribution retrieval.

Dissertations on the topic "Semantic multimedia representation"

1

Harrando, Ismail. "Representation, information extraction, and summarization for automatic multimedia understanding." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS097.

Abstract:
Whether on TV or on the internet, video content production is seeing an unprecedented rise. Not only is video the dominant medium for entertainment purposes, but it is also reckoned to be the future of education, information and leisure. Nevertheless, the traditional paradigm for multimedia management proves to be incapable of keeping pace with the scale brought about by the sheer volume of content created every day across the disparate distribution channels. Thus, routine tasks like archiving, editing, content organization and retrieval by multimedia creators become prohibitively costly. On the user side, too, the amount of multimedia content pumped out daily can be simply overwhelming; the need for shorter and more personalized content has never been more pronounced. To advance the state of the art on both fronts, a certain level of multimedia understanding has to be achieved by our computers. In this research thesis, we aim to address the multiple challenges facing automatic media content processing and analysis, mainly gearing our exploration to three axes: 1. Representing multimedia: With all its richness and variety, modeling and representing multimedia content can be a challenge in itself. 2. Describing multimedia: The textual component of multimedia can be capitalized on to generate high-level descriptors, or annotations, for the content at hand. 3. Summarizing multimedia: we investigate the possibility of extracting highlights from media content, both for narrative-focused summarization and for maximizing memorability.
2

Eze, Emmanuel Uchechukwu. "Context-based multimedia semantics modelling and representation." Thesis, University of Hull, 2013. http://hydra.hull.ac.uk/resources/hull:8004.

Abstract:
The evolution of the World Wide Web, increase in processing power, and more network bandwidth have contributed to the proliferation of digital multimedia data. Since multimedia data has become a critical resource in many organisations, there is an increasing need to gain efficient access to data, in order to share, extract knowledge, and ultimately use the knowledge to inform business decisions. Existing methods for multimedia semantic understanding are limited to the computable low-level features; which raises the question of how to identify and represent the high-level semantic knowledge in multimedia resources. In order to bridge the semantic gap between multimedia low-level features and high-level human perception, this thesis seeks to identify the possible contextual dimensions in multimedia resources to help in semantic understanding and organisation. This thesis investigates the use of contextual knowledge to organise and represent the semantics of multimedia data aimed at efficient and effective multimedia content-based semantic retrieval. A mixed methods research approach incorporating both Design Science Research and Formal Methods for investigation and evaluation was adopted. A critical review of current approaches for multimedia semantic retrieval was undertaken and various shortcomings identified. The objectives for a solution were defined which led to the design, development, and formalisation of a context-based model for multimedia semantic understanding and organisation. The model relies on the identification of different contextual dimensions in multimedia resources to aggregate meaning and facilitate semantic representation, knowledge sharing and reuse. A prototype system for multimedia annotation, CONMAN, was built to demonstrate aspects of the model and validate the research hypothesis, H₁.
Towards providing richer and clearer semantic representation of multimedia content, the original contributions of this thesis to Information Science include: (a) a novel framework and formalised model for organising and representing the semantics of heterogeneous visual data; and (b) a novel S-Space model that is aimed at visual information semantic organisation and discovery, and forms the foundations for automatic video semantic understanding.
3

Zareian, Alireza. "Learning Structured Representations for Understanding Visual and Multimedia Data." Thesis, 2021. https://doi.org/10.7916/d8-94j1-yb14.

Abstract:
Recent advances in Deep Learning (DL) have achieved impressive performance in a variety of Computer Vision (CV) tasks, leading to an exciting wave of academic and industrial efforts to develop Artificial Intelligence (AI) facilities for every aspect of human life. Nevertheless, there are inherent limitations in the understanding ability of DL models, which limit the potential of AI in real-world applications, especially in the face of complex, multimedia input. Despite tremendous progress in solving basic CV tasks, such as object detection and action recognition, state-of-the-art CV models can merely extract a partial summary of visual content, which lacks a comprehensive understanding of what happens in the scene. This is partly due to the oversimplified definition of CV tasks, which often ignore the compositional nature of semantics and scene structure. It is even less studied how to understand the content of multiple modalities, which requires processing visual and textual information in a holistic and coordinated manner, and extracting interconnected structures despite the semantic gap between the two modalities. In this thesis, we argue that a key to improve the understanding capacity of DL models in visual and multimedia domains is to use structured, graph-based representations, to extract and convey semantic information more comprehensively. To this end, we explore a variety of ideas to define more realistic DL tasks in both visual and multimedia domains, and propose novel methods to solve those tasks by addressing several fundamental challenges, such as weak supervision, discovery and incorporation of commonsense knowledge, and scaling up vocabulary. More specifically, inspired by the rich literature of semantic graphs in Natural Language Processing (NLP), we explore innovative scene understanding tasks and methods that describe images using semantic graphs, which reflect the scene structure and interactions between objects.
In the first part of this thesis, we present progress towards such graph-based scene understanding solutions, which are more accurate, need less supervision, and have more human-like common sense compared to the state of the art. In the second part of this thesis, we extend our results on graph-based scene understanding to the multimedia domain, by incorporating the recent advances in NLP and CV, and developing a new task and method from the ground up, specialized for joint information extraction in the multimedia domain. We address the inherent semantic gap between visual content and text by creating high-level graph-based representations of images, and developing a multitask learning framework to establish a common, structured semantic space for representing both modalities. In the third part of this thesis, we explore another extension of our scene understanding methodology, to open-vocabulary settings, in order to make scene understanding methods more scalable and versatile. We develop visually grounded language models that use naturally supervised data to learn the meaning of all words, and transfer that knowledge to CV tasks such as object detection with little supervision. Collectively, the proposed solutions and empirical results set a new state of the art for the semantic comprehension of visual and multimedia content in a structured way, in terms of accuracy, efficiency, scalability, and robustness.

Books on the topic "Semantic multimedia representation"

1

Mallik, Anupama, and Hiranmay Ghosh, eds. Multimedia Ontology: Representation and Applications. Boca Raton, Florida: CRC Press, 2016.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Mallik, Anupama, Santanu Chaudhury, and Hiranmay Ghosh. Multimedia Ontology: Representation and Applications. Taylor & Francis Group, 2015.
3

Mallik, Anupama, Santanu Chaudhury, and Hiranmay Ghosh. Multimedia Ontology: Representation and Applications. Taylor & Francis Group, 2020.

Book chapters on the topic "Semantic multimedia representation"

1

Lim, Joo-Hwee. "Semantic Image Representation and Indexing." In Encyclopedia of Multimedia, 800–805. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-78414-4_210.
2

Dalakleidi, Kalliopi, Stamatia Dasiopoulou, Giorgos Stoilos, Vassilis Tzouvaras, Giorgos Stamou, and Yiannis Kompatsiaris. "Semantic Representation of Multimedia Content." In Knowledge-Driven Multimedia Information Extraction and Ontology Evolution, 18–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20795-2_2.
3

No author given. "Towards Semantic Universal Multimedia Access." In Visual Content Processing and Representation, 13–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39798-4_3.
4

Carlos, Rafael Paulin, and Kuniaki Uehara. "Video Summarization Based on Semantic Representation." In Advanced Multimedia Content Processing, 1–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48962-2_1.
5

Sikos, Leslie F. "Knowledge Representation with Semantic Web Standards." In Description Logics in Multimedia Reasoning, 11–49. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54066-5_2.
6

Giro, Xavier, and Ferran Marques. "From Partition Trees to Semantic Trees." In Multimedia Content Representation, Classification and Security, 306–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_41.
7

Wang, Dong, Xiaobing Liu, Duanpeng Wang, Jianmin Li, and Bo Zhang. "SemanGist: A Local Semantic Image Representation." In Advances in Multimedia Information Processing - PCM 2008, 625–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89796-5_64.
8

Motik, Boris, Alexander Maedche, and Raphael Volz. "Ontology Representation and Querying for Realizing Semantics-Driven Applications." In Multimedia Content and the Semantic Web, 45–73. Chichester, UK: John Wiley & Sons, Ltd, 2005. http://dx.doi.org/10.1002/0470012617.ch2.
9

Lin, Chuang, Hongxun Yao, Wei Yu, and Wenbo Tang. "Multi-level Semantic Representation for Flower Classification." In Advances in Multimedia Information Processing – PCM 2017, 325–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77380-3_31.
10

Oermann, Andrea, Tobias Scheidat, Claus Vielhauer, and Jana Dittmann. "Semantic Fusion for Biometric User Authentication as Multimodal Signal Processing." In Multimedia Content Representation, Classification and Security, 546–53. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_72.

Conference papers on the topic "Semantic multimedia representation"

1

Azough, Ahmed, Alexandre Delteil, Mohand-Said Hacid, and Fabien de Marchi. "A Representation Language for Multimedia Web Semantic." In 2007 2nd International Workshop on Semantic Media Adaptation and Personalization. IEEE, 2007. http://dx.doi.org/10.1109/smap.2007.46.
2

Fogarolli, Angela, and Marco Ronchetti. "Domain Independent Semantic Representation of Multimedia Presentations." In 2009 International Conference on Intelligent Networking and Collaborative Systems (INCOS). IEEE, 2009. http://dx.doi.org/10.1109/incos.2009.80.
3

Bonino, Dario, Fulvio Corno, and Paolo Pellegrino. "Versatile RDF Representation for Multimedia Semantic Search." In 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007). IEEE, 2007. http://dx.doi.org/10.1109/ictai.2007.129.
4

Azough, Ahmed, Alexandre Delteil, Mohand-Said Hacid, and Fabien de Marchi. "A Representation Language for Multimedia Web Semantic." In Second International Workshop on Semantic Media Adaptation and Personalization (SMAP 2007). IEEE, 2007. http://dx.doi.org/10.1109/smap.2007.4414384.
5

Lo, Shao-Yuan, and Hsueh-Ming Hang. "Exploring Semantic Segmentation on the DCT Representation." In MMAsia '19: ACM Multimedia Asia. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3338533.3366557.
6

Bu, Xuxiao, Bingfeng Li, Yaxiong Wang, Jihua Zhu, Xueming Qian, and Marco Zhao. "Semantic Gated Network for Efficient News Representation." In ICMR '20: International Conference on Multimedia Retrieval. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3372278.3390719.
7

Jiang, Weiwei. "Research on multimedia representation of IETM based on semantic." In 2012 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE). IEEE, 2012. http://dx.doi.org/10.1109/icqr2mse.2012.6246503.
8

Tang, Pengjie, Hanli Wang, Hanzhang Wang, and Kaisheng Xu. "Richer Semantic Visual and Language Representation for Video Captioning." In MM '17: ACM Multimedia Conference. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3123266.3127895.
9

Li, Xiaoni, Yu Zhou, Yifei Zhang, Aoting Zhang, Wei Wang, Ning Jiang, Haiying Wu, and Weiping Wang. "Dense Semantic Contrast for Self-Supervised Visual Representation Learning." In MM '21: ACM Multimedia Conference. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3474085.3475551.
10

Ren, Zhou, Hailin Jin, Zhe Lin, Chen Fang, and Alan Yuille. "Joint Image-Text Representation by Gaussian Visual-Semantic Embedding." In MM '16: ACM Multimedia Conference. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2964284.2967212.