Selection of scholarly literature on the topic "Visual and semantic embedding"
Create a reference to a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Visual and semantic embedding."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Visual and semantic embedding"
Zhang, Yuanpeng, Jingye Guan, Haobo Wang, Kaiming Li, Ying Luo, and Qun Zhang. "Generalized Zero-Shot Space Target Recognition Based on Global-Local Visual Feature Embedding Network." Remote Sensing 15, no. 21 (October 28, 2023): 5156. http://dx.doi.org/10.3390/rs15215156.
Yeh, Mei-Chen, and Yi-Nan Li. "Multilabel Deep Visual-Semantic Embedding." IEEE Transactions on Pattern Analysis and Machine Intelligence 42, no. 6 (June 1, 2020): 1530–36. http://dx.doi.org/10.1109/tpami.2019.2911065.
Merkx, Danny, and Stefan L. Frank. "Learning semantic sentence representations from visually grounded language without lexical knowledge." Natural Language Engineering 25, no. 4 (July 2019): 451–66. http://dx.doi.org/10.1017/s1351324919000196.
Zhou, Mo, Zhenxing Niu, Le Wang, Zhanning Gao, Qilin Zhang, and Gang Hua. "Ladder Loss for Coherent Visual-Semantic Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 13050–57. http://dx.doi.org/10.1609/aaai.v34i07.7006.
Ge, Jiannan, Hongtao Xie, Shaobo Min, and Yongdong Zhang. "Semantic-guided Reinforced Region Embedding for Generalized Zero-Shot Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1406–14. http://dx.doi.org/10.1609/aaai.v35i2.16230.
Nguyen, Huy Manh, Tomo Miyazaki, Yoshihiro Sugaya, and Shinichiro Omachi. "Multiple Visual-Semantic Embedding for Video Retrieval from Query Sentence." Applied Sciences 11, no. 7 (April 3, 2021): 3214. http://dx.doi.org/10.3390/app11073214.
MATSUBARA, Takashi. "Target-Oriented Deformation of Visual-Semantic Embedding Space." IEICE Transactions on Information and Systems E104.D, no. 1 (January 1, 2021): 24–33. http://dx.doi.org/10.1587/transinf.2020mup0003.
Tang, Qi, Yao Zhao, Meiqin Liu, Jian Jin, and Chao Yao. "Semantic Lens: Instance-Centric Semantic Alignment for Video Super-resolution." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (March 24, 2024): 5154–61. http://dx.doi.org/10.1609/aaai.v38i6.28321.
Keller, Patrick, Abdoul Kader Kaboré, Laura Plein, Jacques Klein, Yves Le Traon, and Tegawendé F. Bissyandé. "What You See is What it Means! Semantic Representation Learning of Code based on Visualization and Transfer Learning." ACM Transactions on Software Engineering and Methodology 31, no. 2 (April 30, 2022): 1–34. http://dx.doi.org/10.1145/3485135.
He, Hai, and Haibo Yang. "Deep Visual Semantic Embedding with Text Data Augmentation and Word Embedding Initialization." Mathematical Problems in Engineering 2021 (May 28, 2021): 1–8. http://dx.doi.org/10.1155/2021/6654071.
Der volle Inhalt der QuelleDissertationen zum Thema "Visual and semantic embedding"
Engilberge, Martin. "Deep Inside Visual-Semantic Embeddings." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS150.
Nowadays, Artificial Intelligence (AI) is omnipresent in our society. The recent development of learning methods based on deep neural networks, also called "Deep Learning", has led to a significant improvement in visual and textual representation models. In this thesis, we aim to further advance image representation and understanding. Revolving around Visual Semantic Embedding (VSE) approaches, we explore different directions: we present relevant background covering image and text representation and existing multimodal approaches; we propose novel architectures that further improve the retrieval capability of VSE; and we extend VSE models to novel applications, leveraging embedding models to visually ground semantic concepts. Finally, we delve into the learning process, and in particular the loss function, by learning a differentiable approximation of a ranking-based metric.
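Ranking-based objectives of this kind are commonly approximated in VSE models with a hinge-based triplet ranking loss over matched image-caption pairs. The following is a minimal sketch of such a loss, assuming PyTorch; the margin value, embedding dimension, and function names are illustrative and not taken from the cited thesis.

```python
# Minimal sketch of a hinge-based triplet ranking loss for a joint
# visual-semantic embedding (VSE-style), assuming PyTorch.
import torch
import torch.nn.functional as F

def contrastive_ranking_loss(img_emb: torch.Tensor,
                             txt_emb: torch.Tensor,
                             margin: float = 0.2) -> torch.Tensor:
    """img_emb, txt_emb: (batch, dim) L2-normalized embeddings of matching pairs."""
    # Cosine similarity matrix; diagonal entries are the positive pairs.
    scores = img_emb @ txt_emb.t()                       # (batch, batch)
    pos = scores.diag().view(-1, 1)                      # (batch, 1)

    # Hinge terms: a mismatched caption (row-wise) or mismatched image
    # (column-wise) should score at least `margin` below the positive pair.
    cost_txt = (margin + scores - pos).clamp(min=0)      # image -> wrong caption
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # caption -> wrong image

    # Do not penalize the positive pairs themselves.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_txt = cost_txt.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)
    return cost_txt.sum() + cost_img.sum()

# Usage with random stand-ins for encoder outputs:
img = F.normalize(torch.randn(32, 512), dim=1)
txt = F.normalize(torch.randn(32, 512), dim=1)
loss = contrastive_ranking_loss(img, txt)
```

In practice the two inputs would come from an image encoder and a text encoder trained jointly; hard-negative variants replace the sums with row- and column-wise maxima.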
Wang, Qian. "Zero-shot visual recognition via latent embedding learning." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/zeroshot-visual-recognition-via-latent-embedding-learning(bec510af-6a53-4114-9407-75212e1a08e1).html.
Ficapal Vila, Joan. "Anemone: a Visual Semantic Graph." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252810.
Semantic graphs have been used to optimize various natural language processing tasks as well as to improve search and information retrieval. In most cases, such semantic graphs have been constructed through supervised machine learning methods that rely on manually curated ontologies such as Wikipedia or similar resources. In this thesis, which consists of two parts, we first investigate the possibility of automatically generating a semantic graph from an ad hoc dataset of 50,000 newspaper articles in a completely unsupervised manner. The usefulness of the visual representation of the resulting graph is tested on 14 participants performing basic information retrieval tasks on a subset of the articles. Our study shows that the approach is viable for finding documents that are similar to one another, and that the visual map produced by our artifact is visually useful. In the second part, we explore the possibility of identifying entity relations in an unsupervised manner by using abstractive deep learning methods for sentence reformulation. The reformulated sentences are evaluated qualitatively with respect to grammatical correctness and meaningfulness as perceived by 14 test subjects. We assess the results of this second part negatively, since they have not been good enough to draw any definitive conclusions, but they have instead opened new doors to explore.
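To make the unsupervised graph-construction idea concrete, here is a minimal sketch of one way to link documents by textual similarity (TF-IDF vectors, cosine similarity, and a similarity threshold), assuming scikit-learn and networkx. The threshold, vectorizer settings, and helper name are illustrative assumptions; the thesis's actual pipeline may differ substantially.

```python
# Illustrative sketch only: an unsupervised document-similarity graph,
# not the pipeline used in the cited thesis.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_similarity_graph(documents, threshold=0.3):
    """Nodes are document indices; edges link pairs whose TF-IDF cosine similarity passes the threshold."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
    sim = cosine_similarity(vectors)
    graph = nx.Graph()
    graph.add_nodes_from(range(len(documents)))
    for i in range(len(documents)):
        for j in range(i + 1, len(documents)):
            if sim[i, j] >= threshold:
                graph.add_edge(i, j, weight=float(sim[i, j]))
    return graph

articles = [
    "The central bank raised interest rates to curb inflation.",
    "Inflation worries pushed the central bank to lift interest rates.",
    "The local football team won the championship final.",
]
g = build_similarity_graph(articles)
print(g.edges(data=True))  # the two finance stories should end up connected
```

A full semantic graph would additionally require entity extraction and edge labeling on top of such a similarity structure.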
Jakeš, Jan. "Visipedia - Embedding-driven Visual Feature Extraction and Learning." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236120.
Gao, Jizhou. "VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS." UKnowledge, 2013. http://uknowledge.uky.edu/cs_etds/14.
Liu, Jingen. "Learning Semantic Features for Visual Recognition." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3358.
Der volle Inhalt der QuellePh.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
Nguyen, Duc Minh Chau. "Affordance learning for visual-semantic perception." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2443.
Chen, Yifu. "Deep learning for visual semantic segmentation." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS200.
In this thesis, we are interested in visual semantic segmentation, one of the high-level tasks that pave the way towards complete scene understanding. Specifically, it requires semantic understanding at the pixel level. With the success of deep learning in recent years, semantic segmentation problems are being tackled using deep architectures. In the first part, we focus on constructing a more appropriate loss function for semantic segmentation. More precisely, we define a novel loss function by employing a semantic edge detection network. This loss constrains pixel-level predictions to be consistent with the ground-truth semantic edge information, and thus leads to better-shaped segmentation results. In the second part, we address another important issue, namely alleviating the need to train segmentation models with large amounts of fully annotated data. We propose a novel attribution method that identifies the most significant regions in an image considered by classification networks. We then integrate our attribution method into a weakly supervised segmentation framework. The semantic segmentation models can thus be trained with only image-level labeled data, which can be easily collected in large quantities. All models proposed in this thesis are thoroughly evaluated experimentally on multiple datasets, and the results are competitive with the literature.
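As a rough illustration of how an edge-consistency term can be added to a standard segmentation objective, the sketch below combines pixel-wise cross-entropy with a boundary-agreement term, assuming PyTorch. Here the boundary maps are derived with simple finite differences; the cited thesis instead uses a learned semantic edge detection network, and all names and weights below are illustrative assumptions.

```python
# Sketch of cross-entropy plus a simple edge-consistency term for
# semantic segmentation, assuming PyTorch. Not the thesis's exact loss.
import torch
import torch.nn.functional as F

def label_edges(labels: torch.Tensor) -> torch.Tensor:
    """Binary boundary map (batch, H, W) from integer class labels (batch, H, W)."""
    lab = labels.float()
    dx = (lab[:, :, 1:] - lab[:, :, :-1]).abs().gt(0).float()  # changes between horizontal neighbors
    dy = (lab[:, 1:, :] - lab[:, :-1, :]).abs().gt(0).float()  # changes between vertical neighbors
    dx = F.pad(dx, (1, 0, 0, 0))  # pad width back to W
    dy = F.pad(dy, (0, 0, 1, 0))  # pad height back to H
    return (dx + dy).clamp(max=1.0)

def soft_edges(probs: torch.Tensor) -> torch.Tensor:
    """Differentiable boundary strength (batch, H, W) from class probabilities (batch, C, H, W)."""
    dx = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().sum(dim=1)
    dy = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().sum(dim=1)
    dx = F.pad(dx, (1, 0, 0, 0))
    dy = F.pad(dy, (0, 0, 1, 0))
    # Keep values strictly inside (0, 1) so binary cross-entropy is well defined.
    return ((dx + dy) / 2).clamp(1e-6, 1 - 1e-6)

def segmentation_loss(logits: torch.Tensor, labels: torch.Tensor,
                      edge_weight: float = 1.0) -> torch.Tensor:
    """logits: (batch, classes, H, W); labels: (batch, H, W) integer class ids."""
    ce = F.cross_entropy(logits, labels)
    edge_term = F.binary_cross_entropy(soft_edges(logits.softmax(dim=1)),
                                       label_edges(labels))
    return ce + edge_weight * edge_term

# Usage with random stand-ins for a network's output:
logits = torch.randn(2, 5, 64, 64, requires_grad=True)
labels = torch.randint(0, 5, (2, 64, 64))
segmentation_loss(logits, labels).backward()
```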
Fan, Wei. "Image super-resolution using neighbor embedding over visual primitive manifolds." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20FAN.
Hanwell, David. "Weakly supervised learning of visual semantic attributes." Thesis, University of Bristol, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.687063.
Books on the topic "Visual and semantic embedding"
Endert, Alex. Semantic Interaction for Visual Analytics. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-031-02603-4.
Paquette, Gilbert. Visual knowledge modeling for semantic web technologies: Models and ontologies. Hershey, PA: Information Science Reference, 2010.
Hussam, Ali. Semantic highlighting: An approach to communicating information and knowledge through visual metadata. [s.l.: The Author], 1999.
Valkola, Jarmo. Perceiving the visual in cinema: Semantic approaches to film form and meaning. Jyväskylä: Jyväskylän Yliopisto, 1993.
Chen, Chaomei. Effects of spatial-semantic interfaces in visual information retrieval: Three experimental studies. [Great Britain]: Resource, 2002.
K, Kokula Krishna Hari, ed. Multi-secret Semantic Visual Cryptographic Protocol for Securing Image Communications: ICCS 2014. Bangkok, Thailand: Association of Scientists, Developers and Faculties, 2014.
Bratko, Aleksandr. Artificial intelligence, legal system and state functions. ru: INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/1064996.
Video segmentation and its applications. New York: Springer, 2011.
Stoenescu, Livia. The Pictorial Art of El Greco. Amsterdam: Amsterdam University Press, 2019. http://dx.doi.org/10.5117/9789462989009.
Zhang, Yu-jin. Semantic-Based Visual Information Retrieval. IRM Press, 2006.
Book chapters on the topic "Visual and semantic embedding"
Wang, Haoran, Ying Zhang, Zhong Ji, Yanwei Pang, and Lin Ma. "Consensus-Aware Visual-Semantic Embedding for Image-Text Matching." In Computer Vision – ECCV 2020, 18–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58586-0_2.
Yang, Zhanbo, Li Li, Jun He, Zixi Wei, Li Liu, and Jun Liao. "Multimodal Learning with Triplet Ranking Loss for Visual Semantic Embedding Learning." In Knowledge Science, Engineering and Management, 763–73. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29551-6_67.
Jiang, Zhukai, and Zhichao Lian. "Self-supervised Visual-Semantic Embedding Network Based on Local Label Optimization." In Machine Learning for Cyber Security, 400–412. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-20102-8_31.
Filntisis, Panagiotis Paraskevas, Niki Efthymiou, Gerasimos Potamianos, and Petros Maragos. "Emotion Understanding in Videos Through Body, Context, and Visual-Semantic Embedding Loss." In Computer Vision – ECCV 2020 Workshops, 747–55. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_52.
Valério, Rodrigo, and João Magalhães. "Learning Semantic-Visual Embeddings with a Priority Queue." In Pattern Recognition and Image Analysis, 67–81. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-36616-1_6.
Syed, Arsal, and Brendan Tran Morris. "CNN, Segmentation or Semantic Embeddings: Evaluating Scene Context for Trajectory Prediction." In Advances in Visual Computing, 706–17. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64559-5_56.
Schall, Konstantin, Nico Hezel, Klaus Jung, and Kai Uwe Barthel. "Vibro: Video Browsing with Semantic and Visual Image Embeddings." In MultiMedia Modeling, 665–70. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27077-2_56.
Chen, Yanbei, and Loris Bazzani. "Learning Joint Visual Semantic Matching Embeddings for Language-Guided Retrieval." In Computer Vision – ECCV 2020, 136–52. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58542-6_9.
Theodoridou, Christina, Andreas Kargakos, Ioannis Kostavelis, Dimitrios Giakoumis, and Dimitrios Tzovaras. "Spatially-Constrained Semantic Segmentation with Topological Maps and Visual Embeddings." In Lecture Notes in Computer Science, 117–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87156-7_10.
Thoma, Steffen, Achim Rettinger, and Fabian Both. "Towards Holistic Concept Representations: Embedding Relational Knowledge, Visual Attributes, and Distributional Word Semantics." In Lecture Notes in Computer Science, 694–710. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68288-4_41.
Conference papers on the topic "Visual and semantic embedding"
Li, Zheng, Caili Guo, Zerun Feng, Jenq-Neng Hwang, and Xijun Xue. "Multi-View Visual Semantic Embedding." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/158.
Ren, Zhou, Hailin Jin, Zhe Lin, Chen Fang, and Alan Yuille. "Multiple Instance Visual-Semantic Embedding." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.89.
Wehrmann, Jônatas, and Rodrigo C. Barros. "Language-Agnostic Visual-Semantic Embeddings." In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/ctd.2021.15751.
Li, Binglin, and Yang Wang. "Visual Relationship Detection Using Joint Visual-Semantic Embedding." In 2018 24th International Conference on Pattern Recognition (ICPR). IEEE, 2018. http://dx.doi.org/10.1109/icpr.2018.8546097.
Ji, Rongrong, Hongxun Yao, Xiaoshuai Sun, Bineng Zhong, and Wen Gao. "Towards semantic embedding in visual vocabulary." In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2010. http://dx.doi.org/10.1109/cvpr.2010.5540118.
Hong, Ziming, Shiming Chen, Guo-Sen Xie, Wenhan Yang, Jian Zhao, Yuanjie Shao, Qinmu Peng, and Xinge You. "Semantic Compression Embedding for Generative Zero-Shot Learning." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/134.
Perez-Martin, Jesus, Jorge Perez, and Benjamin Bustos. "Visual-Syntactic Embedding for Video Captioning." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2021. Journal of LatinX in AI Research, 2021. http://dx.doi.org/10.52591/lxai202106259.
Zeng, Zhixian, Jianjun Cao, Nianfeng Weng, Guoquan Jiang, Yizhuo Rao, and Yuxin Xu. "Softmax Pooling for Super Visual Semantic Embedding." In 2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON). IEEE, 2021. http://dx.doi.org/10.1109/iemcon53756.2021.9623131.
Zhang, Licheng, Xianzhi Wang, Lina Yao, Lin Wu, and Feng Zheng. "Zero-Shot Object Detection via Learning an Embedding from Semantic Space to Visual Space." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/126.
Song, Yale, and Mohammad Soleymani. "Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00208.
Reports of organizations on the topic "Visual and semantic embedding"
Kud, A. A. Figures and Tables. Reprinted from "Comprehensive classification of virtual assets," A. A. Kud, 2021, International Journal of Education and Science, 4(1), 52–75. KRPOCH, 2021. http://dx.doi.org/10.26697/reprint.ijes.2021.1.6.a.kud.
Tabinskyy, Yaroslav. VISUAL CONCEPTS OF PHOTO IN THE MEDIA (ON THE EXAMPLE OF «UKRAINER» AND «REPORTERS»). Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11099.
Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, May 2023. http://dx.doi.org/10.3289/sw_2_2023.
Yatsymirska, Mariya. SOCIAL EXPRESSION IN MULTIMEDIA TEXTS. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.