Contents
A selection of scientific literature on the topic "Multimodal Information Retrieval"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Multimodal Information Retrieval".
Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.
Journal articles on the topic "Multimodal Information Retrieval"
Xu, Hong. „Multimodal bird information retrieval system". Applied and Computational Engineering 53, no. 1 (March 28, 2024): 96–102. http://dx.doi.org/10.54254/2755-2721/53/20241282.
Cui, Chenhao, and Zhoujun Li. „Prompt-Enhanced Generation for Multimodal Open Question Answering". Electronics 13, no. 8 (April 10, 2024): 1434. http://dx.doi.org/10.3390/electronics13081434.
Kulvinder Singh, et al. „Enhancing Multimodal Information Retrieval Through Integrating Data Mining and Deep Learning Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 560–69. http://dx.doi.org/10.17762/ijritcc.v11i9.8844.
Ubaidullah Bokhari, Mohammad, and Faraz Hasan. „Multimodal Information Retrieval: Challenges and Future Trends". International Journal of Computer Applications 74, no. 14 (July 26, 2013): 9–12. http://dx.doi.org/10.5120/12951-9967.
Calumby, Rodrigo Tripodi. „Diversity-oriented Multimodal and Interactive Information Retrieval". ACM SIGIR Forum 50, no. 1 (June 27, 2016): 86. http://dx.doi.org/10.1145/2964797.2964811.
S. Gomathy, K. P. Deepa, T. Revathi, and L. Maria Michael Visuwasam. „Genre Specific Classification for Information Search and Multimodal Semantic Indexing for Data Retrieval". SIJ Transactions on Computer Science Engineering & its Applications (CSEA) 01, no. 01 (April 5, 2013): 10–15. http://dx.doi.org/10.9756/sijcsea/v1i1/01010159.
Zhang, Jing. „Video retrieval model based on multimodal information fusion". Journal of Computer Applications 28, no. 1 (July 10, 2008): 199–201. http://dx.doi.org/10.3724/sp.j.1087.2008.00199.
Mourão, André, Flávio Martins, and João Magalhães. „Multimodal medical information retrieval with unsupervised rank fusion". Computerized Medical Imaging and Graphics 39 (January 2015): 35–45. http://dx.doi.org/10.1016/j.compmedimag.2014.05.006.
Revuelta-Martínez, Alejandro, Luis Rodríguez, Ismael García-Varea, and Francisco Montero. „Multimodal interaction for information retrieval using natural language". Computer Standards & Interfaces 35, no. 5 (September 2013): 428–41. http://dx.doi.org/10.1016/j.csi.2012.11.002.
Chen, Xu, Alfred O. Hero, III, and Silvio Savarese. „Multimodal Video Indexing and Retrieval Using Directed Information". IEEE Transactions on Multimedia 14, no. 1 (February 2012): 3–16. http://dx.doi.org/10.1109/tmm.2011.2167223.
Der volle Inhalt der QuelleDissertationen zum Thema "Multimodal Information Retrieval"
Adebayo, Kolawole John <1986>. „Multimodal Legal Information Retrieval“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8634/1/ADEBAYO-JOHN-tesi.pdf.
Valero-Mas, Jose J. „Towards Interactive Multimodal Music Transcription". Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/71275.
Fedel, Gabriel de Souza. „Busca multimodal para apoio à pesquisa em biodiversidade". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275751.
Der volle Inhalt der QuelleDissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Research on computing applied to biodiversity presents several challenges, ranging from massive volumes of highly heterogeneous data to the variety of user profiles. This scenario requires versatile data retrieval and management tools. Available tools are still limited; most often they only consider textual data and do not take advantage of the other data types available, such as images or sounds. This dissertation discusses the issues involved in multimodal queries that use both text and images as search predicates in the biodiversity domain, and presents the specification and implementation of a set of tools to process such queries, validated with real data from Unicamp's Zoology Museum. The main contributions also include the construction of a taxonomic ontology that incorporates species' common names, and support for two user profiles (experts and laypeople). These features extend the scope of the queries currently available in biodiversity information systems. This research is part of the Bio-CORE project, a partnership between researchers in computing and biology to design and develop computational tools to support biodiversity research.
Master's degree
Databases
Master of Computer Science
Quack, Till. „Large scale mining and retrieval of visual data in a multimodal context". Konstanz: Hartung-Gorre, 2009. http://d-nb.info/993614620/04.
Calumby, Rodrigo Tripodi, 1985. „Recuperação multimodal de imagens com realimentação de relevância baseada em programação genética". [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275814.
Der volle Inhalt der QuelleDissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: This work presents an approach for multimodal content-based image retrieval with relevance feedback based on genetic programming. We assume that each collection image has associated textual information (e.g., metadata, textual descriptions) and that its visual properties (e.g., color and texture) are encoded in feature vectors by image descriptors. Given the information obtained over the relevance feedback iterations, genetic programming is used to create effective functions that combine the similarities associated with the different features into a single measure that more properly reflects the user's needs. The main contributions of this work are the proposal and implementation of two frameworks. The first, RFCore, is a generic framework for relevance feedback tasks over digital objects. The second, MMRF-GP, built on top of RFCore, is a framework for digital object retrieval with relevance feedback based on genetic programming. The proposed multimodal image retrieval method was validated on two image collections, one from the University of Washington and another from the ImageCLEF Photographic Retrieval Task. The proposed approach yielded better results for multimodal retrieval than the individual modalities in isolation, and achieved better visual and multimodal retrieval results than the best submissions to the ImageCLEF Photographic Retrieval Task 2008.
Master's degree
Information Retrieval Systems
Master of Computer Science
Simonetta, Federico. „Music Interpretation Analysis: A Multimodal Approach to Score-Informed Resynthesis of Piano Recordings". Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/918909.
Nag Chowdhury, Sreyasi. „Text-image synergy for multimodal retrieval and annotation". Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2021. http://d-nb.info/1240674139/34.
Inagaki, Yasuyoshi, Katsuhiko Toyama, Nobuo Kawaguchi, Shigeki Matsubara, and Satoru Matsunaga. „Sync/Mail : 話し言葉の漸進的変換に基づく即時応答インタフェース". Information Processing Society of Japan, 1998. http://hdl.handle.net/2237/15382.
Karlsson, Kristina. „Semantic representations of retrieved memory information depend on cue-modality". Thesis, Stockholms universitet, Psykologiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-58817.
Slizovskaia, Olga. „Audio-visual deep learning methods for musical instrument classification and separation". Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/669963.
Der volle Inhalt der QuelleEn la percepción musical, normalmente recibimos por nuestro sistema visual y por nuestro sistema auditivo informaciones complementarias. Además, la percepción visual juega un papel importante en nuestra experiencia integral ante una interpretación musical. Esta relación entre audio y visión ha incrementado el interés en métodos de aprendizaje automático capaces de combinar ambas modalidades para el análisis musical automático. Esta tesis se centra en dos problemas principales: la clasificación de instrumentos y la separación de fuentes en el contexto de videos musicales. Para cada uno de los problemas, se desarrolla un método multimodal utilizando técnicas de Deep Learning. Esto nos permite obtener -a través del aprendizaje- una representación codificada para cada modalidad. Además, para el problema de la separación de fuentes, también proponemos dos modelos condicionados a las etiquetas de los instrumentos, y examinamos la influencia que tienen dos fuentes de información extra en el rendimiento de la separación -comparándolas contra un modelo convencional-. Otro aspecto importante de este trabajo se basa en la exploración de diferentes modelos de fusión que permiten una mejor integración multimodal de fuentes de información de dominios asociados.
En la percepció visual, és habitual que rebem informacions complementàries des del nostres sistemes visual i auditiu. A més a més, la percepció visual té un paper molt important en la nostra experiència integral davant una interpretació musical. Aquesta relació entre àudio i visió ha fet créixer l'interès en mètodes d’aprenentatge automàtic capaços de combinar ambdues modalitats per l’anàlisi musical automàtic. Aquesta tesi se centra en dos problemes principals: la classificació d'instruments i la separació de fonts en el context dels vídeos musicals. Per a cadascú dels problemes, s'ha desenvolupat un mètode multimodal fent servir tècniques de Deep Learning. Això ens ha permès d'obtenir – gràcies a l’aprenentatge- una representació codificada per a cada modalitat. A més a més, en el cas del problema de separació de fonts, també proposem dos models condicionats a les etiquetes dels instruments, i examinem la influència que tenen dos fonts d’informació extra sobre el rendiment de la separació -tot comparant-les amb un model convencional-. Un altre aspecte d’aquest treball es basa en l’exploració de diferents models de fusió, els quals permeten una millor integració multimodal de fonts d'informació de dominis associats.
Books on the topic "Multimodal Information Retrieval"
Peters, Carol, Valentin Jijkoun, Thomas Mandl, Henning Müller, Douglas W. Oard, Anselmo Peñas, Vivien Petras, and Diana Santos, eds. Advances in Multilingual and Multimodal Information Retrieval. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-85760-0.
Kuo, C. C. Jay, ed. Video content analysis using multimodal information: For movie content extraction, indexing, and representation. Boston, Mass.: Kluwer Academic Publishers, 2003.
Li, Ying. Video Content Analysis Using Multimodal Information: For Movie Content Extraction, Indexing and Representation. Boston, MA: Springer US, 2003.
Peters, C., ed. Advances in multilingual and multimodal information retrieval: 8th Workshop of the Cross-Language Evaluation Forum, CLEF 2007, Budapest, Hungary, September 19-21, 2007: revised selected papers. Berlin: Springer, 2008.
Forner, Pamela. Multilingual and Multimodal Information Access Evaluation: Second International Conference of the Cross-Language Evaluation Forum, CLEF 2011, Amsterdam, The Netherlands, September 19-22, 2011. Proceedings. Berlin, Heidelberg: Springer-Verlag GmbH Berlin Heidelberg, 2011.
Li, Ying. Video content analysis using multimodal information: For movie content extraction, indexing, and representation. Boston, MA: Kluwer Academic Publishers, 2003.
Esposito, Anna. Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues: Third COST 2102 International Training School, Caserta, Italy, March 15-19, 2010, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
Bouma, Gosse, and SpringerLink (Online service), eds. Interactive Multi-modal Question-Answering. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.
Drygajlo, Andrzej, Anna Esposito, Javier Ortega-Garcia, Marcos Faúndez-Zanuy, and SpringerLink (Online service), eds. Biometric ID Management and Multimodal Communication: Joint COST 2101 and 2102 International Conference, BioID_MultiComm 2009, Madrid, Spain, September 16-18, 2009. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.
Jekosch, Ute, Stephen Brewster, and SpringerLink (Online service), eds. Haptic and Audio Interaction Design: 4th International Conference, HAID 2009, Dresden, Germany, September 10-11, 2009. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Multimodal Information Retrieval"
Baeza-Yates, Ricardo. „Retrieval Evaluation in Practice“. In Multilingual and Multimodal Information Access Evaluation, 2. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15998-5_2.
Chang, Edward Y. „Multimodal Fusion". In Foundations of Large-Scale Multimedia Information Management and Retrieval, 121–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20429-6_6.
Marx, Jutta, and Stephan Roppel. „WING: An Intelligent Multimodal Interface for a Materials Information System". In 14th Information Retrieval Colloquium, 67–78. London: Springer London, 1993. http://dx.doi.org/10.1007/978-1-4471-3211-0_5.
Alonso, Omar. „Crowdsourcing for Information Retrieval Experimentation and Evaluation". In Multilingual and Multimodal Information Access Evaluation, 2. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23708-9_2.
Bozzon, Alessandro, and Piero Fraternali. „Chapter 8: Multimedia and Multimodal Information Retrieval". In Search Computing, 135–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12310-8_8.
Tautkute, Ivona, and Tomasz Trzciński. „SynthTriplet GAN: Synthetic Query Expansion for Multimodal Retrieval". In Neural Information Processing, 287–98. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-92273-3_24.
Mayer, Rudolf, and Andreas Rauber. „Multimodal Aspects of Music Retrieval: Audio, Song Lyrics – and Beyond?" In Advances in Music Information Retrieval, 333–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11674-2_15.
Neumayer, Robert, and Andreas Rauber. „Multimodal Analysis of Text and Audio Features for Music Information Retrieval". In Multimodal Processing and Interaction, 1–17. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76316-3_11.
Purificato, Erasmo, and Antonio M. Rinaldi. „A Multimodal Approach for Cultural Heritage Information Retrieval". In Computational Science and Its Applications – ICCSA 2018, 214–30. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95162-1_15.
Rashid, Umer, Iftikhar Azim Niaz, and Muhammad Afzal Bhatti. „Unified Multimodal Search Framework for Multimedia Information Retrieval". In Advanced Techniques in Computing Sciences and Software Engineering, 129–36. Dordrecht: Springer Netherlands, 2009. http://dx.doi.org/10.1007/978-90-481-3660-5_22.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Multimodal Information Retrieval"
Ji, Wei, Yinwei Wei, Zhedong Zheng, Hao Fei, and Tat-seng Chua. „Deep Multimodal Learning for Information Retrieval". In MM '23: The 31st ACM International Conference on Multimedia. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3581783.3610949.
Ahmed, Shaikh Riaz, Jian-Ping Li, Memon Muhammad Hammad, and Khan Asif. „Image segmentation approach in multimodal information retrieval". In 2013 10th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). IEEE, 2013. http://dx.doi.org/10.1109/iccwamtip.2013.6716624.
Zhang, Rui, and Ling Guan. „Multimodal image retrieval via bayesian information fusion". In 2009 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2009. http://dx.doi.org/10.1109/icme.2009.5202623.
Bruno, Eric, Jana Kludas, and Stephane Marchand-Maillet. „Combining multimodal preferences for multimedia information retrieval". In the international workshop. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1290082.1290095.
Rusnandi, Enang, Edi Winarko, and S. N. Azhari. „A Survey on Multimodal Information Retrieval Approach". In 2020 International Conference on Smart Technology and Applications (ICoSTA). IEEE, 2020. http://dx.doi.org/10.1109/icosta48221.2020.1570611095.
Valstar, Michel, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, et al. „Ask Alice: an artificial retrieval of information agent". In ICMI '16: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2016. http://dx.doi.org/10.1145/2993148.2998535.
Caicedo, Juan C. „Multimodal Information Spaces for Content-based Image Retrieval". In Third BCS-IRSG Symposium on Future Directions in Information Access (FDIA 2009). BCS Learning & Development, 2009. http://dx.doi.org/10.14236/ewic/fdia2009.18.
Furui, Sadaoki, and Koh'ichiro Yamaguchi. „Designing a multimodal dialogue system for information retrieval". In 5th International Conference on Spoken Language Processing (ICSLP 1998). ISCA, 1998. http://dx.doi.org/10.21437/icslp.1998-84.
Jin, Zhenkun, Xingshi Wan, Xin Nie, Xinlei Zhou, Yuanyuan Yi, and Gefei Zhou. „Ranking on Heterogeneous Manifold for Multimodal Information Retrieval". In 2023 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom). IEEE, 2023. http://dx.doi.org/10.1109/ispa-bdcloud-socialcom-sustaincom59178.2023.00162.
Wajid, Mohd Anas, and Aasim Zafar. „Multimodal Information Access and Retrieval Notable Work and Milestones". In 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, 2019. http://dx.doi.org/10.1109/icccnt45670.2019.8944581.