A selection of scholarly literature on the topic "Apprentissage profond – Recherche de l'information"
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Apprentissage profond – Recherche de l'information".
Journal articles on the topic "Apprentissage profond – Recherche de l'information":
Rondón, Emil Amarilys. "El aprendizaje de los nativos digitales desde sus vivencias, pensamientos y acciones." GACETA DE PEDAGOGÍA, no. 38 (December 1, 2019): 112–36. http://dx.doi.org/10.56219/rgp.vi38.771.
Suárez Meneses, Carlos Daniel. "La educación virtual en tiempos de pandemia: Un enfoque praxeológico dentro del sistema educativo venezolano." GACETA DE PEDAGOGÍA, no. 44 (November 30, 2022): 66–89. http://dx.doi.org/10.56219/rgp.vi44.1247.
Fastrez, Pierre, and Thierry De Smedt. "A la recherche des compétences médiatiques. Introduction au dossier." Recherches en Communication 33 (October 7, 2011). http://dx.doi.org/10.14428/rec.v33i33.51773.
Dissertations on the topic "Apprentissage profond – Recherche de l'information":
Ayoub, Oussama. "Enrichissement sémantique non supervisé de longs documents spécialisés pour la recherche d’information." Electronic Thesis or Diss., Paris, HESAM, 2023. http://www.theses.fr/2023HESAC039.
Faced with the incessant growth of textual data that needs processing, Information Retrieval (IR) systems are confronted with the urgent need to adopt effective mechanisms for efficiently selecting the document sets best suited to specific queries. A predominant difficulty lies in the terminological divergence between the terms used in queries and those present in relevant documents. This semantic disparity, particularly pronounced for terms with similar meanings in large-scale documents from specialized domains, poses a significant challenge for IR systems. In addressing these challenges, many studies have been limited to query enrichment via supervised models, an approach that proves inadequate for industrial application and lacks flexibility. This thesis proposes LoGE, an innovative alternative: an unsupervised search system based on advanced Deep Learning methods. This system uses a masked language model to extrapolate associated terms, thereby enriching the textual representation of documents. The Deep Learning models used, pre-trained on extensive textual corpora, incorporate general or domain-specific knowledge, thus optimizing the document representation. The analysis of the generated extensions revealed an imbalance between signal (relevant added terms) and noise (irrelevant terms). To address this issue, we developed SummVD, an innovative extractive automatic summarization approach that uses singular value decomposition to synthesize the information contained in documents and identify the most pertinent sentences. This method has been adapted to filter the terms of the extensions based on the local context of each document, thereby maintaining the relevance of the information while minimizing noise.
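The general idea behind SVD-based extractive summarization can be illustrated with a minimal sketch: build a term-by-sentence matrix, decompose it, and keep the sentences that weigh most on the dominant singular vector. This is not the authors' SummVD implementation; the function name, tokenization, and scoring are our own simplifying assumptions.

```python
import numpy as np

def svd_summarize(sentences, k=2):
    """Select the k sentences that weigh most on the top singular vector."""
    tokenize = lambda s: s.lower().replace(".", "").split()
    vocab = sorted({w for s in sentences for w in tokenize(s)})
    # Binary term-by-sentence matrix.
    A = np.array([[1.0 if w in tokenize(s) else 0.0 for s in sentences]
                  for w in vocab])
    # Row 0 of Vt gives each sentence's weight on the dominant "topic".
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.abs(Vt[0])
    top = sorted(int(i) for i in np.argsort(scores)[::-1][:k])  # document order
    return [sentences[i] for i in top]

docs = [
    "Deep learning models enrich document representations.",
    "The weather was pleasant that day.",
    "Enriched representations improve document retrieval.",
]
summary = svd_summarize(docs, k=2)
```

Because the two on-topic sentences share vocabulary ("document", "representations"), they dominate the first singular vector and the off-topic sentence is left out.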
Belkacem, Thiziri. "Neural models for information retrieval : towards asymmetry sensitive approaches based on attention models." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30167.
This work is situated in the context of information retrieval (IR) using machine learning (ML) and deep learning (DL) techniques. It concerns different tasks requiring text matching, such as ad-hoc retrieval, question answering and paraphrase identification. The objective of this thesis is to propose new approaches, using DL methods, to construct semantic-based models for text matching, and to overcome the vocabulary mismatch problems related to the classical bag-of-words (BoW) representations used in traditional IR models. Indeed, traditional text matching methods are based on the BoW representation, which considers a given text as a set of independent words. The process of matching two sequences of text is then based on exact matching between words. The main limitation of this approach is the vocabulary mismatch. This problem occurs when the text sequences to be matched do not use the same vocabulary, even if their subjects are related. For example, the query may contain several words that are not necessarily used in the documents of the collection, including relevant documents. BoW representations also ignore several aspects of a text sequence, such as the structure and the context of words. These characteristics are important and make it possible to differentiate between two texts that use the same words but express different information. Another problem in text matching is related to the length of documents. The relevant parts can be distributed in different ways across the documents of a collection. This is especially true in large documents, which tend to cover a large number of topics and use a variable vocabulary. A long document may thus contain several relevant passages that a matching model must capture. Unlike long documents, short documents are likely to be relevant to a specific subject and tend to contain a more restricted vocabulary; assessing their relevance is in principle simpler than assessing that of longer documents.
In this thesis, we have proposed different contributions, each addressing one of the above-mentioned issues. First, in order to solve the problem of vocabulary mismatch, we used distributed representations of words (word embeddings) to allow a semantic matching between different words. These representations have been used in IR applications where document/query similarity is computed by comparing all the term vectors of the query with all the term vectors of the document, indiscriminately. Unlike the models proposed in the state of the art, we studied the impact of query terms with regard to their presence or absence in a document. We have adopted different document/query matching strategies. The intuition is that the absence of query terms from the relevant documents is in itself a useful signal to take into account in the matching process. Indeed, these terms do not appear in documents of the collection for two possible reasons: either their synonyms have been used, or they are not part of the context of the considered documents. The methods we have proposed make it possible, on the one hand, to perform an inexact matching between the document and the query, and on the other hand, to evaluate the impact of the different terms of a query in the matching process. Although the use of word embeddings allows semantic-based matching between different text sequences, these representations combined with classical matching models still consider the text as a list of independent elements (a bag of vectors instead of a bag of words). However, the structure of the text and the order of its words are important; any change in the structure of the text and/or the order of words alters the information expressed. In order to solve this problem, we used neural models for text matching.
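The "soft" embedding-based matching the abstract contrasts with exact matching can be sketched as follows: each query term is scored by its best cosine similarity to any document term, so a synonym still contributes where exact matching would give zero. The tiny embedding table and function names below are illustrative assumptions, not the models from the thesis.

```python
import math

EMB = {  # toy hand-made embedding table (assumption, for illustration only)
    "car":   [1.0, 0.1, 0.0],
    "auto":  [0.9, 0.2, 0.1],
    "bank":  [0.0, 1.0, 0.2],
    "river": [0.1, 0.2, 1.0],
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def soft_match(query, doc):
    """Average, over query terms, of the best cosine match in the document."""
    return sum(max(cos(EMB[q], EMB[d]) for d in doc) for q in query) / len(query)

# "car" never occurs in the document, but "auto" is a close neighbour,
# so the soft score stays high where exact matching would score zero.
score = soft_match(["car"], ["auto", "river"])
```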
Nguyen, Kim-Anh Laura. "Document Understanding with Deep Learning Techniques." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS077.
The field of Document Understanding, which addresses the problem of solving an array of Natural Language Processing tasks for visually-rich documents, faces challenges due to the complex structures and diverse formats of documents. Real-world documents rarely follow a strictly sequential structure. The visual presentation of a document, especially its layout, conveys rich semantic information, highlighting the crucial need for document understanding systems to include multimodal information. Despite notable advancements attributed to the emergence of Deep Learning, the field still grapples with various challenges in real-world applications. This thesis addresses two key challenges: 1) developing efficient and effective methods to encode the multimodal nature of documents, and 2) formulating strategies for efficient and effective processing of long and complex documents, considering their visual appearance. Our strategy to address the first research question involves designing approaches that rely only on layout to build meaningful representations. Multimodal pre-trained models for Document Understanding often neglect efficiency and fail to fully capitalize on the strong correlation between text and layout. We address these issues by introducing an attention mechanism based exclusively on layout information, enabling performance improvement and attention sparsification. Furthermore, we introduce a strategy based solely on layout to address reading order issues. While layout inherently captures the correct reading order of documents, existing pre-training methods for Document Understanding rely solely on OCR or PDF parsing to establish the reading order of documents, potentially introducing inaccuracies that can impact the entire text processing pipeline. Therefore, we discard sequential position information and propose a model that strategically leverages layout information as an alternative means to determine the reading order of documents. 
In addressing the second research axis, we explore the potential of leveraging layout to enhance the performance of models on tasks involving long and complex documents. The importance of document structure in information processing, particularly in the context of long documents, underscores the need for efficient modeling of layout information. To fill a notable void in resources and approaches for multimodal long document modeling, we introduce a dataset collection for summarization of long documents that takes their visual appearance into account, and present novel baselines that handle long documents with awareness of their layout.
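One way to picture the layout-only attention sparsification described in the abstract: let tokens attend only to tokens whose bounding boxes are spatially close, which both injects layout information and makes the attention matrix sparse. The box format, distance criterion, and threshold below are our own assumptions, not the thesis's mechanism.

```python
import math

def layout_attention_mask(boxes, radius=100.0):
    """boxes: list of (x0, y0, x1, y1); returns an n x n boolean mask where
    mask[i][j] is True iff token j is within `radius` of token i's center."""
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]
    n = len(centers)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            mask[i][j] = math.hypot(dx, dy) <= radius
    return mask

# Three tokens: two on the same line, one far down the page.
boxes = [(0, 0, 10, 10), (20, 0, 30, 10), (0, 500, 10, 510)]
mask = layout_attention_mask(boxes)
```

In a transformer, such a mask would typically be applied as an additive bias (0 where allowed, a large negative value where masked) before the attention softmax.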
Chafik, Sanaa. "Machine learning techniques for content-based information retrieval." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL008/document.
The amount of media data is growing at high speed with the fast growth of the Internet and media resources. Performing an efficient similarity (nearest neighbor) search in such a large collection of data is a very challenging problem that the scientific community has been attempting to tackle. One of the most promising solutions to this fundamental problem is Content-Based Media Retrieval (CBMR) systems: search systems that perform the retrieval task in large media databases based on the content of the data. CBMR systems consist essentially of three major units: a Data Representation unit for feature representation learning, a Multidimensional Indexing unit for structuring the resulting feature space, and a Nearest Neighbor Search unit to perform efficient search. Media data (e.g. image, text, audio, video) can be represented by meaningful numeric information (i.e. a multidimensional vector), called a Feature Description, describing the overall content of the input data. The task of the second unit is to structure the resulting feature descriptor space into an index structure, in which the third unit performs effective nearest neighbor search. In this work, we address the problem of nearest neighbor search by proposing three Content-Based Media Retrieval approaches. Our three approaches are unsupervised, and can thus adapt to both labeled and unlabeled real-world datasets. They are based on a hashing indexing scheme to perform effective high-dimensional nearest neighbor search. Unlike most recent hashing approaches, which favor indexing in Hamming space, our proposed methods provide index structures adapted to a real-space mapping. Although Hamming-based hashing methods achieve a good accuracy-speed tradeoff, their accuracy drops owing to information loss during the binarization process.
By contrast, real-space hashing approaches provide a more accurate approximation in the mapped real space, as they avoid hard binary approximations. Our proposed approaches can be classified into shallow and deep approaches. In the former category, we propose two shallow hashing-based approaches, namely "Symmetries of the Cube Locality Sensitive Hashing" (SC-LSH) and "Cluster-based Data Oriented Hashing" (CDOH), based respectively on randomized-hashing and shallow learning-to-hash schemes. The SC-LSH method provides a solution to the storage problem faced by most randomized hashing approaches: it consists of a semi-random scheme that partially reduces the randomness effect of randomized hashing approaches, and thus the memory storage problem, while maintaining their efficiency in structuring heterogeneous spaces. The CDOH approach eliminates the randomness effect by combining machine learning techniques with the hashing concept, and outperforms the randomized hashing approaches in terms of computation time, memory space and search accuracy. The third approach is a deep learning-based hashing scheme, named "Unsupervised Deep Neuron-per-Neuron Hashing" (UDN2H). It indexes individually the output of each neuron of the top layer of a deep unsupervised model, namely a Deep Autoencoder, with the aim of capturing the high-level individual structure of each neuron's output. Our three approaches, SC-LSH, CDOH and UDN2H, were proposed sequentially as the thesis progressed, with an increasing level of complexity in terms of the developed models, and in terms of the effectiveness and performance obtained on large real-world datasets.
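The Hamming-space baseline that this work contrasts with can be sketched with classic random-hyperplane locality-sensitive hashing: each bit records on which side of a random hyperplane a vector falls, so nearby vectors tend to share bits. This illustrates the general family only, not SC-LSH or CDOH; dimensions and bit counts are arbitrary.

```python
import random

def make_hasher(dim, n_bits, seed=0):
    """Return a function mapping a real vector to an n_bits binary code."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def hash_vec(v):
        # One bit per hyperplane: which side of the plane the vector lies on.
        return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0)
                     for p in planes)
    return hash_vec

h = make_hasher(dim=3, n_bits=8)
a = h([1.0, 0.0, 0.0])
b = h([0.99, 0.01, 0.0])   # nearly identical direction
c = h([-1.0, 0.0, 0.0])    # opposite direction

hamming = lambda x, y: sum(p != q for p, q in zip(x, y))
```

Similar vectors land in nearby (often identical) codes, while opposite vectors flip every bit; the binarization is exactly where the information loss mentioned above occurs.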
Tuo, Aboubacar. "Extraction d'événements à partir de peu d'exemples par méta-apprentissage." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG098.
Information Extraction (IE) is a research field whose objective is to automatically identify and extract structured information within a given domain from unstructured or minimally structured text data. Implementing such extraction often requires significant human effort, either in the form of rule development or the creation of annotated data for systems based on machine learning. One of the current challenges in information extraction is to develop methods that minimize the costs and development time of these systems whenever possible. This thesis focuses on few-shot event extraction through a meta-learning approach that aims to train IE models from only a few examples. We have redefined the task of event extraction from this perspective, aiming to develop systems capable of quickly adapting to new contexts with a small volume of training data. First, we propose methods to enhance event trigger detection by developing more robust representations for this task. Then, we tackle the specific challenge raised by the "NULL" class (absence of events) within this framework. Finally, we evaluate the effectiveness of our proposals in the broader context of event extraction by extending their application to the extraction of event arguments.
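A common few-shot setup in meta-learning, which may help picture the task, is prototype-based classification: each event class is represented by the mean of its few support examples, and a new mention is assigned to the nearest prototype. The thesis's actual architecture is not reproduced here; the vectors and labels below are toy values.

```python
def prototype(vectors):
    """Mean of the support vectors of one class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, protos):
    """Assign x to the nearest prototype by squared Euclidean distance."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(protos, key=lambda label: d2(x, protos[label]))

# Two event types, two support examples each (toy 2-d "encodings").
support = {
    "attack":   [[1.0, 0.0], [0.9, 0.1]],
    "election": [[0.0, 1.0], [0.1, 0.9]],
}
protos = {label: prototype(vs) for label, vs in support.items()}
label = classify([0.8, 0.2], protos)   # lands near the "attack" prototype
```

Handling the "NULL" class mentioned above is precisely what this simple scheme does not solve: a non-event mention still gets assigned to its nearest event prototype unless an explicit rejection mechanism is added.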
Tang, Anfu. "Leveraging linguistic and semantic information for relation extraction from domain-specific texts." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG081.
This thesis aims to extract relations from scientific documents in the biomedical domain, i.e. to transform unstructured texts into structured, machine-readable data. As a Natural Language Processing (NLP) task, the extraction of semantic relations between textual entities makes the underlying structures explicit and formalizes them. Current state-of-the-art methods rely on supervised learning, more specifically the fine-tuning of pre-trained language models such as BERT. Supervised learning requires a large number of examples that are expensive to produce, especially in specific domains such as the biomedical domain. BERT variants such as PubMedBERT have been successful on NLP tasks involving biomedical texts. We hypothesize that injecting external information, such as syntactic information or factual knowledge, into such BERT variants can compensate for the reduced amount of annotated training data. To this end, this thesis proposes several neural architectures based on PubMedBERT that exploit linguistic information obtained from syntactic parsers or domain knowledge from knowledge bases.
Paumard, Marie-Morgane. "Résolution automatique de puzzles par apprentissage profond." Thesis, CY Cergy Paris Université, 2020. http://www.theses.fr/2020CYUN1067.
The objective of this thesis is to develop semantic reassembly methods in the complicated framework of heritage collections, where some blocks are eroded or missing. The reassembly of archaeological remains is an important task for heritage sciences: it helps improve the understanding and conservation of ancient vestiges and artifacts. However, some sets of fragments cannot be reassembled with techniques that use contour information or visual continuities. It is then necessary to extract semantic information from the fragments and to interpret them. These tasks can be performed automatically thanks to deep learning techniques coupled with a solver, i.e. a constrained decision-making algorithm. This thesis proposes two semantic reassembly methods for 2D fragments with erosion, as well as a new dataset and evaluation metrics. The first method, Deepzzle, combines a neural network with a solver. The neural network is composed of two Siamese convolutional networks trained to predict the relative position of two fragments: a 9-class classification. The solver uses Dijkstra's algorithm to maximize the joint probability. Deepzzle can address the case of missing and supernumerary fragments, is capable of processing about 15 fragments per puzzle, and performs 25% better than the state of the art. The second method, Alphazzle, is based on AlphaZero and single-player Monte Carlo Tree Search (MCTS). It is an iterative method that uses deep reinforcement learning: at each step, a fragment is placed on the current reassembly. Two neural networks guide MCTS: an action predictor, which uses the fragment and the current reassembly to propose a strategy, and an evaluator, which is trained to predict the quality of the future result from the current reassembly. Alphazzle takes into account the relationships between all fragments and adapts to puzzles larger than those solved by Deepzzle.
Moreover, Alphazzle is compatible with the constraints imposed by a heritage setting: at the end of reassembly, MCTS does not access the reward, unlike AlphaZero. Indeed, the reward, which indicates whether a puzzle is well solved or not, can only be estimated by the algorithm, because only a conservator can be certain of the quality of a reassembly.
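The decision step common to both methods, choosing the arrangement that maximizes the joint probability of pairwise position predictions, can be sketched on a toy scale. We brute-force a three-fragment row instead of the Dijkstra or MCTS solvers used in the thesis, and all probabilities below are invented for illustration.

```python
import itertools
import math

# p[(a, b)] = predicted probability that fragment b sits to the right of a
# (toy values standing in for a network's pairwise predictions).
p = {
    ("A", "B"): 0.9, ("B", "A"): 0.1,
    ("B", "C"): 0.8, ("C", "B"): 0.2,
    ("A", "C"): 0.3, ("C", "A"): 0.7,
}

def best_row(fragments):
    """Order the fragments left-to-right to maximize the sum of log-probs."""
    def score(order):
        return sum(math.log(p[(a, b)]) for a, b in zip(order, order[1:]))
    return max(itertools.permutations(fragments), key=score)

order = best_row(["A", "B", "C"])
```

Maximizing a sum of log-probabilities is equivalent to minimizing a sum of costs -log p, which is what makes shortest-path formulations such as Dijkstra's algorithm applicable at larger scales where brute force is infeasible.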
Grivolla, Jens. "Apprentissage et décision automatique en recherche documentaire : prédiction de difficulté de requêtes et sélection de modèle de recherche." Avignon, 2006. http://www.theses.fr/2006AVIG0142.
This thesis is centered on information retrieval, with a focus on those queries that are particularly difficult for current retrieval systems to handle. In the application and evaluation settings we were concerned with, a user expresses an information need as a natural language query. There are different approaches for treating those queries, but current systems typically use a single approach for all of them, without taking the specific properties of each query into account. However, it has been shown that the performance of one strategy relative to another can vary greatly depending on the query. We have approached this problem by proposing methods that automatically identify the queries that will pose particular difficulties to the retrieval system, in order to allow for a specific treatment. This research topic was very new and barely beginning to be explored at the start of this work, but it has received much attention in recent years. We have developed a number of quality predictor functions that obtain results comparable to those recently published by other research teams. However, the ability of individual predictors to accurately classify queries by their level of difficulty remains rather limited. The major particularity and originality of our work lies in the combination of those different measures. Using methods of automatic classification with corpus-based training, we have been able to obtain quite reliable predictions on the basis of measures that, individually, are far less discriminant. We have also adapted our approach to other application settings, with very encouraging results. We have thus developed a method for the selective application of query expansion techniques, as well as for the selection of the most appropriate retrieval model for each query.
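The central idea, individually weak difficulty predictors combined by a trained classifier, can be sketched with a tiny logistic regression fit by gradient descent on toy data. The thesis's actual predictors and learner are not reproduced; features, labels, and hyperparameters below are assumptions.

```python
import math

def train_logreg(X, y, lr=0.5, steps=2000):
    """Fit a logistic regression by plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            pred = 1.0 / (1.0 + math.exp(-z))
            err = pred - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Each row: two noisy per-query "difficulty predictor" scores; label 1 = hard.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.9], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

The combined score can then gate per-query decisions, e.g. applying query expansion only when the predicted difficulty is low enough for expansion to be safe.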
Oita, Marilena. "Inférer des objets sémantiques du Web structuré." Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0060/document.
This thesis focuses on the extraction and analysis of Web data objects, investigated from different points of view: temporal, structural and semantic. We first survey different strategies and best practices for deriving temporal aspects of Web pages, together with a more in-depth study of Web feeds for this particular purpose, as well as other statistics. Next, in the context of Web pages dynamically generated by content management systems, we present two keyword-based techniques that perform article extraction from such pages. Keywords, automatically acquired, guide the process of object identification, either at the level of a single Web page (SIGFEED) or across different pages sharing the same template (FOREST). Finally, in the context of the deep Web, we present a generic framework that aims to discover the semantic model of a Web object (here, a data record) by, first, using FOREST for the extraction of objects, and second, representing the implicit rdf:type similarities between the object attributes and the entity of the form as relationships that, together with the instances extracted from the objects, form a labeled graph. This graph is further aligned with an ontology such as YAGO for the discovery of the unknown types and relations.
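The keyword-guided article extraction described above can be pictured with a minimal sketch: among the text blocks of a page, keep the one that best overlaps a set of automatically acquired keywords. This mimics the general idea only; SIGFEED and FOREST themselves are not reproduced, and the scoring is our own assumption.

```python
def extract_article(blocks, keywords):
    """Return the text block with the highest keyword overlap."""
    def score(block):
        words = set(block.lower().replace(".", "").replace(",", "").split())
        return len(words & keywords)
    return max(blocks, key=score)

blocks = [
    "Home | About | Contact",                # navigation template
    "The new telescope captured images of a distant galaxy.",
    "Copyright 2012, all rights reserved.",  # footer template
]
# In the thesis, keywords come e.g. from the page's Web feed entry; here
# they are simply given.
keywords = {"telescope", "galaxy", "images"}
article = extract_article(blocks, keywords)
```

Template blocks (navigation, footer) share no keywords with the feed-derived terms, so the content block wins regardless of where it sits in the page.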
Books on the topic "Apprentissage profond – Recherche de l'information":
Zorfass, Judith M., with Harriet Copel. Teaching Middle School Students to Be Active Researchers. Alexandria, VA: Association for Supervision and Curriculum Development, 1998.
Liu, Alex. Apache Spark Machine Learning Blueprints. Packt Publishing, Limited, 2016.
California Media and Library Educators Assn. From Library Skills to Information Literacy: A Handbook for the 21st Century. Hi Willow Research & Pub, 1994.
Wagh, Sanjeev J., Manisha S. Bhende, and Anuradha D. Thakare. Fundamentals of Data Science. Taylor & Francis Group, 2021.
Forsyth, R., and R. Rada. Machine Learning (Ellis Horwood Series Artificial Intelligence). Ellis Horwood, 1986.
Book chapters on the topic "Apprentissage profond – Recherche de l'information":
Rochdi, Sara, and Nadia El Ouesdadi. "Les étudiants et les pratiques numériques informelles: échange et collaboration sur le réseau social Facebook." In Langue(s) en mondialisation, 127–36. Editions des archives contemporaines, 2022. http://dx.doi.org/10.17184/eac.5204.