Academic literature on the topic 'Recherche d'images par contenu visuel'
Journal articles on the topic "Recherche d'images par contenu visuel":
Tollari, Sabrina, Marcin Detyniecki, Ali Fakeri-Tabrizi, Christophe Marsala, Massih-Reza Amini, and Patrick Gallinari. "Exploitation du contenu visuel pour améliorer la recherche textuelle d'images en ligne." Document numérique 13, no. 1 (April 30, 2010): 187–209. http://dx.doi.org/10.3166/dn.13.1.187-209.
Cord, Matthieu, Jérôme Fournier, and Sylvie Philipp-Foliguet. "Approche interactive de la recherche d'images par le contenu." Techniques et sciences informatiques 23, no. 1 (February 1, 2004): 97–123. http://dx.doi.org/10.3166/tsi.23.97-123.
Bessai, F. Z., A. Hamadi, and S. Selmouni. "Indexation et Recherche d'Images par le Contenu." Revue d'Information Scientifique et Technique 12, no. 2 (March 2, 2004). http://dx.doi.org/10.4314/rist.v12i2.26681.
Mathieu, Suzanne, and James M. Turner. "Audiovision ou comment faire voir l’information par les personnes aveugles et malvoyantes : lignes directrices pour la description d’images en mouvement." Proceedings of the Annual Conference of CAIS / Actes du congrès annuel de l'ACSI, October 31, 2013. http://dx.doi.org/10.29173/cais229.
Dissertations / Theses on the topic "Recherche d'images par contenu visuel":
Hoàng, Nguyen Vu. "Prise en compte des relations spatiales contextuelles dans la recherche d'images par contenu visuel." Paris 9, 2011. http://basepub.dauphine.fr/xmlui/handle/123456789/8202.
This thesis focuses on methods for image retrieval by visual content in collections of heterogeneous content. We are interested in the description of spatial relationships between the entities present in images, which can be symbolic objects or visual primitives such as interest points. The first part of this thesis is dedicated to a state of the art of spatial-relationship description techniques. Building on this study, we propose the Δ-TSR approach, our first contribution, which performs similarity search on visual content using the triangular relationships between entities in images. In our experiments, the entities are local visual features based on salient points represented in a bag-of-features model. This approach improves not only the quality of image retrieval but also the execution time compared with other approaches in the literature. The second part is dedicated to the study of image context. The spatial relationships between entities in an image yield a global description of the image that we call the image context. Taking contextual spatial relationships into account during similarity search can improve retrieval quality by limiting false alarms. We define the context of an image as the presence of entity categories and their spatial relationships in the image. We studied the relationships between different entity categories on LabelMe, a state-of-the-art database of symbolic images with heterogeneous content. This statistical study, our second contribution, yields a cartography of their spatial relationships. It can be integrated into a graph-based model of contextual relationships, our third contribution. This graph describes general knowledge about every entity category. Spatial reasoning on this knowledge graph can help improve image-processing tasks such as the detection and localization of an entity category using the presence of another as a reference. Furthermore, this model can be applied to represent the context of an image: context-based similarity search is then achieved by comparing graphs, so the contextual similarity between two images is evaluated by the similarity between their graphs. This work was evaluated on the LabelMe symbolic image database; the experiments showed its relevance for image retrieval by context.
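The triangular encoding at the heart of Δ-TSR lends itself to a compact illustration. The sketch below is a simplified reconstruction, not the thesis code: it assumes keypoints already quantized into visual words, encodes each keypoint triplet by its sorted word labels plus quantized interior angles, and compares two images by histogram intersection over these triangle signatures.

```python
import itertools
from collections import Counter
import numpy as np

def triangle_signatures(points, words, n_angle_bins=8):
    """Encode each triplet of labeled keypoints as a triangular relation:
    sorted visual-word labels plus quantized interior angles (invariant to
    rotation and scale by construction)."""
    sigs = []
    for i, j, k in itertools.combinations(range(len(points)), 3):
        tri = np.asarray([points[i], points[j], points[k]], dtype=float)
        angles = []
        for a in range(3):
            v1 = tri[(a + 1) % 3] - tri[a]
            v2 = tri[(a + 2) % 3] - tri[a]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
        labels = tuple(sorted((words[i], words[j], words[k])))
        bins = tuple(min(n_angle_bins - 1, int(a / np.pi * n_angle_bins))
                     for a in sorted(angles))
        sigs.append(labels + bins)
    return sigs

def triangular_similarity(sigs_a, sigs_b):
    """Histogram intersection over the triangle signatures of two images."""
    ca, cb = Counter(sigs_a), Counter(sigs_b)
    inter = sum(min(n, cb[s]) for s, n in ca.items())
    return inter / max(1, min(len(sigs_a), len(sigs_b)))
```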
Michaud, Dorian. "Indexation bio-inspirée pour la recherche d'images par similarité." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2288/document.
Image retrieval is still a very active field of image processing, as the number of available image datasets continuously increases. One of the principal objectives of content-based image retrieval (CBIR) is to return the images most similar to a given query with respect to their visual content. Our work fits a very specific application context: indexing small expert image datasets with no prior knowledge of the images. Because of the image complexity, one of our contributions is the choice of effective descriptors from the literature, placed in direct competition. Two strategies are used to combine features: a psycho-visual one and a statistical one. In this context, we propose an unsupervised and adaptive framework, based on the well-known bag of visual words and phrases models, that selects the relevant visual descriptors for each keypoint to construct a more discriminative image representation. Experiments show the interest of this type of methodology at a time when convolutional neural networks are ubiquitous. We also propose a study of semi-interactive retrieval that improves the accuracy of CBIR systems by using the knowledge of expert users.
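For reference, the bag-of-visual-words baseline that this thesis adapts can be written in a few lines. The sketch below is the generic pipeline (k-means vocabulary, normalized word histogram), assuming precomputed local descriptors; it does not reproduce the thesis's per-keypoint descriptor selection.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(sampled_descriptors, n_words=256, seed=0):
    """Quantize a sample of local descriptors into a visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(sampled_descriptors)

def bovw_histogram(vocab, image_descriptors):
    """Represent one image as an L1-normalized histogram of visual words."""
    words = vocab.predict(image_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)
```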
Fauqueur, Julien. "Contributions pour la Recherche d'Images par Composantes Visuelles." PhD thesis, Université de Versailles-Saint Quentin en Yvelines, 2003. http://tel.archives-ouvertes.fr/tel-00007090.
A visual information retrieval system must allow the user to explicitly designate the visual target they are looking for, in terms of the different components of the image. Our objective in this work was to consider how to define visual search keys that let the user express this visual target, and to design and efficiently implement the corresponding methods.
The original contributions of this thesis concern new approaches for retrieving images from their different visual components, following two distinct search paradigms.
The first paradigm is retrieval by example region. It consists in finding the images that contain a part similar to a query visual part. For this paradigm, we developed an approach based on coarse segmentation into regions followed by a fine description of those regions. The coarse regions of the database images, extracted by our new unsupervised segmentation algorithm, represent the visually salient components of each image. This decomposition allows the user to designate a region of interest separately as the query. The search for similar regions in the database relies on a new region descriptor (ADCS). It offers a fine, compact and adaptive characterization of the photometric appearance of regions, in order to account for the specificity of a database of region descriptors. In this new approach, segmentation is fast and the extracted regions are intuitive for the user. The finer description of regions improves the similarity of the returned regions compared with existing descriptors, owing to the increased fidelity to region content.
Our second contribution is the development of a new paradigm of image retrieval by logical composition of region categories. This paradigm has the advantage of offering a solution to the page-zero problem: it makes it possible to reach the images, when they exist in the database, that come close to the user's mental representation of the visual target, so that no example image or region is needed when formulating the query. The paradigm relies on the unsupervised generation of a photometric thesaurus, a visual summary of the regions of the database. To formulate a query, the user accesses this summary directly, with logical composition operators over its different visual parts; each visual item in the summary represents a photometric class of regions. Logical queries over image content are thus akin to text retrieval queries. The originality of this paradigm opens rich perspectives for future work in visual information retrieval.
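At query time, this page-zero paradigm reduces to boolean set operations over the region classes present in each image. A hypothetical sketch, assuming each database image has already been mapped to the set of photometric region classes from the thesaurus:

```python
def query_by_composition(image_region_classes, required=(), forbidden=()):
    """Return the ids of images whose region classes satisfy a logical
    query: every `required` visual class present, no `forbidden` one.
    `image_region_classes` maps image id -> set of region-class ids
    (the ids would come from unsupervised clustering of region descriptors)."""
    hits = []
    for image_id, classes in image_region_classes.items():
        if set(required) <= classes and not (set(forbidden) & classes):
            hits.append(image_id)
    return hits

# e.g. images containing both a "sky"-like class and a "sand"-like class
# but no "vegetation"-like class (class ids are illustrative):
# results = query_by_composition(db, required={3, 17}, forbidden={42})
```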
Bouteldja, Nouha. "Accélération de la recherche dans les espaces de grande dimension : Application à l'indexation d'images par contenu visuel." Paris, CNAM, 2009. http://www.theses.fr/2009CNAM0628.
In this thesis we are interested in accelerating retrieval in large databases where entities are described by high-dimensional vectors (or multidimensional points). Several index structures have been proposed to accelerate retrieval, but many of them suffer from the well-known curse of dimensionality (CoD). In the first part of this thesis we revisit the CoD phenomenon with classical indices in order to determine the dimension beyond which these indices stop working; our study shows that classical indices still perform well at moderate dimensions (< 30) on real data. However, the need to accelerate retrieval is not met in high-dimensional spaces or in large databases. These observations motivated our main contribution, called HiPeR. HiPeR is based on a hierarchy of subspaces and indices: it performs nearest-neighbor search across spaces of different dimensions, starting from the lowest dimensions and moving up to the highest, aiming to minimize the effects of the curse of dimensionality. The hierarchy can be scanned according to several scenarios, presented for retrieval of exact as well as approximate neighbors. In this work, HiPeR has been implemented on the classical VA-File index structure, providing VA-Hierarchies. For the approximate scenario, the precision-loss model is probabilistic and non-parametric (very few assumptions are made on the data distribution), and the quality of the answers can be selected by the user at query time. HiPeR is evaluated for range queries on three real datasets of image descriptors, ranging from 500,000 to 4 million vectors. The experiments demonstrate that the HiPeR hierarchy significantly improves on the best index structure, reducing CPU time whatever the retrieval scenario. Its approximate version improves retrieval even further by significantly reducing I/O accesses. In the last part of the thesis, we study the particular case of multiple queries, where each database entity is represented by several vectors. To accelerate retrieval for such queries, different strategies are proposed to reduce I/O and CPU times. The proposed strategies were applied both to simple indices and to HiPeR.
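The pruning logic that makes such a hierarchy safe for exact range queries is that the Euclidean distance computed on a subset of coordinates lower-bounds the full distance. Below is a minimal brute-force sketch of this coarse-to-fine filtering; it uses coordinate prefixes as the subspaces and plain numpy scans instead of the VA-File machinery of the thesis.

```python
import numpy as np

def hierarchical_range_query(db, query, radius, dims=(8, 32, None)):
    """Coarse-to-fine range query. The distance computed on the first d
    coordinates lower-bounds the full distance, so low-dimensional passes
    can safely discard candidates before the full-dimensional check.
    `db` is an (n, D) array; `dims` lists prefix sizes, None meaning all D."""
    candidates = np.arange(len(db))
    for d in dims:
        sub_db = db[candidates] if d is None else db[candidates, :d]
        sub_q = query if d is None else query[:d]
        dist = np.linalg.norm(sub_db - sub_q, axis=1)
        candidates = candidates[dist <= radius]  # safe: lower bound <= full distance
    return candidates
```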
Le, Huu Ton. "Improving image representation using image saliency and information gain." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2287/document.
Nowadays, along with the development of multimedia technology, content-based image retrieval (CBIR) has become an interesting and active research topic with an increasing number of application domains: image indexing and retrieval, face recognition, event detection, handwriting scanning, object detection and tracking, image classification, landmark detection... One of the most popular models in CBIR is the bag of visual words (BoVW), inspired by the bag of words model from the information retrieval field. In the BoVW model, images are represented by histograms of visual words from a visual vocabulary; by comparing image signatures, we can tell images apart. Image representation plays an important role in a CBIR system, as it determines the precision of the retrieval results. In this thesis, the image representation problem is addressed. Our first contribution is a new framework for visual vocabulary construction using information gain (IG) values, computed by a weighting scheme combined with a visual attention model. Secondly, we propose to use a visual attention model to improve the performance of the proposed BoVW model. This contribution addresses the importance of salient keypoints in images through a study of the saliency of local feature detectors; inspired by the results of this study, we use saliency as a weighting or as an additional histogram for image representation. The last contribution of this thesis shows how our framework enhances the bag of visual phrases (BoVP) model. Finally, a query expansion technique is employed to increase the retrieval scores of both the BoVW and BoVP models.
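Using saliency as a weighting can be read, in its simplest form, as letting each keypoint vote into the word histogram with the saliency value at its location instead of a unit weight. A hypothetical sketch, where the vocabulary predictor, keypoint coordinates and saliency map are all assumed given:

```python
import numpy as np

def saliency_weighted_histogram(predict_words, descriptors, keypoints,
                                saliency_map, n_words):
    """BoVW histogram in which each keypoint votes with the saliency value
    at its (x, y) location instead of a unit weight."""
    words = predict_words(descriptors)           # one visual-word id per descriptor
    hist = np.zeros(n_words)
    for w, (x, y) in zip(words, keypoints):
        hist[w] += saliency_map[int(y), int(x)]  # row index = y, column index = x
    return hist / (hist.sum() + 1e-9)
```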
Leveau, Valentin. "Représentations d'images basées sur un principe de voisins partagés pour la classification fine." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT257/document.
This thesis focuses on fine-grained classification, a particular classification task in which classes may be visually distinguishable only by subtle localized details and in which the background often acts as a source of noise. This work is mainly motivated by the need to devise finer image representations for such tasks, encoding enough localized discriminant information such as the spatial arrangement of local features. To this end, the main research line we investigate relies on spatially localized similarities between images, computed with efficient approximate nearest-neighbor search techniques and localized parametric geometry. The main originality of our approach is to embed such spatially consistent localized similarities into a high-dimensional global image representation that preserves the spatial arrangement of the fine-grained visual patterns (contrary to traditional encoding methods such as BoW, Fisher or VLAD vectors). In a nutshell, this is done by considering all raw patches of the training set as a large visual vocabulary and by explicitly encoding their similarity to the query image. In more detail, the first contribution of this work is a classification scheme based on a spatially consistent k-NN classifier that pools similarity scores between local features of the query and those of the similar retrieved images in the vocabulary set. As this set can contain a great many local descriptors, we scale the approach up with approximate k-nearest-neighbor search methods. The main contribution is then a new aggregation-based explicit embedding, derived from a newly introduced match kernel based on shared nearest neighbors of localized feature vectors combined with local geometric constraints. The originality of this similarity-based representation space is that it directly integrates spatially localized geometric information into the aggregation process. Finally, as a third contribution, we propose a strategy that drastically reduces, by up to two orders of magnitude, the high dimensionality of this over-complete image representation while still providing competitive classification performance. We validated our approaches through a series of experiments on several classification tasks involving rigid objects, such as FlickrsLogos32 or Vehicles29, but also on tasks involving finer visual knowledge, such as FGVC-Aircrafts, Oxford-Flower102 or CUB-Birds200. We also demonstrated significant results on fine-grained audio classification tasks, such as the LifeCLEF 2015 bird species identification challenge, by proposing a temporal extension of our image representation. Finally, we showed that our dimensionality reduction technique, used on top of our representation, results in a highly interpretable visual vocabulary composed of the most representative image regions for the different visual concepts of the training set.
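The core idea of encoding a query by its similarity to the patches of the database can be sketched compactly. The version below is an illustrative simplification (no geometric constraints, no match-kernel formalism): each query patch casts soft votes, via k-nearest-neighbor search, for the database images that own its retrieved neighbors.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_match_embedding(db_patches, patch_owner, query_patches, k=5):
    """Embed a query image as a vector over database images: coordinate i
    accumulates soft votes whenever a query patch retrieves, among its k
    nearest neighbors, a patch belonging to image i.
    `patch_owner[j]` is the id of the image that database patch j comes from."""
    patch_owner = np.asarray(patch_owner)
    nn = NearestNeighbors(n_neighbors=k).fit(db_patches)
    dists, idxs = nn.kneighbors(query_patches)
    emb = np.zeros(patch_owner.max() + 1)
    for d_row, i_row in zip(dists, idxs):
        for d, i in zip(d_row, i_row):
            emb[patch_owner[i]] += np.exp(-d)   # closer matches vote harder
    return emb / (np.linalg.norm(emb) + 1e-9)
```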
Landre, Jérôme. "Analyse multirésolution pour la recherche et l'indexation d'images par le contenu dans les bases de données images - Application à la base d'images paléontologique Trans'Tyfipal." PhD thesis, Université de Bourgogne, 2005. http://tel.archives-ouvertes.fr/tel-00079897.
Classical content-based approaches built on a single global descriptor vector raise three problems:
1) The size of the descriptor vector (n > 100) makes distance computations sensitive to the curse of dimensionality;
2) The presence of attributes of different kinds in the descriptor vector does not ease classification;
3) Classification does not (in general) adapt to the user's search context.
In this work we propose a method based on building hierarchies of signatures of increasing (but reduced) sizes, which makes it possible to take the user's search context into account. Our method tends to mimic the behavior of human vision.
The descriptor vector contains attributes obtained from a multiresolution analysis of the images. These attributes are organized by an expert of the image-database domain into several hierarchies of four signature vectors of increasing, reduced size (4, 6, 8 and 10 attributes respectively). These signatures are used to build a fuzzy search tree with the k-means ("nuées dynamiques") algorithm, for which two improvements are proposed. Online users choose a signature hierarchy among those proposed by the expert, according to their search context.
A demonstration application was developed. It uses a dynamic web interface (PHP); the (optimized) image processing is performed with the Intel IPP and OpenCV libraries; storage and indexing rely on a MySQL database; and a 3D visualization interface (Java3D) shows how the images are distributed in the classification.
A protocol of psycho-visual tests was carried out. The results on the Trans'Tyfipal paleontological database are presented and provide relevant answers according to the search context. The method gives good results, both in computation time and in the relevance of the returned images, when browsing homogeneous image databases.
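The coarse-to-fine use of these 4-, 6-, 8- and 10-attribute signatures can be illustrated with a simple filtering loop. This sketch replaces the thesis's fuzzy search tree with direct distance ranking at each level, purely to show the principle; the per-level signature arrays and query vectors are assumed precomputed.

```python
import numpy as np

def hierarchical_signature_search(levels, query_sigs, keep=0.25):
    """Coarse-to-fine search through signature levels of increasing size
    (e.g. 4, then 6, 8 and 10 attributes): at each level only the images
    closest to the query survive to the next, richer level.
    `levels` is a list of (n_images, d_level) arrays; `query_sigs` is the
    matching list of query vectors."""
    candidates = np.arange(len(levels[0]))
    for sigs, q in zip(levels, query_sigs):
        dist = np.linalg.norm(sigs[candidates] - q, axis=1)
        n_keep = max(1, int(len(candidates) * keep))
        candidates = candidates[np.argsort(dist)[:n_keep]]
    return candidates
```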
Gbehounou, Syntyche. "Indexation de bases d'images : évaluation de l'impact émotionnel." Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT2295/document.
The goal of this work is to propose an efficient approach for emotional impact recognition based on CBIR techniques (descriptors, image representation). The main idea consists in classifying images according to their emotion, which can be "Negative", "Neutral" or "Positive". Emotion is related both to the image content and to personal feelings. To achieve our goal, we first need a correctly assessed image database. Our first contribution addresses this aspect: we propose a set of 350 diversified images rated by people around the world. In addition to our choice of CBIR methods, we studied the impact of visual saliency on the subjective evaluations and of interest-region segmentation on classification. The results are really interesting and prove that CBIR methods are useful for emotion recognition. The chosen descriptors are complementary, and their performance is consistent both on the database we built and on IAPS, the reference database for analyzing the emotional impact of images.
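As a schematic of the classification stage, one plausible setup (not the thesis's exact learning machinery) trains a three-class classifier over the global CBIR descriptors:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_emotion_classifier(X, y):
    """Three-class emotion classifier over image descriptors.
    X: one CBIR descriptor vector per image;
    y: -1 = Negative, 0 = Neutral, 1 = Positive (illustrative encoding)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return clf.fit(X, y)
```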
Niaz, Usman. "Amélioration de la détection des concepts dans les vidéos en coupant de plus grandes tranches du monde visuel." Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0040.
Visual material comprising images and videos is growing ever more rapidly over the internet and in our personal collections. This necessitates automatic understanding of the visual content, which calls for intelligent methods to correctly index, search and retrieve images and videos. This thesis aims at improving the automatic detection of concepts in internet videos by exploring all the available information and putting the most beneficial of it to good use. Our contributions address various levels of the concept detection framework and can be divided into three main parts. The first part improves the bag of words (BOW) video representation model by proposing a novel BOW construction mechanism using concept labels and by refining the BOW signature based on the distribution of its elements. In the second part, we devise methods to incorporate knowledge from similar and dissimilar entities to build improved recognition models: we look at the potential information that concepts share and build models for meta-concepts, from which concept-specific results are derived. This improves recognition for concepts lacking labeled examples. Lastly, we propose semi-supervised learning methods to get the best out of the substantial amount of unlabeled data, including techniques that improve the semi-supervised co-training algorithm with optimal view selection.
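Co-training, the baseline that the last contribution builds on, is easy to sketch. The version below is the vanilla two-view algorithm (logistic classifiers, most-confident pseudo-labels exchanged each round), under the assumption that unlabeled samples carry the label -1; the thesis's optimal view selection is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain(Xa, Xb, y, n_rounds=5, n_add=10):
    """Vanilla two-view co-training: each classifier pseudo-labels, for the
    other, the unlabeled samples it is most confident about.
    `Xa` and `Xb` are the two feature views; `y` uses -1 for unlabeled."""
    y = y.copy()
    labeled = y != -1
    clf_a = LogisticRegression(max_iter=1000)
    clf_b = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf_a.fit(Xa[labeled], y[labeled])
        clf_b.fit(Xb[labeled], y[labeled])
        for clf, X in ((clf_a, Xa), (clf_b, Xb)):
            pool = np.where(~labeled)[0]
            if len(pool) == 0:
                return clf_a, clf_b
            conf = clf.predict_proba(X[pool]).max(axis=1)
            best = pool[np.argsort(-conf)[:n_add]]
            y[best] = clf.predict(X[best])   # pseudo-labels feed the other view
            labeled[best] = True
    return clf_a, clf_b
```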
Books on the topic "Recherche d'images par contenu visuel":
Dey, Nilanjan, and Wahiba Ben Abdessalem Karaa. Mining Multimedia Documents. Taylor & Francis Group, 2017.
Chang, Ni-Bin, and Kaixu Bai. Multisensor Data Fusion and Machine Learning for Environmental Remote Sensing. Taylor & Francis Group, 2018.
Lian, Shiguo, Yiannis Kompatsiaris, and Bernard Merialdo. TV Content Analysis: Techniques and Applications. Auerbach Publishers, Incorporated, 2012.