Selected scientific literature on the topic "Apprentissage de la représentation visuelle"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings and other scholarly sources relevant to the topic "Apprentissage de la représentation visuelle".
Journal articles on the topic "Apprentissage de la représentation visuelle"
Gauthier, Mathieu. "Perceptions des élèves du secondaire par rapport à la résolution de problèmes en algèbre à l’aide d’un logiciel dynamique et la stratégie Prédire – investiguer – expliquer". Éducation et francophonie 42, no. 2 (December 19, 2014): 190–214. http://dx.doi.org/10.7202/1027913ar.
Massion, Jean. "Posture, représentation interne et apprentissage". STAPS 19, no. 46 (1998): 209–15. http://dx.doi.org/10.3406/staps.1998.1290.
Barboux, Cécile, Michel Liquière, and Jean Sallantin. "Argumentation d'un apprentissage à l'aide d'expression visuelle". Intellectica. Revue de l'Association pour la Recherche Cognitive 2, no. 1 (1987): 195–212. http://dx.doi.org/10.3406/intel.1987.1807.
Dubé, Raymonde, Gabriel Goyette, Monique Lebrun, and Marie-Thérèse Vachon. "Image mentale et apprentissage de l’orthographe lexicale". Articles 17, no. 2 (November 16, 2009): 191–205. http://dx.doi.org/10.7202/900695ar.
Ledoux-Beaugrand, Évelyne. "Le sexe rédimé par l’amour. Regard sur l’adaptation cinématographique de Borderline de Marie-Sissi Labrèche". Globe 12, no. 2 (February 15, 2011): 83–94. http://dx.doi.org/10.7202/1000708ar.
Giasson, Thierry, Richard Nadeau, and Éric Bélanger. "Débats télévisés et évaluations des candidats: la représentation visuelle des politiciens canadiens agit-elle dans la formation des préférences des électeurs québécois?" Canadian Journal of Political Science 38, no. 4 (December 2005): 867–95. http://dx.doi.org/10.1017/s0008423905050377.
Giasson, Thierry. "La préparation de la représentation visuelle des leaders politiques". Questions de communication, no. 9 (June 30, 2006): 357–81. http://dx.doi.org/10.4000/questionsdecommunication.5226.
Dine, Susan. "Critique, dialogue et action. La représentation des musées dans Black Panther". Muséologie et cinéma : perspectives contemporaines 43 (2024): 90–112. http://dx.doi.org/10.4000/11rah.
Douar, Fabrice. "Carpaccio et l'espace de la représentation". Psychosomatique relationnelle N° 2, no. 1 (July 7, 2014): 35–48. http://dx.doi.org/10.3917/psyr.141.0035.
Reynolds, Alexandra. "Émotions et apprentissage de l'anglais dans l’enseignement supérieur : une approche visuelle". Voix Plurielles 12, no. 1 (May 6, 2015): 66–81. http://dx.doi.org/10.26522/vp.v12i1.1175.
Theses / dissertations on the topic "Apprentissage de la représentation visuelle"
Risser-Maroix, Olivier. "Similarité visuelle et apprentissage de représentations". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7327.
The objective of this CIFRE thesis is to develop an image search engine, based on computer vision, to assist customs officers. We observe, paradoxically, an increase in security threats (terrorism, trafficking, etc.) coupled with a decrease in the number of customs officers. Cargo images acquired by X-ray scanners already allow a load to be inspected without requiring it to be opened and fully searched. By automatically proposing similar images, such a search engine would support the customs officer's decision making when faced with infrequent or suspicious visual signatures of products. Thanks to the development of modern artificial intelligence (AI) techniques, our era is undergoing great changes: AI is transforming all sectors of the economy. Some see this advent of "robotization" as the dehumanization of the workforce, or even its replacement. However, reducing the use of AI to the mere pursuit of productivity gains would be reductive. In reality, AI could increase the work capacity of humans rather than compete with them in order to replace them. It is in this context, the birth of augmented intelligence, that this thesis takes place. This manuscript, devoted to the question of visual similarity, is divided into two parts, proposing two practical cases where collaboration between humans and AI is beneficial. The first part investigates the problem of learning representations for the retrieval of similar images. After implementing a first system similar to those proposed in the state of the art, one of its main limitations is pointed out: semantic bias. Indeed, the main contemporary methods use image datasets coupled with semantic labels only, and the literature considers two images to be similar if they share the same label. This vision of the notion of similarity, however fundamental in AI, is reductive. It is therefore questioned in the light of work in cognitive psychology in order to propose an improvement: taking visual similarity into account. This new definition allows a better synergy between the customs officer and the machine. This work is the subject of scientific publications and a patent. In the second part, after identifying the key components that improve the performance of the previously proposed system, an approach mixing empirical and theoretical research is proposed. This second case, augmented intelligence, is inspired by recent developments in mathematics and physics. First applied to the understanding of an important hyperparameter (temperature), then to a larger task (classification), the proposed method provides an intuition of the importance and role of factors correlated with the studied variable (e.g. hyperparameter, score, etc.). The resulting processing chain has demonstrated its efficiency by providing a highly explainable solution in line with decades of research in machine learning. These findings will allow the improvement of previously developed solutions.
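As a purely editorial illustration of the temperature hyperparameter mentioned in this abstract (this is not code from the thesis; the gallery, query and temperature value are hypothetical), the following Python sketch shows the generic role temperature plays when similarity scores are turned into a retrieval ranking distribution:

import numpy as np

def cosine_scores(query, gallery):
    # Cosine similarity between one query vector and each row of the gallery.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return g @ q

def softmax_with_temperature(scores, temperature=0.1):
    # Lower temperature sharpens the distribution over candidates; higher flattens it.
    z = scores / temperature
    z = z - z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))   # hypothetical embeddings of scanned cargo images
query = rng.normal(size=128)             # hypothetical embedding of the query image
scores = cosine_scores(query, gallery)
weights = softmax_with_temperature(scores, temperature=0.05)
top5 = np.argsort(-scores)[:5]           # indices of the five most similar images

Comparing the resulting distributions at different temperatures gives an intuition of the kind of hyperparameter analysis the abstract refers to; the sketch makes no claim about the thesis's actual pipeline.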
Saxena, Shreyas. "Apprentissage de représentations pour la reconnaissance visuelle". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM080/document.
In this dissertation, we propose methods and data-driven machine learning solutions which address and benefit from the recent overwhelming growth of digital media content. First, we consider the problem of improving the efficiency of image retrieval. We propose a coordinated local metric learning (CLML) approach which learns local Mahalanobis metrics and integrates them in a global representation where the ℓ2 distance can be used. This allows for data visualization in a single view, and the use of efficient ℓ2-based retrieval methods. Our approach can be interpreted as learning a linear projection on top of an explicit high-dimensional embedding of a kernel. This interpretation allows the use of existing frameworks for Mahalanobis metric learning to learn local metrics in a coordinated manner. Our experiments show that CLML improves over previous global and local metric learning approaches for the task of face retrieval. Second, we present an approach to leverage the success of CNN models for visible spectrum face recognition to improve heterogeneous face recognition, e.g., recognition of near-infrared images from visible spectrum training images. We explore different metric learning strategies over features from the intermediate layers of the networks, to reduce the discrepancies between the different modalities. In our experiments we found that the depth of the optimal features for a given modality is positively correlated with the domain shift between the source domain (CNN training data) and the target domain. Experimental results show that we can use CNNs trained on visible spectrum images to obtain results that improve over the state of the art for heterogeneous face recognition with near-infrared images and sketches. Third, we present convolutional neural fabrics for exploring the discrete and exponentially large CNN architecture space in an efficient and systematic manner. Instead of aiming to select a single optimal architecture, we propose a "fabric" that embeds an exponentially large number of architectures. The fabric consists of a 3D trellis that connects response maps at different layers, scales, and channels with a sparse homogeneous local connectivity pattern. The only hyperparameters of the fabric (the number of channels and layers) are not critical for performance. The acyclic nature of the fabric allows us to use backpropagation for learning. Learning can thus efficiently configure the fabric to implement each one of exponentially many architectures and, more generally, ensembles of all of them. While scaling linearly in terms of computation and memory requirements, the fabric leverages exponentially many chain-structured architectures in parallel by massively sharing weights between them. We present benchmark results competitive with the state of the art for image classification on MNIST and CIFAR10, and for semantic segmentation on the Part Labels dataset.
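To illustrate the general link between Mahalanobis metric learning and efficient ℓ2-based retrieval described above (a minimal sketch with random, hypothetical data; it is not the CLML implementation), note that a Mahalanobis metric M = LᵀL equals the Euclidean distance after the linear projection x ↦ Lx:

import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 16
L = rng.normal(size=(k, d))        # stands in for a learned projection (here random)
gallery = rng.normal(size=(500, d))
query = rng.normal(size=d)

# Mahalanobis distance with M = L^T L is plain Euclidean distance after projecting.
proj_gallery = gallery @ L.T
proj_query = L @ query
dists = np.linalg.norm(proj_gallery - proj_query, axis=1)
nearest = np.argsort(dists)[:10]   # retrieval now reuses standard l2 machinery

This equivalence is why a learned metric can be plugged into off-the-shelf nearest-neighbour search once the projection is applied.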
Tamaazousti, Youssef. "Vers l’universalité des représentations visuelle et multimodales". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC038/document.
Because of its key societal, economic and cultural stakes, Artificial Intelligence (AI) is a hot topic. One of its main goals is to develop systems that facilitate the daily life of humans, with applications such as household robots, industrial robots, autonomous vehicles and much more. The rise of AI is largely due to the emergence of tools based on deep neural networks which make it possible to simultaneously learn the representation of the data (which was traditionally hand-crafted) and the task to solve (traditionally learned with statistical models). This resulted from the conjunction of theoretical advances, growing computational capacity and the availability of many annotated data. A long-standing goal of AI is to design machines inspired by humans, capable of perceiving the world and interacting with humans in an evolutionary way. In this thesis, we categorize the works around AI into the two following learning approaches: (i) Specialization: learn representations from a few specific tasks with the goal of carrying out very specific tasks (specialized in a certain field) with a very good level of performance; (ii) Universality: learn representations from several general tasks with the goal of performing as many tasks as possible in different contexts. While specialization has been extensively explored by the deep-learning community, only a few implicit attempts have been made towards universality. Thus, the goal of this thesis is to explicitly address the problem of improving universality with deep-learning methods, for image and text data. We have addressed this topic of universality in two different forms: through the implementation of methods to improve universality ("universalizing methods"); and through the establishment of a protocol to quantify universality. Concerning universalizing methods, we propose three technical contributions: (i) in a context of large semantic representations, a method to reduce redundancy between the detectors through adaptive thresholding and the relations between concepts; (ii) in the context of neural-network representations, an approach that increases the number of detectors without increasing the amount of annotated data; (iii) in a context of multimodal representations, a method to preserve the semantics of unimodal representations in multimodal ones. Regarding the quantification of universality, we propose to evaluate universalizing methods in a transfer-learning scheme. Indeed, this scheme is relevant for assessing the universal ability of representations. This also led us to propose a new framework as well as new quantitative evaluation criteria for universalizing methods.
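As a purely illustrative sketch of a transfer-learning style evaluation of a frozen representation (the task names, features and labels below are synthetic placeholders; the thesis defines its own protocol and criteria), one common proxy is to train a linear probe on several target tasks and aggregate the scores:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_probe_accuracy(features, labels):
    # Train a linear classifier on frozen features and report held-out accuracy.
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

rng = np.random.default_rng(0)
tasks = {name: (rng.normal(size=(300, 64)), rng.integers(0, 5, size=300))
         for name in ["task_a", "task_b", "task_c"]}   # hypothetical target tasks
scores = {name: linear_probe_accuracy(X, y) for name, (X, y) in tasks.items()}
universality_proxy = float(np.mean(list(scores.values())))  # one crude aggregate score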
Lienou, Marie Lauginie. "Apprentissage automatique des classes d'occupation du sol et représentation en mots visuels des images satellitaires". PhD thesis, Paris, ENST, 2009. https://pastel.hal.science/pastel-00005585.
Land cover recognition from automatic classifications is one of the important methodological research topics in remote sensing. Besides, getting results corresponding to user expectations requires approaching the classification from a semantic point of view. Within this frame, this work aims at the elaboration of automatic methods capable of learning classes defined by cartography experts, and of automatically annotating unknown images based on this classification. Using Corine Land Cover maps, we first show that classical approaches in the state of the art are able to identify homogeneous classes such as fields well, but have difficulty in finding high-level semantic classes, also called mixed classes because they consist of various land cover categories. To detect such classes, we represent images as visual words, in order to use text analysis tools which have shown their efficiency in the field of text mining. By means of supervised and unsupervised approaches on one hand, we exploit the notion of semantic compositionality: image structures which are considered as mixtures of land cover types are detected by bringing out the importance of spatial relations between the visual words. On the other hand, we propose a semantic annotation method using a statistical text analysis model: Latent Dirichlet Allocation. We rely on this mixture model, which requires a bag-of-words representation of images, to properly model high-level semantic classes. The proposed approach and the comparative studies with Gaussian and GMM models, as well as an SVM classifier, are assessed using SPOT and QuickBird images, among others.
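The bag-of-visual-words plus Latent Dirichlet Allocation idea summarized above can be sketched as follows (synthetic descriptors and arbitrary vocabulary/topic sizes; this is not the thesis's code or data):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(5000, 32))     # local descriptors pooled from all images
image_ids = rng.integers(0, 100, size=5000)   # which of the 100 images each one came from

# 1) Visual vocabulary: cluster descriptors; each cluster centre is a "visual word".
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(descriptors)
words = kmeans.predict(descriptors)

# 2) Bag-of-words counts: one row per image, one column per visual word.
counts = np.zeros((100, 50), dtype=int)
np.add.at(counts, (image_ids, words), 1)

# 3) Topic model: LDA infers a mixture of latent themes for every image.
lda = LatentDirichletAllocation(n_components=8, random_state=0)
image_topics = lda.fit_transform(counts)      # shape (100, 8), rows sum to roughly 1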
Lienou, Marie Lauginie. "Apprentissage automatique des classes d'occupation du sol et représentation en mots visuels des images satellitaires". PhD thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005585.
El-Zakhem, Imad. "Modélisation et apprentissage des perceptions humaines à travers des représentations floues : le cas de la couleur". Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001090.pdf.
The target of this thesis is to implement an interactive modeling of the user's perception and to create an appropriate profile. We present two methods to build the profile representing the perception of the user through fuzzy subsets: a descriptive method used by an expert user, and a constructive method used by a non-expert user. For the descriptive method, we propose a questioning procedure allowing the user to define his profile completely. For the constructive method, the user defines his perception by comparing and selecting profiles reflecting the perception of other, expert users. We present an aggregation procedure for building the user's profile from the selected expert profiles and the rates of satisfaction. As a case study, we describe an application modeling color perception. Thereafter, we exploit the profiles already built for image classification. We propose a procedure that builds the profile of an image according to the user's perception, by using the standard profile of the image and the user's profile representing his perception. In this method we use new definitions of the notions of comparability and compatibility of two fuzzy subsets. Finally, we present an implementation of the whole procedure and the structure of the database, as well as some examples and results.
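As a loose illustration of representing a colour term with fuzzy subsets and aggregating expert profiles with satisfaction weights (the membership functions, hue ranges and weights are invented for the example and do not come from the thesis):

import numpy as np

def trapezoid(x, a, b, c, d):
    # Trapezoidal fuzzy membership function over a hue axis (degrees).
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

hue = np.linspace(0.0, 360.0, 361)
expert_profiles = np.stack([
    trapezoid(hue, 0, 10, 25, 40),   # one expert's fuzzy subset for "red"
    trapezoid(hue, 5, 15, 30, 50),   # another expert's fuzzy subset for "red"
])
satisfaction = np.array([0.8, 0.4])  # user's satisfaction rate for each expert profile

# Weighted aggregation of the selected expert profiles into the user's own profile.
user_profile = (satisfaction[:, None] * expert_profiles).sum(axis=0) / satisfaction.sum()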
Engilberge, Martin. "Deep Inside Visual-Semantic Embeddings". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS150.
Nowadays Artificial Intelligence (AI) is omnipresent in our society. The recent development of learning methods based on deep neural networks, also called "deep learning", has led to a significant improvement in visual and textual representation models. In this thesis, we aim to further advance image representation and understanding. Revolving around Visual Semantic Embedding (VSE) approaches, we explore different directions: we present relevant background covering image and textual representation and existing multimodal approaches; we propose novel architectures further improving the retrieval capability of VSE; and we extend VSE models to novel applications and leverage embedding models to visually ground semantic concepts. Finally, we delve into the learning process and in particular the loss function, by learning a differentiable approximation of a ranking-based metric.
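A generic hinge-based bidirectional ranking loss, of the kind commonly used to train visual-semantic embeddings, can be sketched as below (random embeddings and an arbitrary margin; it is not the differentiable ranking approximation the thesis proposes):

import numpy as np

def bidirectional_ranking_loss(img, txt, margin=0.2):
    # img, txt: (n, d) L2-normalised embeddings; row i of each forms a matching pair.
    sims = img @ txt.T                 # cosine similarity matrix, shape (n, n)
    pos = np.diag(sims)                # similarity of the true image-caption pairs
    cost_txt = np.maximum(0.0, margin + sims - pos[:, None])  # rank captions per image
    cost_img = np.maximum(0.0, margin + sims - pos[None, :])  # rank images per caption
    mask = 1.0 - np.eye(len(img))      # ignore the positive pair itself
    return float(((cost_txt + cost_img) * mask).sum() / len(img))

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 64)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(8, 64)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
loss = bidirectional_ranking_loss(img, txt)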
Venkataramanan, Shashanka. "Metric learning for instance and category-level visual representation". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS022.
The primary goal in computer vision is to enable machines to extract meaningful information from visual data, such as images and videos, and leverage this information to perform a wide range of tasks. To this end, substantial research has focused on developing deep learning models capable of encoding comprehensive and robust visual representations. A prominent strategy in this context involves pretraining models on large-scale datasets, such as ImageNet, to learn representations that can exhibit cross-task applicability and facilitate the successful handling of diverse downstream tasks with minimal effort. To facilitate learning on these large-scale datasets and encode good representations, complex data augmentation strategies have been used. However, these augmentations can be limited in their scope, either being hand-crafted and lacking diversity, or generating images that appear unnatural. Moreover, the focus of these augmentation techniques has primarily been on the ImageNet dataset and its downstream tasks, limiting their applicability to a broader range of computer vision problems. In this thesis, we aim to tackle these limitations by exploring different approaches to enhance the efficiency and effectiveness of representation learning. The common thread across the works presented is the use of interpolation-based techniques, such as mixup, to generate diverse and informative training examples beyond the original dataset. In the first work, we are motivated by the idea of deformation as a natural way of interpolating images rather than using a convex combination. We show that geometrically aligning the two images in the feature space allows for a more natural interpolation that retains the geometry of one image and the texture of the other, connecting it to style transfer. Drawing from these observations, we explore the combination of mixup and deep metric learning. We develop a generalized formulation that accommodates mixup in metric learning, leading to improved representations that explore areas of the embedding space beyond the training classes. Building on these insights, we revisit the original motivation of mixup and generate a larger number of interpolated examples beyond the mini-batch size by interpolating in the embedding space. This approach allows us to sample on the entire convex hull of the mini-batch, rather than just along linear segments between pairs of examples. Finally, we investigate the potential of using natural augmentations of objects from videos. We introduce a "Walking Tours" dataset of first-person egocentric videos, which capture a diverse range of objects and actions in natural scene transitions. We then propose a novel self-supervised pretraining method called DoRA, which detects and tracks objects in video frames, deriving multiple views from the tracks and using them in a self-supervised manner.
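The interpolation idea running through this abstract can be illustrated with a toy sketch (random vectors and arbitrary Beta/Dirichlet parameters; this is neither the thesis's metric-learning formulation nor DoRA):

import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x1, x2, y1, y2, alpha=0.2):
    # Classic mixup: convex combination of two inputs and of their one-hot labels.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def embedding_mixup(batch_embeddings, n_virtual=32, alpha=0.2):
    # Interpolating in the embedding space: draw points from the convex hull of the
    # whole mini-batch instead of only from segments between pairs of examples.
    n, _ = batch_embeddings.shape
    weights = rng.dirichlet(np.full(n, alpha), size=n_virtual)  # (n_virtual, n) simplex points
    return weights @ batch_embeddings                           # (n_virtual, d)

batch = rng.normal(size=(16, 128))       # hypothetical mini-batch of embeddings
virtual_examples = embedding_mixup(batch)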
Nguyen, Nhu Van. "Représentations visuelles de concepts textuels pour la recherche et l'annotation interactives d'images". PhD thesis, Université de La Rochelle, 2011. http://tel.archives-ouvertes.fr/tel-00730707.
Defrasne Ait-Said, Elise. "Perception et représentation du mouvement : influences de la verbalisation sur la reconnaissance de mouvements d'escrime en fonction de l'expertise". Thesis, Besançon, 2014. http://www.theses.fr/2014BESA1023/document.
Is it necessary to verbalize in order to memorize and learn material? According to the literature examining the influence of verbalization on learning and memory, the answer to this question depends on the type of material used (conceptual versus perceptual material) and on the learners' level of expertise. In Study 1, we examined the influence of verbal descriptions on the visual recognition of sequences of fencing movements, with participants of three levels of expertise (novices, intermediates, experts). In Study 2, we studied the influence of different contents of verbal descriptions on the recognition of sequences of fencing movements, according to the level of expertise. The goal of Study 3 was to examine the effect on memory of a trace distinct from a verbal trace: a motor trace. The findings of Study 1 show that verbalizing improves novices' recognition, impairs intermediates' recognition and has no effect on experts' recognition. The results of Study 2 show that the content of verbal descriptions has an effect on memory, according to the participants' level of expertise. The findings of Study 3 show that reproducing the fencing movement, with no feedback, strongly impedes beginners' visual recognition. These findings extend the verbal overshadowing phenomenon to material distinctly more conceptual than that classically used in this field of research. They bring strong support to the theoretical hypothesis of interference resulting from verbal recoding (Schooler, 1990). They also show that an additional motor trace can harm visual recognition of movement sequences.
Books on the topic "Apprentissage de la représentation visuelle"
Chayer, Lucille Paquette. Compréhension de lecture. Montréal, Qué: Éditions de la Chenelière, 2000.
Côté, Claire. Résolution de problèmes. Montréal, Qué: Éditions de la Chenelière, 2000.
Perraton, Charles, and Université du Québec à Montréal, Groupe d'études et de recherches en sémiotique des espaces, eds. Le cinéma: Imaginaire de la ville : "du cinéma et des restes urbains", prise 2. Montréal: Groupe d'études et de recherches en sémiotique des espaces, Université du Québec à Montréal, 2001.
Pylyshyn, Zenon W. Things and places: How the mind connects with the world. Cambridge, Mass: MIT Press, 2007.
Hopkins, Robert. Picture, image and experience: A philosophical inquiry. Cambridge: Cambridge University Press, 1998.
Berger, John. Ways of seeing. London: British Broadcasting Corporation and Penguin Books, 1986.
Lacey, Nick. Image and representation: Key concepts in media studies. 2nd ed. Basingstoke [England]: Palgrave Macmillan, 2009.
Lacey, Nick. Image and representation: Key concepts in media studies. New York: St. Martin's Press, 1998.
Berger, John. Ways of seeing. [S.l.]: Peter Smith, 1987.
Cocking, Rodney R., and K. Ann Renninger, eds. The development and meaning of psychological distance. Hillsdale, N.J.: L. Erlbaum Associates, 1993.
Book chapters on the topic "Apprentissage de la représentation visuelle"
Bastien, Claude. "Apprentissage : modèles et représentation". In Intelligence naturelle, intelligence artificielle, 257–68. Presses Universitaires de France, 1993. http://dx.doi.org/10.3917/puf.lenyj.1993.01.0257.
Ducasse, Déborah, and Véronique Brand-Arpon. "Représentation visuelle du fonctionnement humain ordinaire". In Psychothérapie du Trouble Borderline, 19–27. Elsevier, 2019. http://dx.doi.org/10.1016/b978-2-294-76241-3.00003-9.
Olivier, Laurent. "Les codes de représentation visuelle dans l'art celtique ancien". In Celtic Art in Europe, 39–55. Oxbow Books, 2014. http://dx.doi.org/10.2307/j.ctvh1dqs8.8.
Wang, Xinxia, Xialing Shen, and Jing Guo. "La métaphore dans les dictionnaires bilingues d'apprentissage :". In Dictionnaires et apprentissage des langues, 79–88. Editions des archives contemporaines, 2021. http://dx.doi.org/10.17184/eac.4627.
Martí, Eduardo. "Appropriation précoce des systèmes externes de représentation : apprentissage et développement". In Vygotski et les recherches en éducation et en didactiques, 59–71. Presses Universitaires de Bordeaux, 2008. http://dx.doi.org/10.4000/books.pub.48192.
Celse-Giot, Carole, and Alexandre Coutté. "Peut-on se passer de représentations en sciences cognitives ?" In Neurosciences & cognition, 115–23. De Boeck Supérieur, 2011. http://dx.doi.org/10.3917/dbu.putoi.2011.01.0114.
Barreau, Jean-Jacques. "L'œuvre d'art : un ailleurs familier". In L'œuvre d'art : un ailleurs familier, 123–47. In Press, 2014. http://dx.doi.org/10.3917/pres.barre.2014.01.0124.
"Histoire de la représentation visuelle de la croix et du crucifié". In Commencements, 240–43. Peeters Publishers, 2020. http://dx.doi.org/10.2307/j.ctv1q26jxr.28.
Barreau, Jean-Jacques. "Le rêve, entre actuel et origines". In Le rêve, entre actuel et origines, 197–212. In Press, 2017. http://dx.doi.org/10.3917/pres.barre.2017.01.0198.
Chiaramonte, Céline, and Stéphane Rousset. "Peut-on se passer de représentations en sciences cognitives ?" In Neurosciences & cognition, 69–78. De Boeck Supérieur, 2011. http://dx.doi.org/10.3917/dbu.putoi.2011.01.0069.
Conference papers on the topic "Apprentissage de la représentation visuelle"
Fourcade, A. "Apprentissage profond : un troisième oeil pour les praticiens". In 66ème Congrès de la SFCO. Les Ulis, France: EDP Sciences, 2020. http://dx.doi.org/10.1051/sfco/20206601014.
Otjacques, Benoît, Maël Cornil, Mickaël Stefas, and Fernand Feltz. "Représentation visuelle interactive de l'importance subjective de critères de sélection de données". In 23rd French Speaking Conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2044354.2044368.
Van de Vreken, Anne, and Stéphane Safin. "Influence du type de représentation visuelle sur l'évaluation de l'ambiance d'un espace architectural". In Conférence Internationale Francophone sur l'Interaction Homme-Machine. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1941007.1941015.