Theses on the topic "Compréhension du contenu multimédia"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Compréhension du contenu multimédia".
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Harrando, Ismail. "Representation, information extraction, and summarization for automatic multimedia understanding". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS097.
Whether on TV or on the internet, video content production is seeing an unprecedented rise. Not only is video the dominant medium for entertainment purposes, but it is also reckoned to be the future of education, information and leisure. Nevertheless, the traditional paradigm for multimedia management proves incapable of keeping pace with the scale brought about by the sheer volume of content created every day across disparate distribution channels. Thus, routine tasks like archiving, editing, content organization and retrieval become prohibitively costly for multimedia creators. On the user side, too, the amount of multimedia content produced daily can be simply overwhelming; the need for shorter and more personalized content has never been more pronounced. To advance the state of the art on both fronts, a certain level of multimedia understanding has to be achieved by our computers. In this research thesis, we aim to address the multiple challenges facing automatic media content processing and analysis, mainly gearing our exploration to three axes: 1. Representing multimedia: with all its richness and variety, modeling and representing multimedia content can be a challenge in itself. 2. Describing multimedia: the textual component of multimedia can be capitalized on to generate high-level descriptors, or annotations, for the content at hand. 3. Summarizing multimedia: we investigate the possibility of extracting highlights from media content, both for narrative-focused summarization and for maximising memorability.
Nguyen, Minh Thang. "La compréhension orale en environnement multimédia". Rouen, 2003. http://www.theses.fr/2003ROUEL436.
Even if the integration of educational technology in language teaching/learning is indeed of interest, up to the present the use of such new teaching aids is still not widely spread, as in the case of Vietnam. With the present study, we look closely into the assistance provided by these multimedia teaching aids in the field of oral comprehension competency. We therefore first analyse research work done in this field of study. Next, we try to tackle issues raised during the pedagogical process of designing a multimedia tool. Lastly, we present some new principles concerning the conception and elaboration of a multimedia product.
Turlier, Stéphane. "Accès et personnalisation du contenu multimédia dans un véhicule". PhD thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00683823.
Huet, Benoit. "Étude de Contenus Multimédia: Apporter du Contexte au Contenu". Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00744320.
Delannoy, Pierre. "Performances des réseaux pour la diffusion de contenu multimédia". Evry, Télécom & Management SudParis, 2008. http://www.theses.fr/2008TELE0023.
Hidrio, Cédric. "Compréhension de documents multimédia : des illustrations statiques aux animations". Rennes 2, 2004. http://www.theses.fr/2004REN20040.
The aim of this research was to give an account of the cognitive processes involved in the simultaneous processing of auditory verbal information and corresponding pictorial information. To that end, 5 experiments were conducted. Four experiments compared the effect of different types of illustrations, presented simultaneously with an audio explanation, on the construction of mental models. We also evaluated the impact of different systems aimed at facilitating co-referencing between the informational sources (i.e. verbal and pictorial). These systems consisted in highlighting pictorial elements, inserting pauses in the documents that gave access or not to pictorial information, and manipulating subjects' prior knowledge about a learning target. The 5th experiment took place within a research convention and aimed at optimising the presentation format of a Web site. For that, we evaluated the effects of two factors: the modality of verbal information and the presence of animated pictures.
Turlier, Stéphane. "Accès et personnalisation de contenu multimédia à la demande dans un véhicule". Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00683823.
The recent advent of connected vehicle platforms permits the distribution of infotainment assets to drivers and passengers with pulled and pushed workflows, in a manner comparable to current mobile handsets. However, vehicles differ technically from mobile phones in terms of capability and usage. This thesis tackles the subject of personalised media delivery to motorists. We first study the technical characteristics of vehicle infotainment platforms, media assets and metadata in order to identify the requirements of a media delivery architecture for a vehicle. Based on those constraints, we have specified a media on-demand framework, which has been developed in a prototype. Afterwards, we tackle the topic of personalisation from two complementary points of view: on the one hand, the driver can perform active personalisation when using a proper human-machine interface. We present a music browser for online libraries that allows the creation of multicriteria playlists while driving. On the other hand, we analyse passive personalisation, which makes use of the driving context. We discuss the repartition of the functional components and build up a distributed architecture, which takes into account individual context preferences and their integration in the multimedia architecture presented earlier. Finally, the different solutions are evaluated according to experimental and expert methods.
Kimiaei Asadi, Mariam. "Adaptation de Contenu Multimédia avec MPEG-21: Conversion de Ressources et Adaptation Sémantique de Scènes". PhD thesis, Télécom ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001615.
Texto completoCarlier, Axel. "Compréhension de contenus visuels par analyse conjointe du contenu et des usages". Thesis, Toulouse, INPT, 2014. http://www.theses.fr/2014INPT0085/document.
This thesis focuses on the problem of understanding visual contents, which can be images, videos or 3D contents. Understanding means that we aim at inferring semantic information about the visual content. The goal of our work is to study methods that combine two types of approaches: 1) automatic content analysis and 2) an analysis of how humans interact with the content (in other words, usage analysis). We start by reviewing the state of the art from both the Computer Vision and Multimedia communities. Twenty years ago, the main approach aimed at a fully automatic understanding of images. This approach today gives way to different forms of human intervention, whether through the constitution of annotated datasets, by solving problems interactively (e.g. detection or segmentation), or by the implicit collection of information gathered from content usages. These different types of human intervention are at the heart of modern research questions: how to motivate human contributors? How to design interactive scenarios that will generate interactions that contribute to content understanding? How to check or ensure the quality of human contributions? How to aggregate human contributions? How to fuse inputs obtained from usage analysis with traditional outputs from content analysis? Our literature review addresses these questions and allows us to position the contributions of this thesis. In our first set of contributions we revisit the detection of important (or salient) regions through implicit feedback from users that either consume or produce visual contents. In 2D, we develop several interactive video interfaces (e.g. zoomable video) in order to coordinate content analysis and usage analysis. We also generalize these results to 3D by introducing a new detector of salient regions that builds upon simultaneous video recordings of the same public artistic performance (dance show, concert, etc.) by multiple users.
The second contribution of our work aims at a semantic understanding of still images. With this goal in mind, we use data gathered through a game, Ask'nSeek, that we created. Elementary interactions (such as clicks) together with textual input data from players are, as before, combined with automatic analysis of images. In particular, we show the usefulness of interactions that help reveal spatial relations between different objects in a scene. After studying the problem of detecting objects in a scene, we also address the more ambitious problem of segmentation.
Benmokhtar, Rachid. "Fusion multi-niveaux pour l'indexation et la recherche multimédia par le contenu sémantique". PhD thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005321.
Hamroun, Mohamed. "Indexation et recherche par contenu visuel, sémantique et multi-niveaux des documents multimédia". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0372.
Due to the latest technological advances, the amount of multimedia data is constantly increasing. In this context, the problem is how to use this data effectively: it is necessary to set up tools to facilitate its access and manipulation. To achieve this goal, we first propose an indexing and retrieval model for video shots (or images) based on their visual content (ISE). The innovative features of ISE are as follows: (i) the definition of a new descriptor, "PMC", and (ii) the application of a genetic algorithm (GA) to improve retrieval (PMGA). Then, we focus on the detection of concepts in video shots (the LAMIRA approach). In the same context, we propose a semi-automatic annotation method for video shots in order to improve the quality of indexing based on the GA. Next, we provide a semantic indexing method separating the data level from a conceptual level and a more abstract, contextual level. This new system also incorporates mechanisms for query expansion and relevance feedback. To add more fluidity to querying, the user can navigate across the three levels of abstraction. Two systems, called VISEN and VINAS, have been set up to validate these propositions. Finally, a SIRI framework was proposed on the basis of multi-level indexing combining our three systems: ISE, VINAS and VISEN. This framework provides a two-dimensional representation of features (high level and low level) for each image.
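As a rough sketch of the genetic-algorithm idea behind an approach like PMGA, one can evolve a weight vector for combining descriptor similarities. The fitness function, dimensions and all parameters below are invented for illustration and are not the thesis's actual retrieval objective:

```python
import random

# Toy GA: evolve a 3-component weight vector toward a hypothetical
# ideal weighting via elitist selection, one-point crossover, mutation.
random.seed(0)
TARGET = [0.2, 0.5, 0.3]  # invented "ideal" descriptor weighting

def fitness(w):
    # Negative squared distance to the target: higher is better.
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def evolve(pop_size=30, dims=3, gens=60, mut=0.2):
    pop = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dims)     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:           # uniform mutation
                child[random.randrange(dims)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(w, 2) for w in best])
```

In a retrieval setting the fitness would instead score ranked results against relevance feedback, but the evolutionary loop has the same shape.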
Harb, Hadi. "Classification du signal sonore en vue d'une indexation par le contenu des documents multimédia". Ecully, Ecole centrale de Lyon, 2003. http://bibli.ec-lyon.fr/exl-doc/hharb.pdf.
Humans have a remarkable ability to categorise audio signals into classes such as speech, music, explosions, etc. This thesis studies the possibility of developing audio classification algorithms inspired by human perception of audio semantic classes in the multimedia context. A model of short-term auditory memory is proposed in order to explain some psychoacoustic effects. The memory model is then simplified to constitute the basis of the Piecewise Gaussian Modelling (PGM) features. The PGM features are coupled with a mixture of neural networks to form a general audio signal classifier. The classifier was successfully applied to speech/music classification, gender identification, action detection and musical genre recognition. A synthesis of the classification effort was used to structure a video into "audio scenes" and "audio chapters". This work led to the development of an automatic audio indexer prototype, CYNDI.
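The piecewise-Gaussian idea can be illustrated minimally: model each time segment of a spectrogram with one Gaussian (mean, standard deviation) per frequency band. The window size, segment count and plain magnitude spectrogram below are illustrative assumptions, not the thesis's exact features, and the neural-network classifier stage is omitted:

```python
import numpy as np

def pgm_features(signal, win=1024, segments=4):
    """Sketch of PGM-style features: per-segment, per-band mean and std."""
    n_frames = len(signal) // win
    frames = signal[: n_frames * win].reshape(n_frames, win)
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    feats = []
    for seg in np.array_split(spec, segments, axis=0):
        feats.append(seg.mean(axis=0))  # Gaussian mean per band
        feats.append(seg.std(axis=0))   # Gaussian std per band
    return np.concatenate(feats)

rng = np.random.default_rng(0)
feat = pgm_features(rng.standard_normal(16000))
print(feat.shape)  # 2 stats * 4 segments * 513 bands -> (4104,)
```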
Merhy, Liliane. "La compréhension orale, médias et multimédia, dans l'enseignement/apprentissage du français langue étrangère". Nancy 2, 2006. http://www.theses.fr/2006NAN21010.
Irrespective of favourable or critical assessments, the integration of ICTs in language teaching/learning in general, and listening comprehension in particular, is still running into difficulties. The research described in this dissertation consists in analyzing the role of the media and ICTs in language classes, their potential for improving learners' listening comprehension, and the ergonomics of work in an academic context. An enquiry among teachers of French as a foreign language (DeFLE at the University Nancy 2 and the Institut des Langues, Tichrine University of Lattakia, in Syria) helped us to understand why the integration of ICTs is still problematic, which is not the case for the media (television, radio, video).
Layaïda, Nabil. "Représentation et analyses de contenu et de programmes Web". Habilitation à diriger des recherches, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00872752.
Pleşca, Cezar. "Supervision de contenus multimédia : adaptation de contenu, politiques optimales de préchargement et coordination causale de flux". Toulouse, INPT, 2007. http://ethesis.inp-toulouse.fr/archive/00000499/.
The quality of information in distributed systems depends on service responsiveness, data consistency and its relevance to user interests. The first part of this study deals with hypermedia content delivery and uses Markov Decision Processes (MDP) to derive aggressive optimal prefetching policies integrating both user habits and resource availability. The second part addresses partially observable contexts. We show how a resource-based policy adaptation (MDP model) can be modulated according to user interest, using partially observable MDPs (POMDP). Finally, the third part is placed in the context of distributed multimedia applications. We propose a coordination-level middleware for supporting flexible consistency. Our simulations show that its ability to handle several partial orders (e.g. FIFO, causal, total) makes it better than classic or delta-causality.
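A prefetching MDP of this kind can be illustrated with a toy value iteration: states are pages, action a means "prefetch page a", the transition matrix stands in for learned user habits, and the reward trades a cache hit against a bandwidth cost. All numbers are invented:

```python
import numpy as np

# P[s, s']: probability the user navigates from page s to page s'.
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
cost, gamma = 0.2, 0.9  # bandwidth cost per prefetch, discount factor

n = P.shape[0]
V = np.zeros(n)
for _ in range(200):                      # value iteration to convergence
    Q = np.empty((n, n))
    for a in range(n):
        hit = P[:, a]                     # chance the prefetched page is used
        Q[:, a] = hit * 1.0 - cost + gamma * (P @ V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)                 # best page to prefetch per state
print(policy.tolist())                    # [1, 2, 0]
```

With navigation independent of the prefetch action, the optimal policy simply prefetches the most likely next page; richer models (and the POMDP extension) make the trade-off less trivial.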
Letessier, Pierre. "Découverte et exploitation d'objets visuels fréquents dans des collections multimédia". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0014/document.
The main goal of this thesis is to discover frequent visual objects in large multimedia collections. As in many areas (finance, genetics, etc.), it consists in extracting knowledge, using the occurrence frequency of an object in a collection as a relevance criterion. A first contribution is to provide a formalism for the problems of mining and discovery of frequent visual objects. The second contribution is a generic method to solve these two problems, based on an iterative sampling process and on an efficient and scalable rigid-object matching. The third contribution of this work focuses on building a likelihood function close to the perfect distribution. Experiments show that, contrary to state-of-the-art methods, our approach allows the efficient discovery of very small objects in several million images. Finally, several applications are presented, including trademark logo discovery, transmedia event detection and visual-based query suggestion.
Menant, William. "Contribution à l'analyse des orientations stratégiques et à la compréhension du discours de l'industrie pharmaceutique". Caen, 2005. http://www.theses.fr/2005CAEN0621.
Texto completoBen, Abdelali Abdessalem. "Etude de la conception d’architectures matérielles dédiées pour les traitements multimédia : indexation de la vidéo par le contenu". Dijon, 2007. http://www.theses.fr/2007DIJOS075.
This thesis constitutes a contribution to the study of content-based automatic video indexing, aiming at designing hardware architectures dedicated to this type of multimedia application. Content-based video indexing represents an important domain in constant development for different types of applications such as the Internet, interactive TV, personal video recorders (PVR) and security applications. The proposed study is carried out through concrete AV analysis techniques for video indexing, considered from the application, technology and methodology standpoints. It falls within the context of dedicated hardware architecture design and the exploitation of new embedded systems technologies for recent multimedia applications. Particular interest is given to reconfigurable technology and to the new possibilities and means of using FPGA devices. The first stage of this thesis is devoted to the study of the automatic content-based video indexing domain: the features and new needs of indexing systems, the approaches and techniques currently in use, and the application fields of the new generations of these systems. This shows the interest of new architectures and technological solutions capable of supporting the new requirements of this domain. The second stage is dedicated to the validation and optimization of some visual descriptors of the MPEG-7 standard for video temporal segmentation. This constitutes a case study through an important example of AV content analysis techniques, and also a stage of preparation for the hardware implementation of these techniques in the context of hardware accelerator design for real-time automatic video indexing.
Different Algorithm-Architecture Adequacy aspects have been studied through the proposition of various algorithmic transformations that can be applied to the considered algorithms. The third stage of this thesis is devoted to the design of dedicated hardware operators for video content analysis techniques, as well as the exploitation of new reconfigurable systems technologies for designing SORC dedicated to automatic video indexing. Several hardware architectures have been proposed for the MPEG-7 descriptors, and different concepts related to the exploitation of reconfigurable technology and SORC have been studied as well (methodologies and tools for designing such systems on chip, technology and methods for dynamic and partial reconfiguration, FPGA-based hardware platforms, SORC structure for video indexing, etc.).
Hinard, Yoann. "Sécurisation et tarification de la diffusion de contenu en multicast". Compiègne, 2008. http://www.theses.fr/2008COMP1766.
IP multicast is an effective way to distribute video content to large groups of receivers. This technology is now widely used in the closed, private networks of telecom operators. However, IP multicast is not widely deployed over the Internet, which is by nature an open network. In this thesis, we deal with two issues preventing the wide deployment of IP multicast: the ability to perform accounting and access control, and the ability to secure the content distributed to large groups. We define a generic Authentication, Authorization and Accounting architecture for multicast content distribution based on the Diameter base protocol standardized by the IETF. We also define a new hash-code chaining scheme which allows the overhead of a digital signature to be amortized over many other packets. This scheme provides data-origin authentication and non-repudiation even with high packet loss ratios.
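The amortization idea behind hash-code chaining can be illustrated minimally: each packet carries the hash of its successor, so a single signature over the head hash authenticates the whole stream. This sketch omits the loss-tolerant redundancy that the thesis's scheme adds, and a real digital signature (not shown) would sign `head`:

```python
import hashlib

def build_chain(packets):
    """Anchor each packet to the hash of the next; return (chain, head hash)."""
    anchored, next_hash = [], b""
    for payload in reversed(packets):
        anchored.append((payload, next_hash))
        next_hash = hashlib.sha256(payload + next_hash).digest()
    anchored.reverse()
    return anchored, next_hash  # head hash is what actually gets signed

def verify_chain(anchored, signed_hash):
    """Walk the chain, checking each packet against the expected hash."""
    expected = signed_hash
    for payload, nxt in anchored:
        if hashlib.sha256(payload + nxt).digest() != expected:
            return False
        expected = nxt
    return expected == b""

packets = [b"pkt1", b"pkt2", b"pkt3"]
chain, head = build_chain(packets)
print(verify_chain(chain, head))  # True for an untampered stream
```

Tampering with any payload breaks the chain from that point back to the signed head hash.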
Abdel, Wahab Shaimaa. "Le multimédia en maternelle : tâches, activités et apprentissage du langage". Thesis, Paris 8, 2016. http://www.theses.fr/2016PA080018.
The purpose of this research is to study the impact of multimedia-assisted learning on vocabulary development and comprehension among preschool children, compared to traditional learning. It also aims to study the impact of different modes of interaction in computerized environments on language development and story comprehension among preschool children. Learning the language is a major challenge for the future academic success of kindergarten students. This doctoral research studies the impact of introducing computerized environments in the final year of kindergarten (KG2, 5- to 6-year-olds) on the acquisition of certain language skills. The study focuses particularly on children's acquisition of vocabulary and on the reception and comprehension of narratives. This work takes stock of existing research and analyses software (electronic stories) in French. It then uses a particular piece of software (Un Prince à l’école) in the Paris region, and studies its effectiveness for vocabulary development (pre/post test) and story comprehension (post-test) among these children. We studied (i) the impact of interacting with the e-story vs. the story on paper, and (ii) the impact of the mode of interaction (individual vs. collaborative) with the e-story on vocabulary development and story comprehension.
Delezoide, Bertrand. "Modèles d'indéxation multimédia pour la description automatique de films de cinéma". Paris 6, 2006. http://www.theses.fr/2006PA066108.
Texto completoPapadopoulos, Hélène. "Estimation conjointe d'information de contenu musical d'un signal audio". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00548952.
Texto completoReboud, Alison. "Towards automatic understanding of narrative audiovisual content". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS398.pdf.
Modern storytelling is digital and video-based. Understanding the stories contained in videos remains a challenge for automatic systems. With multimodality as a transversal theme, this research thesis breaks down the "understanding" task into the following challenges: predicting memorability, summarising, and modelling stories from audiovisual content.
Métayer, Natacha. "Compréhension et stratégies d’exploration des documents pédagogiques illustrés". Thesis, Rennes 2, 2016. http://www.theses.fr/2016REN20001/document.
It is easier today to offer, in a single informative document, many sources of information presented in different formats. Presenting various media can bring benefits in terms of learning performance, but dealing effectively with these different sources of information is complex. Therefore, offering documents that guide the learner while reading may be necessary to promote the construction of a good-quality mental model. The empirical studies conducted during this thesis endeavour to determine which formats are most effective while gradually increasing guidance within the document. Four items are tested: the position of the picture relative to the text, the effect of instructions, text segmentation, and the introduction of guidance through a figure-ground contrast. Moreover, in order to bring new elements of reflection about how people explore an informative document and the impact of these strategies on performance, the eye movements of the learners were recorded. The results showed that changes in the format of information have an impact on the strategies used to consult the document, including increased eye transitions between text and illustrations. However, only the segmentation of the text into semantic paragraphs brought benefits in terms of comprehension.
Badr, Mehdi. "Traitement de requêtes top-k multicritères et application à la recherche par le contenu dans les bases de données multimédia". PhD thesis, Université de Cergy Pontoise, 2013. http://tel.archives-ouvertes.fr/tel-00978770.
Texto completoJamin, Emmanuel. "La conception de documents audiovisuels : vers l'extraction sémantique et la réécriture interactive des archives multimédias". Paris 11, 2006. http://www.theses.fr/2006PA112215.
The digitalization of audio-visual documents (DAV) improves storage techniques, which fosters innovative uses of DAV. Our aim is thus to enrich audio-visual writing activities based on the re-use of video fragments. After an analysis of documentary practices, we formalized the task of "multimedia read-writing" by adapting cognitive models of writing. This task brings into play the author and the reader in a double narrative/discursive fitting, where each interprets the informational material being presented. Within this relation, the document acts as a vector of communication and even of interaction. We therefore formalized a multimedia model for audio-visual design that supports the MPEG-7 standard: the "Interactive Scenario" (ScoI). ScoI is a virtual document and a suitable pool for the integration of heterogeneous fragments. This model integrates knowledge about the media, the design process and the content access methods. The scenario is implemented in an interactive multimedia writing system connected to a search system for contextualized multimedia information. We thus adapted an information search method in order to extract multimedia fragments from a corpus of semi-structured documents with a view to recombining them. A dynamic human-computer interaction process directs and assists the choices of the author in the construction of the document to be produced, or target document.
Xie, Fuchun. "Tatouage sûr et robuste appliqué au traçage de documents multimédia". PhD thesis, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00592126.
Texto completoDaoudi, Imane. "Recherche par similarité dans les grandes bases de données multimédia : application à la recherche par le contenu dans les bases d'images". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0057/these.pdf.
The amount of digital multimedia data is constantly increasing, and access, sharing and retrieval of these data have become real needs. This requires powerful tools and search engines for fast and efficient access to the data. This thesis work is in the field of multimedia data, especially images. The main objective is to develop a fast and efficient k-nearest-neighbour indexing and search method adapted to content-based image retrieval (CBIR) applications and to the properties of image descriptors (high volume, high dimensionality, etc.). The main idea is, on the one hand, to provide answers to the problems of scalability and the curse of dimensionality and, on the other, to deal with the similarity problems that arise in indexing and CBIR. We propose two different approaches in this thesis. The first uses a multidimensional indexing structure based on an approximation (filtering) approach, improving on the RA-Blocks method; it relies on an algorithm for subdividing the data space which improves the storage capacity of the index and the CPU time. In the second approach, we propose a multidimensional indexing method suitable for heterogeneous data (colour, texture, shape), which combines a non-linear dimensionality reduction technique with an approximation-based multidimensional indexing approach. This combination makes it possible, on the one hand, to deal with the curse of dimensionality and scalability problems and, on the other, to exploit the properties of the non-linear space to find similarity measures suited to the nature of the manipulated data.
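The approximation/filtering idea (in the spirit of the VA-file / RA-Blocks family) can be sketched as a two-stage k-NN search: a coarse pass over low-precision quantized vectors shortlists candidates, and exact distances are computed only for the shortlist. The quantization depth and shortlist size below are illustrative assumptions:

```python
import numpy as np

def knn_two_stage(data, query, k=3, bits=3, shortlist=20):
    # Stage 1 (filtering): coarse distances on a quantized copy of the data.
    lo, hi = data.min(axis=0), data.max(axis=0)
    scale = (2 ** bits - 1) / (hi - lo + 1e-9)
    qdata = np.round((data - lo) * scale) / scale + lo
    coarse = np.linalg.norm(qdata - query, axis=1)
    cand = np.argsort(coarse)[:shortlist]
    # Stage 2 (refinement): exact distances on the shortlist only.
    exact = np.linalg.norm(data[cand] - query, axis=1)
    return cand[np.argsort(exact)[:k]]

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 16))   # 1000 invented 16-d descriptors
q = rng.standard_normal(16)
nn = knn_two_stage(X, q)
print(nn)
```

A real filtering index prunes with per-cell distance bounds rather than a fixed shortlist, which preserves exactness; the fixed shortlist here keeps the sketch short at the price of approximation.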
Gosselin, Philippe-Henri. "Apprentissage interactif pour la recherche par le contenu dans les bases multimédias". Habilitation à diriger des recherches, Université de Cergy Pontoise, 2011. http://tel.archives-ouvertes.fr/tel-00660316.
Texto completoKaced, Ahmed Réda. "Problèmes de sécurité posés par les proxies d'adaptation multimédia : proposition de solutions pour une sécurisation de bout-en-bout". Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005883.
Texto completoIeva, Carlo. "Révéler le contenu latent du code source : à la découverte des topoi de programme". Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS024/document.
Texto completoDuring the development of long lifespan software systems, specification documents can become outdated or can even disappear due to the turnover of software developers. Implementing new software releases or checking whether some user requirements are still valid thus becomes challenging. The only reliable development artifact in this context is source code but understanding source code of large projects is a time- and effort- consuming activity. This challenging problem can be addressed by extracting high-level (observable) capabilities of software systems. By automatically mining the source code and the available source-level documentation, it becomes possible to provide a significant help to the software developer in his/her program understanding task.This thesis proposes a new method and a tool, called FEAT (FEature As Topoi), to address this problem. Our approach automatically extracts program topoi from source code analysis by using a three steps process: First, FEAT creates a model of a software system capturing both structural and semantic elements of the source code, augmented with code-level comments; Second, it creates groups of closely related functions through hierarchical agglomerative clustering; Third, within the context of every cluster, functions are ranked and selected, according to some structural properties, in order to form program topoi.The contributions of the thesis is three-fold:1) The notion of program topoi is introduced and discussed from a theoretical standpoint with respect to other notions used in program understanding ;2) At the core of the clustering method used in FEAT, we propose a new hybrid distance combining both semantic and structural elements automatically extracted from source code and comments. 
This distance is parametrized, and the impact of the parameter is thoroughly assessed through an extensive experimental evaluation; 3) our tool FEAT has been assessed in collaboration with Software Heritage (SH), a large-scale, ambitious initiative whose aim is to collect, preserve, and share all publicly available source code on Earth. We performed a large experimental evaluation of FEAT on 600 open-source projects from SH, coming from various domains and amounting to more than 25 MLOC (million lines of code). Our results show that FEAT can handle projects of up to 4,000 functions and several hundred files, which opens the door for its large-scale adoption for program understanding.
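The grouping step described above, agglomerative clustering over a hybrid semantic/structural distance, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the weight `alpha`, the distance tables, and the stopping threshold are all assumptions.

```python
# Illustrative sketch: average-linkage agglomerative clustering of functions
# under a hybrid distance. `alpha`, the distance tables, and the threshold
# are hypothetical values, not taken from the FEAT thesis.

def hybrid_distance(a, b, d_sem, d_struct, alpha=0.5):
    """Convex combination of a semantic and a structural distance."""
    return alpha * d_sem[a][b] + (1 - alpha) * d_struct[a][b]

def agglomerate(items, dist, threshold):
    """Naive average-linkage agglomerative clustering with a stop threshold."""
    clusters = [[x] for x in items]
    while len(clusters) > 1:
        best = None
        # find the closest pair of clusters under average linkage
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist(a, b) for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:  # remaining clusters are too far apart
            break
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

For example, with two functions at mutual distance 0.1 and a third at distance 0.9 from both, a threshold of 0.3 yields two clusters.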
Lopez, Del Hierro Silvia. "Relations entre la méthodologie de l'enseignement de la compréhension orale et les représentations didactiques des professeurs de français langue étrangère au Mexique". Thesis, Nancy 2, 2010. http://www.theses.fr/2010NAN21005/document.
The purpose of this thesis was to study how French language teachers from two language centers in Mexico teach listening comprehension to their students. The hypothesis of the study was that French language instructors teach this controversial and complex skill on the basis of different methodological precepts learned from their French teacher training and their professional experience, as well as from representations acquired from their professional working environment and from their personal experience of learning a second language. To complete our study, we analysed the content and modalisation of the teachers' discourse. This study confirmed part of our hypothesis and highlighted the influence of the institutional context on the instructional methods and practice of French language teaching in Mexico.
Ly, Anh Tuan. "Accès et utilisation de documents multimédia complexes dans une bibliothèque numérique". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00871651.
Martin, Jean-Pascal. "Description sémiotique de contenus audiovisuels". Paris 11, 2005. http://www.theses.fr/2005PA112297.
Three categories of descriptors are necessary to describe an audiovisual content: the objects shown, the processes used for film direction, and the diegetic relations. The identification of the diegetic relations (those of the space-time continuum of the narration) cannot be automated. The formalisms used by the community exploit semantic descriptors that are difficult to select, since they depend on contextual elements interpreted according to sophisticated knowledge. We choose to keep the human at the center of the indexing process. Two kinds of answers are provided. First, we propose a method of semiotic indexing based on the identification and clarification of the signs that are reified at the time of the analysis. For that, we define the tetrahedral sign as a cognitive representation necessarily made up of a signifier and a signified, and possibly of intensional and extensional referents. We then define the process of interpretation as a semiotic rewriting. We propose a formalism for graphs of signs (expressed with RDF+OWL schemas integrated as extensions of MPEG-7) to represent the mental activity of interpretation. Second, we recommend a model of operative interaction between the human and the system that makes the reification of interpretation easier. A platform for the construction of graphs of signs, based on the multi-agent paradigm, allows the dynamic and negotiated construction of signs. Those signs are expressed according to a provided syntax and grammar. Interpretation schemas provide the agents with micro-interpretations that may be activated in context.
Lombard, Jordan. "Guidage des traitements et acceptabilité de la tablette pour la compréhension de documents multiples". Thesis, Toulouse 2, 2019. http://www.theses.fr/2019TOU20035.
This thesis focuses on students' activity (including information selection) when they read multiple textual documents in order to develop a critical perspective on a topic, and on students' perceptions (including ease of use) of the tablet as a tool for consulting documents. Under these conditions, three studies evaluate the comprehension performance of students after reading several documents on a tablet with an innovative application (e.g., displaying several documents simultaneously), depending on whether they study the documents freely or are guided in processing them. In addition, these studies assess how students perceive the tablet as a tool for studying documents, in particular whether they consider that the tablet improves their performance.
Mbarki, Mohamed. "Gestion de l'hétérogénéité documentaire : le cas d'un entrepôt de documents multimédia". Toulouse 3, 2008. http://thesesups.ups-tlse.fr/185/.
The knowledge society is based on three axes: the diffusion and use of information via new technologies, the deduction of knowledge induced by this information, and the economic impacts which can result from it. Offering the actors of this society, and more particularly its "decision makers", tools that enable them to produce and manage "knowledge", or at least "elements of knowledge", seems rather difficult to ensure. This difficulty is due to the dynamism of the environment and to the diversity of factors influencing the production, extraction and communication of information. Indeed, this information is included in documents collected from disseminated sources (the Internet, workflows, digital libraries, etc.). These documents are thus heterogeneous in both content and form: they can relate to various fields, be more or less structured, have various structures, contain several types of media, and be stored on several types of support. The current challenge is to conceive new applications to exploit this document heterogeneity. With these needs in mind, the work presented in this thesis aims to face these challenges, and in particular to propose solutions for "managing and creating knowledge" starting from the integration of all the information available in the heterogeneous documents. The handling of multimedia document repositories constitutes the applicative framework of our proposals. Our approach is articulated around three complementary axes: (1) the representation, (2) the storage (or integration), and (3) the exploitation of heterogeneous documents. Document representation concerns determining which information must be preserved and how it must be organized to better apprehend and anticipate its uses.
The solution we chose to meet these needs is based on a document model that integrates several overlapping and complementary levels of description (a generic layer and a specific one, a logical description and a semantic one).
Pansini, Vittorio Michele. "Apport de la spectroscopie 1H par résonance magnétique (3 Tesla) à la compréhension de la physiopathologie de la moelle osseuse de la hanche". Phd thesis, Université du Droit et de la Santé - Lille II, 2012. http://tel.archives-ouvertes.fr/tel-00818364.
Ollagnier, Anaïs. "Analyse de requêtes en langue naturelle et extraction d'informations bibliographiques pour une recherche de livres orientée contenu efficace". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0556/document.
In recent years, the Web has undergone tremendous growth in both content and users. This has led to an information-overload problem in which people find it increasingly difficult to locate the right information at the right time. Recommender systems have been developed to address this problem by guiding users through the vast ocean of information. Recommendation approaches have multiplied and have been successfully implemented, particularly through approaches such as collaborative filtering. However, there are still challenges and limitations that offer opportunities for new research. Among these challenges, the design of reading recommendation systems has become an expanding research focus following the emergence of digital libraries. Traditionally, libraries play a passive role in interaction with users due to the lack of effective search and recommendation tools. In this manuscript, we study the creation of a reading recommendation system in which we try to exploit the possibilities of digital access to scientific information. Our objectives are: to improve the understanding of user needs expressed in natural-language search queries for books, articles and posts, which requires processes capable of exploiting the structure of the data and its dimensions; to compensate for the absence of explicit links between books and journal articles by automatically detecting and analyzing bibliographic references, and then proposing links; and to achieve a reading recommendation system based on textual data that provides a customized recommendation list to active users, similar to systems already based on user profiles.
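The content-based side of such a reading recommender can be illustrated with a minimal sketch: documents and the query become bag-of-words vectors, and candidates are ranked by cosine similarity. All names and data below are illustrative placeholders; the thesis's actual system is far richer.

```python
# Hypothetical sketch of content-based ranking for a reading recommender:
# bag-of-words vectors compared by cosine similarity. Toy data only.
import math
from collections import Counter

def vectorize(text):
    """Naive bag-of-words vector from whitespace tokenization."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query, corpus, k=2):
    """Return the ids of the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorize(corpus[d])), reverse=True)
    return ranked[:k]
```

A real system would add stemming, TF-IDF weighting, and the bibliographic links the thesis extracts; this only shows the ranking skeleton.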
Aubry, Willy. "Etude et mise en place d’une plateforme d’adaptation multiservice embarquée pour la gestion de flux multimédia à différents niveaux logiciels et matériels". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14678/document.
On the one hand, technological advances have led to the expansion of the handheld-device market. Thanks to this expansion, people are more and more connected and more and more data are exchanged over the Internet. On the other hand, this huge amount of data imposes drastic constraints in order to achieve sufficient quality, and the Internet is now showing its limits in assuring such quality. To answer these limitations, a next-generation Internet is envisioned. This new network takes into account the nature of the content (video, audio, ...) and the context (network state, terminal capabilities, ...) to better manage its own resources. To this end, video manipulation is one of the key concepts highlighted in this arising context. Video content is more and more consumed and at the same time requires more and more resources. Adapting videos to the network state (reducing the bitrate to match the available bandwidth) or to the terminal capabilities (screen size, supported codecs, ...) appears mandatory and is foreseen to take place in real time in networking devices such as home gateways. However, video adaptation is a resource-intensive task and must be implemented using hardware accelerators to meet the desired low-cost and real-time constraints. In this thesis, content- and context-awareness is first analyzed so as to be considered at the network side. Secondly, a generic low-cost video adaptation system is proposed and compared to existing solutions as a trade-off between system complexity and quality. Then, hardware conception is tackled as this system is implemented in an FPGA-based architecture. Finally, this system is used to evaluate the indirect effects of video adaptation: energy consumption is reduced at the terminal side by reducing the video characteristics, thus improving the experience for end users.
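The adaptation decision described above, matching a video variant to the available bandwidth and the terminal's screen, can be sketched as a simple selection over a bitrate ladder. The ladder values and function names are assumptions for illustration, not the thesis's design.

```python
# Illustrative sketch of a context-aware adaptation decision: pick the
# highest-bitrate variant that fits both the measured bandwidth and the
# terminal's screen width. The ladder below is a hypothetical example.

VARIANTS = [  # (bitrate in kbit/s, width, height), best first
    (4500, 1920, 1080),
    (2500, 1280, 720),
    (1200, 854, 480),
    (600, 640, 360),
]

def select_variant(bandwidth_kbps, screen_width, variants=VARIANTS):
    """Return the best-fitting (bitrate, w, h); fall back to the lowest rung."""
    for bitrate, w, h in variants:
        if bitrate <= bandwidth_kbps and w <= screen_width:
            return (bitrate, w, h)
    return variants[-1]  # nothing fits: degrade to the smallest variant
```

In a gateway, this decision would drive the hardware transcoder; the sketch only captures the policy, not the FPGA datapath.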
Derbas, Nadia. "Contributions à la détection de concepts et d'événements dans les documents vidéos". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM035/document.
A consequence of the rise of digital technology is that the quantity of available collections of multimedia documents is permanently and strongly increasing. Indexing these documents has become both very costly and impossible to do manually. In order to analyze, classify and search multimedia documents, indexing systems have been defined. However, most of these systems suffer from quality or practicability issues. Their performance is limited and depends on the data volume and data variability. Indexing systems analyze multimedia documents, looking for static concepts (bicycle, chair, ...) or events (wedding, protest, ...). Therefore, the variability in shapes, positions, lighting or orientation of objects hinders the process. Another aspect is that systems must be scalable: they should be able to handle big data while using a reasonable amount of computing time and memory. The aim of this thesis is to improve the general performance of content-based multimedia indexing systems. Four main contributions are brought in this thesis, improving different stages of the indexing process. The first one is an "early-early fusion method" that merges different information sources in order to extract their deep correlations; this method is used for violent-scene detection in movies. The second contribution is a weakly supervised method for localizing basic concepts (objects) in images, which can afterwards be used as a new descriptor to help detect complex concepts (events). The third contribution tackles the problem of noise reduction on ambiguously annotated data; two methods are proposed: a shot-annotation generator and a shot-weighting method. The last contribution is a generic descriptor optimization method based on PCA and non-linear transforms. These four contributions are tested and evaluated using reference data collections, including TRECVid and MediaEval.
These contributions helped our submissions achieve very good rankings in those evaluation campaigns.
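As an illustration of the kind of non-linear transform used in descriptor optimization, one commonly used step in the field is signed power (square-root) normalization followed by L2 normalization. This is a hedged sketch of that generic technique, not necessarily the exact transform proposed in the thesis.

```python
# Generic descriptor post-processing often paired with PCA: signed power
# normalization (here alpha = 0.5, i.e. signed square root) followed by
# L2 normalization. Illustrative only; not claimed to be the thesis's method.
import math

def power_normalize(vec, alpha=0.5):
    """Apply sign(x) * |x|**alpha elementwise, then L2-normalize."""
    out = [math.copysign(abs(x) ** alpha, x) for x in vec]
    norm = math.sqrt(sum(x * x for x in out))
    return [x / norm for x in out] if norm else out
```

Such transforms dampen bursty descriptor components so that Euclidean comparisons behave better after dimensionality reduction.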
Berrani, Sid-Ahmed. "Recherche approximative de plus proches voisins avec contrôle probabiliste de la précision ; application à la recherche d'images par le contenu". Phd thesis, Université Rennes 1, 2004. http://tel.archives-ouvertes.fr/tel-00532854.
Zhang, Chang. "Exploitation didactique d’un corpus pour l’enseignement de la compréhension orale du FLE en milieu universitaire chinois : didactisation de la banque de données multimédia CLAPI (Corpus de Langues Parlées en Interaction)". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2064.
Listening comprehension is a key objective in the process of learning a foreign language, and Chinese students often find understanding oral French difficult. Based on this fact, this thesis attempts to use the database CLAPI (Corpus de langues parlées en interaction) to propose some paths for teaching listening comprehension in the context of Chinese universities. This research begins with a presentation of the educational and cultural context, for interpreting the culture of teaching in China; the thesis then reviews foreign-language listening comprehension and the contributions of corpora; we then carry out a study in the context of Chinese universities, with students and teachers of French, in order to identify advantages and limitations in the teaching and learning of listening comprehension. Based on the theories, the Chinese context of French teaching, and the results obtained in our study, we offer our reflections and proposals on the use of oral corpora for teaching listening comprehension in the Chinese context.
Bursuc, Andrei. "Indexation et recherche de contenus par objet visuel". Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00873966.
Livshin, Arie. "IDENTIFICATION AUTOMATIQUE DES INSTRUMENTS DE MUSIQUE". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00810688.
Le, Huu Ton. "Improving image representation using image saliency and information gain". Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2287/document.
Nowadays, along with the development of multimedia technology, content-based image retrieval (CBIR) has become an interesting and active research topic with an increasing number of application domains: image indexing and retrieval, face recognition, event detection, handwriting scanning, object detection and tracking, image classification, landmark detection, and more. One of the most popular models in CBIR is Bag of Visual Words (BoVW), which is inspired by the Bag of Words model from the information retrieval field. In the BoVW model, images are represented by histograms of visual words from a visual vocabulary. By comparing image signatures, we can tell the difference between images. Image representation plays an important role in a CBIR system, as it determines the precision of the retrieval results. In this thesis, the image representation problem is addressed. Our first contribution is a new framework for visual vocabulary construction using information gain (IG) values; the IG values are computed by a weighting scheme combined with a visual attention model. Secondly, we propose to use a visual attention model to improve the performance of the proposed BoVW model. This contribution addresses the importance of salient key-points in the images through a study of the saliency of local feature detectors; inspired by the results of this study, we use saliency as a weighting or as an additional histogram for image representation. The last contribution of this thesis to CBIR shows how our framework enhances the BoVP model. Finally, a query expansion technique is employed to increase the retrieval scores on both the BoVW and BoVP models.
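The BoVW representation described above can be sketched minimally: local descriptors are assigned to their nearest visual word and accumulated into a histogram, optionally reweighted by per-word values such as information gain. The vocabulary, descriptors, and weights below are illustrative placeholders, not data from the thesis.

```python
# Hedged sketch of the Bag of Visual Words pipeline: nearest-word assignment,
# histogram accumulation, optional per-word weighting (e.g. information gain),
# and L1 normalization. Toy 2-D descriptors stand in for real local features.

def nearest_word(desc, vocabulary):
    """Index of the closest visual word (squared Euclidean distance)."""
    return min(range(len(vocabulary)),
               key=lambda i: sum((d - v) ** 2 for d, v in zip(desc, vocabulary[i])))

def bovw_histogram(descriptors, vocabulary, ig_weights=None):
    """Weighted, L1-normalized histogram of visual-word assignments."""
    hist = [0.0] * len(vocabulary)
    for desc in descriptors:
        hist[nearest_word(desc, vocabulary)] += 1.0
    if ig_weights is not None:  # reweight words, e.g. by information gain
        hist = [h * w for h, w in zip(hist, ig_weights)]
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

In a full system the vocabulary would come from clustering (e.g. k-means over many local descriptors) and the weights from the IG computation the thesis proposes; the sketch shows only the representation step.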
Terrier, Linda. "Méthodologie linguistique pour l'évaluation des restitutions et analyse expérimentale des processus de didactisation du son : recommandations pour un apprentissage raisonné de la compréhension de l'anglais oral par les étudiants francophones du secteur LANSAD". Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1554/.
This PhD thesis was motivated by the weak level in English listening comprehension of French university students studying English for specific purposes. We first established which aspects of English phonology may cause listening-comprehension difficulties for native speakers of French. We then analyzed the place of phonology and listening skills in the history of English teaching in France, and the cognitive processes involved in listening to a foreign language. We concluded this review by suggesting that focus should be put on teaching and learning the language through written transcriptions of oral documents, rather than on listening strategies. This proposed change in the instructional paradigm invited research into new modes of listening to audio or video files, and we chose to explore the value of a didactic approach to sound editing within the framework of Cognitive Load Theory. The hypothesis is that the sound-editing processes studied could reduce the intrinsic and extraneous cognitive loads linked to the task of understanding spoken English. However, checking this hypothesis against empirical data required a valid tool for assessing listening comprehension through written transcriptions. A linguistic methodology was built for that purpose and applied to the quantitative analysis of transcriptions written by students during the four experiments conducted to validate our working hypothesis. A qualitative analysis was also carried out. The results of both analyses point the way to new proposals for teaching and learning English listening-comprehension skills, based on innovative multimedia instructional designs within a blended-learning environment.
Max, Aurélien. "De la création de documents normalisés à la normalisation de documents en domaine contraint". Grenoble 1, 2003. http://www.theses.fr/2003GRE10227.
Well-formedness conditions on documents in constrained domains are often hard to apply. An active research trend approaches the authoring of normalized documents through semantic specification, thereby facilitating applications such as multilingual production. However, current systems are not able to analyse an existing document in order to normalize it. We therefore propose an approach that reuses the resources of such systems to recreate the semantic content of a document, from which a normalized textual version can be generated. This approach is based on two main paradigms: fuzzy inverted generation, which heuristically finds candidate semantic representations, and interactive negotiation, which allows a domain expert to progressively validate the semantic representation that corresponds to the original document.
Plesca, Cezar. "Supervision de contenus multimédia : adaptation de contenu, politiques optimales de préchargement et coordination causale de flux". Phd thesis, 2007. http://oatao.univ-toulouse.fr/7600/1/plesca.pdf.
Texto completoBourque, Annie-Claude. "Analyse de contenu de modèles globaux d'intervention pour une meilleure compréhension du processus de priorisation des stratégies d'intervention spécifiques". Thèse, 2019. http://depot-e.uqtr.ca/id/eprint/9341/1/eprint9341.pdf.