Dissertations / Theses on the topic 'Visualisation des données mobiles'
Laucius, Salvijus. "Visualisation dynamique d'informations géographiques pour un utilisateur mobile." La Rochelle, 2008. http://www.theses.fr/2008LAROS234.
In this thesis, we study the principles for managing and indexing spatial data located on a client in a client-server architecture, as well as downloading data to the client by anticipating the user's next move. The system is designed to assist the user when moving around an urban area. The client downloads parts of the digital maps as the user moves. To avoid re-downloading spatial objects at every move, they are kept in the client's cache memory. To speed up the search for spatial objects in the cache, we propose creating an index on the client. In order to decide which type of index to use, we studied several indexing techniques; this study allowed us to compare their performance and assess how relevant each technique is for the development and use of our system. The choice of indexing mechanisms led to the definition of a model for assessing the cost of using them when processing spatial queries, as well as the cost of updating them, in the framework of our system. This theoretical study confirmed the interest of using an index on the client. To reduce the cost of updating the index on the client, we studied incremental transfer of the index. Cache-release techniques were studied to avoid saturating the cache on long journeys. The objective of move anticipation is to adapt data loading to the user's movements: the query area on the client is distorted (its extent unchanged) according to the direction of movement, and may then be reduced to shorten data loading time. We propose various strategies for deciding when to send queries to the server, either at each move or at the estimated best time. On the client, the visualization system is a Java application running on a PDA equipped with a GPS that gives the position of the user and a cellular phone providing the connection to a distant server.
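To make the cache-and-index mechanism concrete, here is a minimal Python sketch. It is hypothetical: the grid-cell index, class names and parameters are illustrative assumptions (the thesis compares several index structures rather than prescribing this one). It shows a client-side cache indexed by grid cells and a fixed-area query window shifted toward the direction of movement, as described above.

```python
import math
from collections import defaultdict

CELL = 100.0  # cache-cell size in metres (assumed value)

class SpatialCache:
    """Client-side cache: objects indexed by grid cell for fast lookup."""
    def __init__(self):
        self.cells = defaultdict(dict)  # (i, j) -> {object_id: geometry}

    def _cell(self, x, y):
        return (int(x // CELL), int(y // CELL))

    def insert(self, obj_id, x, y, geometry):
        self.cells[self._cell(x, y)][obj_id] = geometry

    def query(self, xmin, ymin, xmax, ymax):
        """Return cached objects intersecting the query window."""
        hits = {}
        for i in range(int(xmin // CELL), int(xmax // CELL) + 1):
            for j in range(int(ymin // CELL), int(ymax // CELL) + 1):
                hits.update(self.cells.get((i, j), {}))
        return hits

def anticipated_window(x, y, heading, half_extent, stretch=0.5):
    """Shift a fixed-area query window toward the direction of movement,
    so data ahead of the user is requested before it is needed."""
    dx = math.cos(heading) * half_extent * stretch
    dy = math.sin(heading) * half_extent * stretch
    return (x - half_extent + dx, y - half_extent + dy,
            x + half_extent + dx, y + half_extent + dy)
```

Only objects missing from the cache would then be requested from the server, and cache-release policies would evict cells left far behind on long journeys.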
Follin, Jean-Michel. "Gestion incrémentale de données multi-résolutions dans un système mobile de visualisation d'informations géographiques." La Rochelle, 2004. http://www.theses.fr/2004LAROS131.
We propose a solution for the presentation and management of vector multi-resolution geodata that takes into account the constraints of the mobile context (limited storage, display capacities and transfer rate). Our solution provides users with the LoD ("Level of Detail") appropriate to the scale, respecting the well-known principle of constant data density. The amount of data exchanged between client and server is minimized in our system by reusing data already available locally whenever possible. An increment corresponds to a sequence of operations allowing the reconstruction of one LoD of an object from another LoD of the same object already available on the client side. Transferring only the increment proves more interesting than downloading an "entire" LoD of the object. We present models of multi-resolution data and transfer, and principles allowing incremental management of data in a mobile geodata visualization system. We then demonstrate the interest of our multi-resolution strategy compared with a mono-resolution one.
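A small sketch of the increment-versus-full-LoD decision described above. Names and data layout are hypothetical; the thesis's actual increment encoding is an operation sequence between LoDs of the same object, which is what `ops` stands in for here.

```python
def choose_transfer(client_lods, obj_id, target_lod, full_payloads, increments):
    """Decide whether to ship a full LoD or an increment for one object.

    client_lods   : {object_id: lod_level} already cached on the client
    full_payloads : {(object_id, lod): size_in_bytes} of complete LoDs
    increments    : {(object_id, from_lod, to_lod): (size_in_bytes, ops)}
    """
    have = client_lods.get(obj_id)
    full_cost = full_payloads[(obj_id, target_lod)]
    if have is not None and (obj_id, have, target_lod) in increments:
        inc_cost, ops = increments[(obj_id, have, target_lod)]
        if inc_cost < full_cost:          # reuse local data: send only the
            return ("increment", ops)     # operation sequence to rebuild
    return ("full", full_cost)
```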
Islam, Mohammad Alaul. "Visualizations for Smartwatches and Fitness Trackers." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG018.
This thesis covers research on how to design and use micro-visualizations for pervasive and mobile data exploration on smartwatches and fitness trackers. People increasingly wear smartwatches that can track and show a wide variety of data. My work is motivated by the potential benefits of data visualizations on small mobile devices such as fitness monitoring armbands and smartwatches. I focus on situations in which visualizations support dedicated data-related tasks on interactive smartwatches. My main research goal in this space is to understand more broadly how to design small-scale visualizations for fitness trackers. Here, I explore: (i) design constraints in the small space through an ideation workshop; (ii) what kind of visualizations people currently see on their watch faces; (iii) a design review and design space of small-scale visualizations; and (iv) the readability of micro-visualizations considering the impact of size and aspect ratio in the context of sleep tracking. The main findings of the thesis are, first, a set of data needs for a sightseeing usage context, which were met with a wealth of dedicated visualization designs that go beyond those commonly seen on watch displays. Second, a predominant display of health & fitness data, with icons accompanying text being the most frequent representation type on current smartwatch faces. Third, a design space for smartwatch face visualizations, which highlights the most important considerations for new data displays on smartwatch faces and other small displays. Last, in the context of sleep tracking, we saw that people performed simple tasks effectively, even with complex visualizations, on both smartwatch and fitness band displays, but more complex tasks benefited from the larger smartwatch size. Finally, I point out important open opportunities for future smartwatch visualization research, such as scalability (e.g., more data, smaller size, and more visualizations), the role of context and the wearer's movement, smartwatch display types, and interactivity. In summary, this thesis contributes to the understanding of visualizations on smartwatches and highlights open opportunities for smartwatch visualization research.
Wambecke, Jérémy. "Visualisation de données temporelles personnelles." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM051/document.
The production of energy, in particular electricity, is the main source of greenhouse gas emissions at the world scale. The residential sector being the most energy consuming, it is essential to act at a personal scale to reduce these emissions. Thanks to the development of ubiquitous computing, it is now easy to collect data about the electricity consumption of the electrical appliances of a household. This possibility has allowed the development of eco-feedback technologies, whose objective is to provide consumers with feedback about their consumption with the aim of reducing it. In this thesis we propose a personal visualization method for time-dependent data based on a what-if interaction, meaning that users can apply modifications to their behavior in a virtual way. In particular, our method allows simulating a change in the usage of the electrical appliances of a household and then visually evaluating the impact of the modifications on the data. This approach has been implemented in the Activelec system, which we evaluated with users on real data. We synthesize the essential design elements for eco-feedback systems in a state of the art. We also outline the limitations of these technologies, the main one being the difficulty users face in finding relevant modifications of their behavior to decrease their energy consumption. We then present three contributions. The first is the development of a what-if approach applied to eco-feedback, as well as its implementation in the Activelec system. The second is the evaluation of our approach in two laboratory studies, in which we assess whether participants using our method manage to find modifications that save energy and require a sufficiently low effort to be applied in reality. The third is the in-situ evaluation of the Activelec system, deployed in three private households and used for approximately one month; this in-situ experiment allows evaluating the usage of our approach in a real domestic context. In all three studies, participants managed to find modifications in the usage of appliances that would save a significant amount of energy while being judged easy to apply in reality. We also discuss the application of our what-if approach to the domain of personal visualization beyond electricity consumption data, defined as the visual analysis of personal data. We present several potential applications to other types of time-dependent personal data, for example related to physical activity or transportation. This thesis opens new perspectives for using a what-if interaction paradigm for personal visualization.
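The what-if idea lends itself to a compact illustration. The sketch below is an assumption-laden analogue of the Activelec interaction, not its implementation: it virtually moves one appliance's usage period within a consumption time series so that the impact can be evaluated visually.

```python
import numpy as np

def what_if_shift(consumption, start, end, new_start):
    """Virtually move one appliance usage period (e.g. run the washing
    machine at a different time) and return the modified series."""
    modified = consumption.copy()
    usage = consumption[start:end].copy()
    modified[start:end] -= usage                  # remove original usage
    modified[new_start:new_start + len(usage)] += usage
    return modified

# Hypothetical example: hourly kWh for one day, with a dishwasher run
# moved from 19:00-21:00 (peak hours) to 13:00-15:00.
day = np.full(24, 0.2)
day[19:21] += 1.5
scenario = what_if_shift(day, 19, 21, 13)
print(day.sum(), scenario.sum())  # total energy unchanged; timing differs
```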
Castanié, Laurent. "Visualisation de données volumiques massives : application aux données sismiques." Thesis, Vandoeuvre-les-Nancy, INPL, 2006. http://www.theses.fr/2006INPL083N/document.
Seismic reflection data are a valuable source of information for the three-dimensional modeling of subsurface structures in hydrocarbon exploration-production. This work focuses on the implementation of visualization techniques for their interpretation. We face both qualitative and quantitative challenges: it is necessary to consider (1) the particular nature of seismic data and of the interpretation process, and (2) the size of the data. Our work addresses these two distinct aspects. 1) From the qualitative point of view, we first highlight the main characteristics of seismic data. Based on this analysis, we implement a volume visualization technique adapted to the specificity of the data. We then focus on the multimodal aspect of interpretation, which consists in combining several sources of information (seismic and structural). Depending on the nature of these sources (volumes only, or both volumes and surfaces), we propose two different visualization systems. 2) From the quantitative point of view, we first define the main hardware constraints involved in seismic interpretation. Guided by these constraints, we implement a generic memory management system. Initially able to couple visualization and data processing on massive data volumes, it is then improved and specialised into a dynamic system for distributed memory management on PC clusters. This latter version, dedicated to visualization, allows regional-scale seismic data (100-200 GB) to be manipulated in real time. The main aspects of this work are studied both in the scientific context of visualization and in the application context of geosciences and seismic interpretation.
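As an illustration of the kind of generic memory manager described above, here is a minimal LRU brick cache in Python. This is a sketch under assumed interfaces, not the thesis's system: out-of-core volume renderers commonly keep a bounded set of volume bricks resident and evict the least recently used one.

```python
from collections import OrderedDict

class BrickCache:
    """LRU cache of volume bricks: a toy version of a memory manager that
    lets visualization and processing share a bounded memory budget."""
    def __init__(self, capacity, load_brick):
        self.capacity = capacity            # max bricks resident in memory
        self.load_brick = load_brick        # I/O callback: brick_id -> data
        self.bricks = OrderedDict()

    def get(self, brick_id):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)    # mark most recently used
        else:
            if len(self.bricks) >= self.capacity:
                self.bricks.popitem(last=False)  # evict least recently used
            self.bricks[brick_id] = self.load_brick(brick_id)
        return self.bricks[brick_id]
```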
Courtès, Ludovic. "Sauvegarde coopérative de données pour dispositifs mobiles." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2007. http://tel.archives-ouvertes.fr/tel-00196822.
Full textWhitbeck, John. "Réseaux mobiles opportunistes : visualisation, modélisation et application aux transferts de charge." Paris 6, 2012. http://www.theses.fr/2012PA066304.
Wireless communicating devices are everywhere and increasingly blend into our everyday lives; they form new opportunistic networks that allow data to flow across often unreliable, unorganized, and heterogeneous wireless networks. By developing new analysis techniques for temporal dynamic graphs, this thesis proposes and implements a strong use case for opportunistic networks: data offloading. Analyzing real-life connectivity graphs is difficult. In this thesis, we develop the plausible mobility approach, which infers, from a given contact trace, a compatible node mobility. Furthermore, we define reachability graphs that capture space-time connectivity. When applied to common contact traces, they show that acceptable delivery ratios for point-to-point communications are often out of reach, regardless of the DTN routing protocol, but that the size of the space-time dominating set tends to be a small fraction of the total number of nodes. Accordingly, we show how opportunistic networks may be used to significantly offload broadcast traffic in situations where two radio technologies coexist, typically a pervasive, low-bitrate, and expensive radio alongside a shorter-range, high-bitrate, and cheaper one. The latter forms the opportunistic network that is used for disseminating most of the content, whereas the former serves both as a control channel for monitoring and as a data channel for bridging the connectivity gaps in the opportunistic network. In this thesis we propose Push-and-Track, a mobility-agnostic framework that leverages an opportunistic network to reliably disseminate content to large numbers of mobile nodes while minimizing the load on the pervasive radio.
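A toy sketch of the Push-and-Track control loop described above. The function and the linear objective are illustrative assumptions (the framework evaluates several objective functions): the idea is that the pervasive radio re-injects copies only when acknowledgements, gathered over the control channel, fall behind a target dissemination curve.

```python
def push_and_track_step(acked, total, elapsed, deadline, target_curve):
    """Decide whether the pervasive (expensive) radio must re-push content.

    acked        : nodes known (via the control channel) to hold the content
    total        : number of subscribed mobile nodes
    elapsed, deadline : seconds since publication / delivery deadline
    target_curve : maps elapsed/deadline in [0,1] to the fraction of
                   nodes expected to be infected by then
    """
    expected = target_curve(elapsed / deadline)
    if acked / total < expected:
        return "push-copies"      # bridge gaps in the opportunistic network
    return "wait"                 # epidemic short-range spread suffices

# A hypothetical linear objective: everyone served by the deadline.
linear = lambda t: min(1.0, t)
print(push_and_track_step(acked=40, total=100, elapsed=30,
                          deadline=60, target_curve=linear))
```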
Auber, David. "Outils de visualisation de larges structures de données." Bordeaux 1, 2002. http://www.theses.fr/2002BOR12607.
Full textSansen, Joris. "La visualisation d’information pour les données massives : une approche par l’abstraction de données." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0636/document.
The evolution and spread of technologies have led to a real explosion of information: our capacity to generate data and our need to analyze it have never been this strong. Still, the problems raised by such accumulation (storage, computation delays, diversity, speed of gathering/generation, etc.) are as severe as the data are big, complex and varied. Information visualization, by its ability to summarize and abridge data, naturally established itself as an appropriate approach. However, it does not by itself solve the problems raised by Big Data: classical visualization techniques are rarely designed to handle such masses of information, and the problems raised by data storage and computation time have repercussions on the analysis system. For example, the distance between the data and the analyst keeps increasing: the place where the data are stored and the place where the user performs the analyses are rarely close. In this thesis, we focus on these issues, and more particularly on adapting information visualization techniques to Big Data. We first focus on relational data: how the existence of a relation between entities is conveyed, and how to improve this conveyance for hierarchical data. Then, we focus on multivariate data and how to handle their complexity for the required computations. Finally, we present the methods we designed to make our techniques compatible with Big Data.
Allouti, Faryel. "Visualisation dans les systèmes informatiques coopératifs." Paris 5, 2011. http://www.theses.fr/2011PA05S003.
Clustering techniques and visualization tools for complex data are two recurring themes in the data mining and knowledge management community. At the intersection of these two themes lie visualization methods such as multidimensional scaling or Self-Organizing Maps (SOM). The SOM is built using the K-means algorithm, to which a notion of neighborhood is added, thereby preserving the topology of the data: learning moves centers that are neighbors on a (generally two-dimensional) grid closer to one another in data space, so as to form a discrete surface representing the distribution of the cloud to explore. In this thesis, we are interested in visualization in a cooperative context, where cooperation is established via asynchronous communication whose medium is e-mail. This tool emerged with the advent of information and communication technology and is widely used in organizations: it allows immediate and fast distribution of information to several persons at the same time, without worrying about their presence. Our objective was to propose a tool for the visual exploration of textual data, namely the files attached to electronic messages. To do so, we combined clustering and visualization methods. We investigated the mixture approach, a very useful contribution to classification, using the multinomial mixture model (Govaert and Nadif, 2007) to determine the classes of files. In addition, we studied the visualization of the obtained classes and documents using multidimensional scaling, DC (Difference of Convex functions) programming, and Kohonen Self-Organizing Maps.
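For reference, a minimal SOM training loop matching the description above: K-means-style updates plus a grid-neighborhood term that preserves topology. Parameter values and the decay schedules are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: each step pulls the best-matching unit and its grid
    neighbors toward a sample, preserving the 2-D grid topology."""
    rng = np.random.default_rng(seed)
    h, w = grid
    codebook = rng.standard_normal((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))   # best unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)       # grid distance
        nbh = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood
        codebook += lr * nbh[:, None] * (x - codebook)
    return codebook.reshape(h, w, -1)
```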
Grignard, Arnaud. "Modèles de visualisation à base d'agents." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066268.
Information visualization is the study of interactive visual representations of abstract data to reinforce human cognition. It is very closely associated with data mining, which allows us to explore, understand and analyze phenomena, systems or data masses whose complexity continues to grow today. However, most existing visualization techniques are not suited to the exploration and understanding of datasets that consist of a large number of individual data from heterogeneous sources sharing many properties with what are commonly called "complex systems". The reason is often the use of monolithic and centralized approaches. This situation is reminiscent of the modeling of complex systems (social sciences, chemistry, ecology, and many other fields) before the progress represented by the generalization of agent-based approaches twenty years ago. In this thesis, I defend the idea that the same approach can be applied with the same success to the field of information visualization. Starting from the now commonly accepted idea that agent-based models offer appropriate representations of the complexity of a real system, I propose an approach based on the definition of agent-based visualization models, to facilitate the visual representation of complex data and to provide innovative support for exploring, programmatically and visually, their underlying dynamics. Just like their software counterparts, agent-based visualization models are composed of autonomous graphical entities that can interact and organize themselves, learn from the data they process and, as a result, adapt their behavior and visual representations. By giving users the ability to describe visualization tasks in this form, my goal is to let them benefit from the flexibility, modularity and adaptability inherent in agent-based approaches. These concepts have been implemented and experimented with on the GAMA modeling and simulation platform, in which I developed a 3D immersive environment offering the user different points of view and ways to interact with agents. Their implementation is validated on models chosen for their properties, supporting a linear progression in complexity that allows us to highlight the concepts of flexibility, modularity and adaptability. Finally, I demonstrate, through the particular case of data visualization, how my approach allows their dynamics to be represented, clarified, or even discovered in real time, and how such progress in visualization can contribute, in turn, to improving the modeling of complex systems.
Bourqui, Romain. "Décomposition et Visualisation de graphes : Applications aux Données Biologiques." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2008. http://tel.archives-ouvertes.fr/tel-00421872.
The work in this thesis has been applied to real data from two domains of biology: metabolic networks and gene-protein interaction networks.
Ventura, Quentin. "Technique de visualisation hybride pour les données spatio-temporelles." Mémoire, École de technologie supérieure, 2014. http://espace.etsmtl.ca/1298/1/VENTURA_Quentin.pdf.
Full textHayat, Khizar. "Visualisation 3D adaptée par insertion synchronisée de données cachées." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2009. http://tel.archives-ouvertes.fr/tel-00400762.
Full textMora, Benjamin. "Nouveaux algorithmes interatifs pour la visualisation de données volumiques." Toulouse 3, 2001. http://www.theses.fr/2001TOU30192.
Full textBarbier, Sébastien. "Visualisation distance temps-réel de grands volumes de données." Grenoble 1, 2009. http://www.theses.fr/2009GRE10155.
Numerical simulations produce ever larger meshes, which can reach tens of millions of tetrahedra. These datasets must be visually analyzed to understand the simulated physical phenomenon and draw conclusions. The computational power available for the scientific visualization of such datasets is often smaller than that used for the numerical simulation; as a consequence, interactive exploration of massive meshes is barely achievable. In this document, we propose a new method to interactively explore massive tetrahedral meshes of over forty million tetrahedra. This method is fully integrated into the simulation process and based on two meshes of the same simulation at different resolutions, one fine and one coarse. A partition of the fine vertices is computed, guided by the coarse mesh. It allows the on-the-fly extraction of a mesh, called a biresolution mesh, mixing the two initial resolutions as in usual multiresolution approaches. The extraction of such meshes is carried out in main memory (CPU), on the latest generation of graphics cards (GPU), and with an out-of-core algorithm, guaranteeing extraction rates never reached in previous work. To visualize the biresolution meshes, a new direct volume rendering (DVR) algorithm is implemented entirely on the graphics card. Approximations can be performed and are evaluated in order to guarantee interactive rendering of any biresolution mesh.
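A schematic illustration of the biresolution extraction described above. Data layout and names are assumptions; the actual algorithms run in main memory, on the GPU, and out-of-core, whereas this sketch only conveys the selection logic.

```python
def extract_biresolution(coarse_cells, partition, focus_cells):
    """Build a biresolution mesh: fine resolution inside the region of
    interest, coarse resolution elsewhere.

    coarse_cells : {coarse_id: coarse_geometry}
    partition    : {coarse_id: [fine_geometry, ...]} - fine vertices grouped
                   under the coarse cell that guided the partition
    focus_cells  : set of coarse_ids intersecting the region of interest
    """
    mesh = []
    for cid, coarse_geom in coarse_cells.items():
        if cid in focus_cells:
            mesh.extend(partition[cid])   # swap in the fine partition
        else:
            mesh.append(coarse_geom)      # keep the cheap coarse cell
    return mesh
```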
El Mahrsi, Mohamed Khalil. "Analyse et fouille de données de trajectoires d'objets mobiles." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0056/document.
In this thesis, we explore two problems related to managing and mining moving-object trajectories. First, we study the problem of sampling trajectory data streams. Storing the entirety of the trajectories provided by modern location-aware devices can entail severe storage and processing overheads; therefore, adapted sampling techniques are necessary in order to discard unneeded positions and reduce the size of the trajectories while preserving their key spatiotemporal features. In streaming environments, this process needs to be conducted on the fly, since the data are transient and arrive continuously. To this end, we introduce a new sampling algorithm called spatiotemporal stream sampling (STSS). This algorithm is computationally efficient and guarantees an upper bound on the approximation error introduced during the sampling process. Experimental results show that STSS achieves good performance and can compete with more sophisticated and costly approaches. The second problem we study is clustering trajectory data in road-network environments. We present three approaches to clustering such data: the first discovers clusters of trajectories that traveled along the same parts of the road network; the second is segment-oriented and aims to group together road segments based on the trajectories they have in common; the third combines both aspects and clusters trajectories and road segments simultaneously. We show how these approaches can be used to reveal useful knowledge about flow dynamics and to characterize traffic in road networks. We also provide experimental results evaluating the performance of our proposals.
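To illustrate the flavor of on-the-fly trajectory sampling, here is a simplified, hypothetical variant in Python. It is not the STSS algorithm itself (whose error bound is established in the thesis), but a dead-reckoning-style filter that drops positions predictable from the last two kept points.

```python
def stream_sample(points, eps):
    """On-the-fly trajectory sampling (simplified, hypothetical variant):
    a new position is discarded when linear extrapolation from the last
    two kept points predicts it within eps.

    points : iterable of (t, x, y); yields the kept subset.
    """
    kept = []
    for t, x, y in points:
        if len(kept) >= 2:
            (t0, x0, y0), (t1, x1, y1) = kept[-2], kept[-1]
            a = (t - t1) / (t1 - t0)                 # extrapolation factor
            px, py = x1 + a * (x1 - x0), y1 + a * (y1 - y0)
            if (px - x) ** 2 + (py - y) ** 2 <= eps ** 2:
                continue                             # predictable: drop it
        kept.append((t, x, y))
        yield (t, x, y)
```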
Roudaut, Anne. "Conception et Evaluation de Technique d'Interaction pour Dispositifs Mobiles." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00005914.
Full textGrignard, Arnaud. "Modèles de visualisation à base d'agents." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066268/document.
Full textInformation visualization is the study of interactive visual representations of abstract data to reinforce human cognition. It is very closely associated with data mining issues which allow to explore, understand and analyze phenomena, systems or data masses whose complexity continues to grow today. However, most existing visualization techniques are not suited to the exploration and understanding of datasets that consist of a large number of individual data from heterogeneous sources that share many properties with what are commonly called "complex systems". The reason is often the use of monolithic and centralized approaches. This situation is reminiscent of the modeling of complex systems (social sciences, chemistry, ecology, and many other fields) before progress represented by the generalization of agent-based approaches twenty years ago. In this thesis, I defend the idea that the same approach can be applied with the same success to the field of information visualization. By starting from the now commonly accepted idea that the agent-based models offer appropriate representations the complexity of a real system, I propose to use an approach based on the definition of agent-based visualization models to facilitate visual representation of complex data and to provide innovative support which allows to explore, programmatically and visually, their underlying dynamics. Just like their software counterparts, agent-based visualization models are composed of autonomous graphical entities that can interact and organize themselves, learn from the data they process and as a result adapt their behavior and visual representations. By providing a user the ability to describe visualization tasks in this form, my goal is to allow them to benefit from the flexibility, modularity and adaptability inherent in agent-based approaches. These concepts have been implemented and experimented on the GAMA modeling and simulation platform in which I developed a 3D immersive environment offering the user different point of views and way to interact with agents. Their implementation is validated on models chosen for their properties, supports a linear progression in terms of complexity, allowing us to highlight the concepts of flexibility, modularity and adaptability. Finally, I demonstrate through the particular case of data visualization, how my approach allows, in real time, to represent, to clarify, or even discover their dynamics and how that progress in terms of visualization can contributing,in turn, to improve the modeling of complex systems
Blanchard, Frédéric. "Visualisation et classification de données multidimensionnelles : Application aux images multicomposantes." Reims, 2005. http://theses.univ-reims.fr/exl-doc/GED00000287.pdf.
The analysis of multicomponent images is a crucial problem, and visualization and clustering are two relevant questions about it. We decided to work in the more general framework of data analysis to answer these questions. The preliminary step of this work is to describe the problems induced by dimensionality and to study current dimensionality reduction methods. The visualization problem is then considered and a contribution is presented: we propose a new method of visualization through color images that provides an immediate and synthetic image of the data, and we present applications. The second contribution lies upstream, with the clustering procedure strictly speaking. We establish a new kind of data representation using rank transformation, fuzziness and aggregation procedures. Its use improves clustering procedures by dealing with clusters of dissimilar density or varying size, and by making them more robust. This work presents two important contributions to the field of data analysis applied to multicomponent images, and the variety of tools involved (originating from decision theory, uncertainty management, data mining and image processing) makes the presented methods usable in many diverse areas beyond multicomponent image analysis.
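The rank transformation at the heart of the second contribution can be sketched directly. This is a minimal version; the thesis combines it with fuzziness and aggregation procedures, which are omitted here.

```python
import numpy as np

def rank_representation(X):
    """Replace each variable by the rank of each observation within it;
    working on ranks makes subsequent clustering robust to clusters of
    dissimilar density or size (scale-free representation)."""
    order = np.argsort(X, axis=0)
    ranks = np.empty_like(order)
    n = X.shape[0]
    for j in range(X.shape[1]):
        ranks[order[:, j], j] = np.arange(n)   # inverse permutation = ranks
    return ranks / (n - 1)                     # normalised to [0, 1]
```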
Do, Thanh-Nghi. "Visualisation et séparateurs à vaste marge en fouille de données." Nantes, 2004. http://www.theses.fr/2004NANT2072.
We present different cooperative approaches using visualization methods and support vector machine (SVM) algorithms for knowledge discovery in databases (KDD). Most existing data mining approaches construct the model in an automatic way; the user is not involved in the mining process. Furthermore, these approaches must be able to deal with the challenge of large datasets. Our work aims at increasing the human role in the KDD process (by way of visualization methods) and at improving the performance (execution time and memory requirements) of methods for mining large datasets. We present: parallel and distributed SVM algorithms for mining massive datasets; interactive graphical methods to explain SVM results; and cooperative approaches to involve the user more significantly in the model construction.
Ammann, Lucas. "Visualisation temps réel de données à deux dimensions et demie." Strasbourg, 2010. https://publication-theses.unistra.fr/public/theses_doctorat/2010/AMMANN_Lucas_2010.pdf.
Heightfield data is now a common representation for several kinds of virtual objects: it is frequently used to represent topographical or scientific data, and 3-dimensional digitisation devices use it to store real objects. However, this kind of data raises issues during manipulation, and especially during visualisation. In this thesis, we develop simple yet efficient methods to render heightfield data, especially data from the digitisation of art paintings. In addition to the visualisation method, we propose a complete pipeline to acquire art paintings and to process the data for visualisation. To generalise the proposed approach, another rendering method is described to display topographical data, combining rasterization with ray-casting. Both rendering techniques are based on an adaptive mechanism that combines several rendering algorithms to enhance visualisation performance. These mechanisms are designed to avoid pre-processing steps and to rely on straightforward rendering methods.
Benmerzoug, Fateh. "Analyse, modélisation et visualisation de données sismiques à grande échelle." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30077.
The main goal of the oil and gas industry is to locate and extract hydrocarbon resources, mainly petroleum and natural gas. To do this efficiently, numerous seismic measurements are conducted to gather as much data as possible on the terrain or marine surface area of interest. Using a multitude of sensors, seismic data are acquired and processed, resulting in large cube-shaped data volumes. These volumes are then used to compute additional attributes that help in understanding the inner geological and geophysical structure of the earth. The visualization and exploration (called surveys) of these volumes are crucial to understand the structure of the underground and to localize the natural reservoirs where oil or gas are trapped. Recent advances in both processing and imaging technologies enable engineers and geoscientists to perform larger seismic surveys: modern seismic measurements yield data volumes of multiple hundreds of gigabytes. The size of the acquired volumes presents a real challenge, both for processing and for storage and distribution; thus, data compression is a much-desired feature that helps answer the data size challenge. Another challenging aspect is the visualization of such large volumes. Traditionally, a volume is sliced both vertically and horizontally and visualized by means of 2-dimensional planes. This method requires the user to manually scroll back and forth between successive slices in order to locate and track interesting geological features. Even though slicing provides a detailed visualization with a clear and concise representation of the physical space, it lacks the depth aspect that can be crucial to the understanding of certain structures; additionally, the larger the volume gets, the more tedious and repetitive this task becomes. A more intuitive approach is volume rendering: rendering the seismic data as a volume presents an intuitive, hands-on approach, where, by defining appropriate color and opacity filters, the user can extract and visualize entire geobodies as individual continuous objects in a 3-dimensional space. In this thesis, we present a solution to both the data size and the large data visualization challenges. We give an overview of the seismic data and attributes present in a typical seismic survey, and an overview of data compression as a whole, discussing the tools and methods used in the industry. A seismic data compression algorithm is then proposed, based on the concept of extended transforms: by employing GenLOT (Generalized Lapped Orthogonal Transforms), we derive an appropriate transform filter that decorrelates the seismic data so they can be further quantized and encoded using P-SPECK, our proposed compression algorithm based on block coding of bit-planes. Furthermore, we propose a ray-casting, out-of-core volume rendering framework that enables the visualization of arbitrarily large seismic cubes: data are streamed on demand and rendered using user-provided opacity and color filters, resulting in a fairly easy-to-use software package.
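The opacity/color filtering used to isolate geobodies corresponds to standard front-to-back compositing along each ray. Here is a minimal sketch; the transfer functions, sampling, and the early-termination threshold are assumptions, and the out-of-core streaming is reduced to an iterator.

```python
import numpy as np

def ray_cast(samples, tf_color, tf_alpha):
    """Front-to-back compositing along one ray: user-provided transfer
    functions map seismic amplitude to color and opacity, so whole
    geobodies can be isolated visually."""
    color = np.zeros(3)
    alpha = 0.0
    for v in samples:                       # samples fetched on demand
        a = tf_alpha(v) * (1.0 - alpha)     # weight by remaining transparency
        color += a * np.asarray(tf_color(v))
        alpha += a
        if alpha > 0.99:                    # early ray termination
            break
    return color, alpha
```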
Lanrezac, André. "Interprétation de données expérimentales par simulation et visualisation moléculaire interactive." Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7133.
The goal of Interactive Molecular Simulations (IMS) is to observe the conformational dynamics of a molecular simulation in real time. Instant visual feedback enables informative monitoring and observation of the structural changes imposed by the user's manipulation of the IMS. I conducted an in-depth survey to gather and synthesize the research that has developed IMS. Interactive Molecular Dynamics (IMD) is one of the first IMS protocols and laid the foundation for the development of this approach. My thesis laboratory was inspired by IMD to develop the BioSpring simulation engine, based on the elastic network model. This model allows simulating the flexibility of large biomolecular ensembles, potentially revealing long-timescale changes that would not easily be captured by molecular dynamics. This simulation engine, along with the UnityMol visualization software (developed with the Unity3D game engine) and linked through the MDDriver communication interface, has been extended to converge towards a complete software suite. The goal is to provide an experimenter, whether expert or novice, with a complete toolbox for modeling, displaying, and interactively controlling all parameters of a simulation. The particular implementation of such a protocol, based on formalized and extensible communication between the different components, was designed to easily integrate new possibilities for interactive manipulation and new sets of experimental data added to the restraints imposed on the simulation. The user can therefore manipulate the molecule of interest under the control of biophysical properties integrated into the simulated model, while dynamically adjusting simulation parameters. Furthermore, one of the initial objectives of this thesis was to integrate the management of ambiguous interaction restraints from the HADDOCK biomolecular docking software directly into UnityMol, making it possible to use these same restraints with a variety of simulation engines. A primary focus of this research was to develop a fast and interactive algorithm for positioning proteins in implicit membranes, using the IMPALA method (Integrative Membrane Protein and Lipid Association) developed by Robert Brasseur's team in 1998. The first step was an in-depth investigation of the conditions under which the original experiments were performed, in order to verify the method and validate our own implementation; as we will see, this opens interesting questions about how scientific experiments can be reproduced. The final step of this thesis was the development of a new universal lipid-protein interaction method, UNILIPID, an interactive model for incorporating proteins into implicit membranes. It is independent of the representation scale and can be applied at the all-atom, coarse-grain, or grain-by-grain level. The latest Martini3 representation, a Monte Carlo sampling method and rigid-body dynamics simulation have been integrated into the method, together with various system preparation tools. Furthermore, UNILIPID is a versatile approach that precisely reproduces experimental hydrophobicity terms for each amino acid. Beyond simple implicit membranes, I describe an analytical implementation of double membranes as well as a generalization to arbitrarily shaped membranes, both of which rely on novel applications.
Décoret, Xavier. "Pré-traitement de grosses bases de données pour la visualisation interactive." Phd thesis, Université Joseph Fourier (Grenoble), 2002. http://tel.archives-ouvertes.fr/tel-00528890.
Full textDa, Costa David. "Visualisation et fouille interactive de données à base de points d'intérêts." Tours, 2007. http://www.theses.fr/2007TOUR4021.
In this thesis, we address the problem of visual data mining. We generally observe that existing methods are specific to particular types of data, and that analyzing the results to obtain an answer about the shape of the data takes a long time. We have therefore developed an interactive visualization environment for data exploration using points of interest. This tool visualizes all types of data and is generic because it relies on a single similarity measure. Such methods must be able to deal with large data sets; we also sought to improve the performance of our visualization algorithms, and managed to represent one million data items. We also extended our tool to data clustering. Most existing data clustering methods work in an automatic way, without involving the user in the process. We try to involve the user more significantly in the clustering process, in order to improve their understanding of the results.
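A minimal sketch of a points-of-interest layout consistent with the description above. The barycentric placement rule is an assumption; the key point it illustrates is that only a single similarity measure is needed, which is what makes such a tool generic across data types.

```python
import numpy as np

def poi_layout(similarity, anchors):
    """Place each data item inside the convex hull of points of interest,
    attracted to each POI in proportion to a single similarity measure.

    similarity : (n_items, n_pois) non-negative similarity matrix
    anchors    : (n_pois, 2) screen positions of the POIs
    """
    weights = similarity / similarity.sum(axis=1, keepdims=True)
    return weights @ anchors     # barycentric placement

# Hypothetical use: 3 POIs on a triangle, items placed by similarity only.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
sim = np.array([[0.9, 0.1, 0.2], [0.1, 0.8, 0.3]])
print(poi_layout(sim, anchors))
```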
Dautraix, Isabelle. "La stéréo-échographie : une nouvelle technique de visualisation volumique de données ultrasonores." Lyon, INSA, 1993. http://www.theses.fr/1993ISAL0012.
This work concerns the presentation in relief of medical ultrasonic data obtained by B-mode echography. The originality of this study consists in transposing classical stereoscopic techniques into the field of ultrasonic imaging. By analogy with X-ray stereo-radiography, the principle of stereo-echography is drawn up. A homogeneous volume of data in Cartesian coordinates is obtained by interpolation of the ultrasonic cross-sectional images acquired by translation or rotation of an echographic probe. Using the collinearity equations of photogrammetry, two stereo-echograms of the ultrasonic volume are computed. The parameters of the simulated shooting systems are chosen so as to optimize the visual restitution of the relief. Moreover, to make the restitution of the relief easier, the ultrasonic data can be pre-processed before the computation of the stereo-echograms, and synthetic data (simple geometrical volumes) can be added. The algorithms for computing the stereo-echograms were first validated on physical phantoms and then applied to actual ultrasonic data (liver, foetus, etc.). Contrary to surface or volume rendering techniques, which are difficult to apply to ultrasonic data that are by nature very noisy, stereo-echography provides a presentation in true relief which makes the interpretation and understanding of complicated 3D structures easier.
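The collinearity equations mentioned above have the standard photogrammetric form; the notation below is generic (rotation matrix, projection centre and principal distance are the usual symbols, not values from the thesis).

```latex
% A voxel (X, Y, Z) projects into image coordinates (x, y) for a simulated
% camera with projection centre (X_0, Y_0, Z_0), rotation matrix
% R = (r_{ij}) and principal distance f. Evaluating these equations for
% two simulated projection centres yields the left and right
% stereo-echograms.
\[
x = -f \,
\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}
     {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)},
\qquad
y = -f \,
\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}
     {r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}.
\]
```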
Royan, Jérôme. "Visualisation interactive de scènes urbaines vastes et complexes à travers un réseau." Rennes 1, 2005. http://www.theses.fr/2005REN1S013.
Full textPouderoux, Joachim. "Création semi-automatique de modèles numériques de terrains - Visualisation et interaction sur terminaux mobiles communicants." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2007. http://tel.archives-ouvertes.fr/tel-00354701.
We first address the creation of these models from a major source of topographic data: topographic maps. We present a complete processing chain for generating a DTM (digital terrain model) from a scanned topographic map, detailing in particular new methods for reconstructing contour lines and interpolating them to generate the DTM. The different works carried out on this theme are integrated into the AutoDEM software platform, which we developed during this thesis.
In a second part, we present a new technique for visualizing DTMs in 3D on a wide range of devices, from workstations connected to large screens down to low-capacity mobile terminals such as PDAs or mobile phones. The major interest of the presented technique, which relies on a connected client-server mode, lies in the dynamic adaptation of the 3D model to the display capacities of the terminal. We also consider remote rendering techniques and present two techniques offering, on the one hand, real-time interactive visualization and, on the other hand, a virtual panorama for the user.
Finally, we describe new techniques allowing a mobile user equipped with a mobile terminal to navigate and interact with geographic data (2D maps or plans, and 3D scenes). The first is a tangible, bimanual interaction technique relying on the detection, by video stream analysis, of a target carrying a color code. The second is a two-level selection technique adapted to mobile terminals lacking a continuous pointing device.
Kaba, Bangaly. "Décomposition de graphes comme outil de regroupement et de visualisation en fouille de données." Clermont-Ferrand 2, 2008. http://www.theses.fr/2008CLF21871.
Full textTrellet, Mikael. "Exploration et analyse immersives de données moléculaires guidées par la tâche et la modélisation sémantique des contenus." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS262/document.
In structural biology, the theoretical study of molecular structures involves four main activities organized in the following scenario: collection of experimental and theoretical data, visualization of 3D structures, molecular simulation, and analysis and interpretation of results. This pipeline allows the expert to develop new hypotheses, verify them experimentally, and produce new data as a starting point for a new scenario. The explosion in the amount of data to handle in this loop raises two problems. First, the resources and time dedicated to transferring and converting data between each of these four activities increase significantly. Second, the complexity of molecular data generated by new experimental methodologies greatly increases the difficulty of properly collecting, visualizing and analyzing the data. Immersive environments are often proposed to address the quantity and increasing complexity of the modeled phenomena, especially during the visualization activity. Indeed, virtual reality offers high-quality stereoscopic perception, useful for a better understanding of inherently three-dimensional molecular data. It can also display a large amount of information thanks to large display surfaces, and complete the immersive feeling with other sensorimotor channels (3D audio, haptic feedback, etc.). However, two major factors hinder the use of virtual reality in the field of structural biology. On the one hand, although there is literature on navigation in realistic virtual scenes, navigation in abstract scientific scenes is still very little studied; yet the understanding of complex 3D phenomena is particularly conditioned by the subject's ability to locate themselves within the phenomenon. The first objective of this thesis is therefore to propose 3D navigation paradigms adapted to molecular structures of increasing complexity. On the other hand, the interactive context of immersive environments encourages direct interaction with the objects of interest, but the activities of result collection, simulation and analysis assume a working environment based on command-line input or on tool-specific scripts; the use of virtual reality is therefore usually restricted to the exploration and visualization of molecular structures. The second objective of this thesis is thus to bring all these activities, previously carried out in independent interactive application contexts, into a single homogeneous interactive context. Beyond minimizing the time spent on data management between different work contexts, the aim is also to present molecular structures and analyses jointly and simultaneously, and to allow their manipulation through direct interaction. Our contribution meets these objectives by building on an approach guided by both the content and the task. More precisely, navigation paradigms have been designed taking into account the molecular content, especially geometric properties, and the expert's tasks, to facilitate spatial referencing in molecular complexes and to make the exploration of these structures more efficient. In addition, formalizing the nature of molecular data, their analyses and their visual representations allows analyses adapted to the nature of the data to be proposed interactively, and links to be created between molecular components and the associated analyses. These features rely on the construction of a unified and powerful semantic representation, making it possible to integrate these activities into a single interactive context.
Devulder, Grégory. "Base de données de séquences, phylogénie et identification bactérienne." Lyon 1, 2004. http://www.theses.fr/2004LYO10164.
Full textWang, Nan. "Visualisation et interaction pour l’exploration et la perception immersive de données 3D." Thesis, Paris, ENMP, 2012. http://www.theses.fr/2012ENMP0090/document.
The objective in this case is not only to be realistic, but also to provide new and intelligible ways of representing models. This raises new issues in data perception. The perception of complex data, especially regarding visual feedback, is an open question, and it is the subject of this work. This PhD thesis studied human perception of complex datasets in immersive virtual environments; one application is the scientific visualization of scalar values stemming from physics models, such as the temperature distribution inside a vehicle prototype. The objective of the first part is to study the perceptive limits of volumetric rendering for the display of scientific volumetric data, such as a point-cloud rendering of a volumetric temperature distribution. We investigate the effect on user perception of three properties of a point-cloud volumetric rendering: point size, cloud density, and near clipping plane position. We present an experiment in which a series of pointing tasks are proposed to a set of users; user behavior and task completion time are evaluated during the test. The study allowed us to choose the most suitable combination of these properties and provided guidelines for volumetric data representation in immersive VR systems. In the second part of our work, we evaluate one interaction method and four display techniques for exploring volumetric datasets in immersive virtual reality environments. We propose an approach based on displaying a subset of the volumetric data as isosurfaces, with interactive manipulation of the isosurfaces allowing the user to look for local properties in the dataset. We also studied the influence of four different techniques for isosurface rendering in a virtual reality system. The study is based on a search-and-point task in a 3D temperature field; user precision, task completion time and user movement are evaluated during the test. The study allowed us to choose the most suitable rendering mode for isosurface representation and provided guidelines for data exploration tasks in immersive environments.
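As a desktop analogue of the isosurface manipulation described above, the sketch below extracts an isosurface from a scalar field with marching cubes (using scikit-image, assumed available; the immersive system itself renders and manipulates the surfaces in VR, which this sketch does not reproduce).

```python
import numpy as np
from skimage import measure   # assumed available; any marching-cubes works

def isosurface(field, level):
    """Extract the isosurface that the user drags through the scalar field."""
    verts, faces, normals, _ = measure.marching_cubes(field, level=level)
    return verts, faces, normals

# Hypothetical temperature field on a grid; the user would sweep `level`
# interactively to search for local hot spots.
x, y, z = np.mgrid[-2:2:40j, -2:2:40j, -2:2:40j]
temp = np.exp(-(x**2 + y**2 + z**2))
v, f, n = isosurface(temp, level=0.5)
print(len(v), "vertices,", len(f), "triangles")
```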
Coninx, Alexandre. "Visualisation interactive de grands volumes de données incertaines : pour une approche perceptive." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00749885.
Full textVerbanck, Marie. "Analyse exploratoire de données transcriptomiques : de leur visualisation à l'intégration d’information extérieure." Rennes, Agrocampus Ouest, 2013. http://www.theses.fr/2013NSARG011.
We propose new exploratory statistics methodologies dedicated to the analysis of transcriptomic data (DNA microarray data). Transcriptomic data provide an image of the transcriptome, which is itself the result of phenomena of activation or inhibition of gene expression. However, this image of the transcriptome is noisy, which is why we first focus on the issue of denoising transcriptomic data in a visualisation framework. To do so, we propose a regularised version of principal component analysis, which allows the underlying signal of noisy data to be better estimated and visualised. In addition, one may wonder whether knowledge of the transcriptome alone is enough to understand the complexity of the relationships between genes; we therefore propose to actively integrate other sources of information about genes into the analysis of transcriptomic data. Two major mechanisms appear to be involved in the regulation of gene expression: regulatory proteins (for instance, transcription factors) and regulatory networks on the one hand, and chromosomal localisation and genome architecture on the other. First, focusing on the regulation of gene expression by regulatory proteins, we propose a gene clustering algorithm based on the integration of functional knowledge about genes, provided by Gene Ontology annotations. This algorithm produces clusters of genes that have both similar expression profiles and similar functional annotations, and which are thus better candidates for interpretation. Second, we propose to link the study of transcriptomic data to chromosomal localisation in a methodology developed in collaboration with geneticists.
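The regularised PCA idea can be sketched as singular-value shrinkage. This is a sketch only: the noise-variance estimator below is a simple assumed choice, not necessarily the thesis's exact formula.

```python
import numpy as np

def regularised_pca(X, ncp):
    """Denoise a data matrix by shrinking the singular values of its
    centred version, so the reconstruction favours signal over noise."""
    mean = X.mean(axis=0)
    Xc = X - mean
    n, p = Xc.shape
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
    rss = (d[ncp:] ** 2).sum()                      # residual sum of squares
    sigma2 = rss / ((n - 1 - ncp) * (p - ncp))      # assumed noise estimate
    shrunk = np.maximum(d[:ncp] - sigma2 / d[:ncp], 0.0)
    return (U[:, :ncp] * shrunk) @ Vt[:ncp] + mean  # denoised data
```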
Doan, Nath-Quang. "Modèles hiérarchiques et topologiques pour le clustering et la visualisation des données." Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_doan.pdf.
This thesis focuses on clustering approaches inspired by topological models and on an autonomous hierarchical clustering method. The clustering problem becomes more complicated and difficult due to the growth in quality and quantity of structured data such as graphs, trees or sequences. We are particularly interested in self-organizing maps, which have generally been used for topology-preserving learning, clustering, vector quantization and graph visualization. Our study also concerns the hierarchical clustering method AntTree, which models the ability of real ants to build structures by connecting to one another. By combining the topological map with self-assembly rules inspired by AntTree, the goal is to represent data in a hierarchical and topological structure providing more insight into the data. The advantage is to visualize the clustering results as multiple hierarchical trees and a topological network. In this report, we present three new models that address clustering, visualization and feature selection problems. In the first model, our study shows the interest of using a hierarchical and topological structure through several applications on numerical datasets as well as structured datasets, e.g. graphs and a biological dataset. The second model consists of a flexible and growing structure which does not impose strict network-topology preservation rules; using statistical characteristics provided by the hierarchical trees, it significantly accelerates the learning process. The third model particularly addresses the issue of unsupervised feature selection: the idea is to use the hierarchical structure provided by AntTree to automatically discover local data structure and local neighbors and, using the tree topology, we propose a new score for feature selection by constraining the Laplacian score. Finally, this thesis offers several perspectives for future work.
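For the third model, the (unconstrained) Laplacian score that serves as the starting point can be written compactly. In the sketch below, the similarity graph W is a free input; deriving it from the AntTree topology, as suggested by the description above, is an assumption of this illustration.

```python
import numpy as np

def laplacian_scores(X, W):
    """Laplacian score of each feature: features that respect the local
    neighborhood structure encoded in the similarity graph W get a
    *lower* score, hence are preferred for unsupervised selection."""
    D = np.diag(W.sum(axis=1))
    L = D - W                                  # graph Laplacian
    ones = np.ones(X.shape[0])
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_t = f - (f @ D @ ones) / (ones @ D @ ones) * ones   # D-centering
        scores.append((f_t @ L @ f_t) / (f_t @ D @ f_t))
    return np.array(scores)
```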
Esson, François. "Un logiciel de visualisation et de classification interactives de données quantitatives multidimensionnelles." Lille 1, 1997. http://www.theses.fr/1997LIL10089.
To each new configuration of the reference frame (viewpoint, viewing direction) corresponds a different planar representation of the set of data points. The generalization of this concept to dimension n is the basis of the work carried out. The software resulting from this new interactive approach to multidimensional classification and to the planar representation of multidimensional data should provide an interesting working tool for researchers who, without being specialists in data analysis or programming, need to use a classification approach in their work.
Boudjeloud-Assala, Baya Lydia. "Visualisation et algorithmes génétiques pour la fouille de grands ensembles de données." Nantes, 2005. http://www.theses.fr/2005NANT2065.
Full textWe present cooperative approaches that combine interactive visualization methods with automatic dimension selection methods for knowledge discovery in databases. Most existing data mining methods work automatically, without involving the user in the process. We try to involve the user more significantly in the data mining process, in order to improve both their confidence in and their comprehension of the obtained models and results. Furthermore, since the size of data sets is constantly increasing, these methods must be able to deal with large data sets, and we try to improve the performance of the algorithms on high-dimensional data. We developed a genetic algorithm for dimension selection with a distance-based fitness function for outlier detection in high-dimensional data sets. This algorithm uses only a few dimensions to find the same outliers as in the whole data set and can easily handle high-dimensional data. Since the number of dimensions used is low enough, visualization methods can also be used to explain and interpret the results of the outlier detection algorithm. It is then possible for the data expert to build a model, for example to qualify the detected element as a true outlier or simply an error. We have also developed an evaluation measure for dimension selection in unsupervised classification and outlier detection. This measure enables us to find the same clusters as in the data set with all its dimensions, as well as clusters containing very few elements (outliers). Visual interpretation of the results shows the dimensions involved, which can be considered relevant and interesting for clustering and outlier detection. Finally, we present a semi-interactive genetic algorithm that involves the user more significantly in the selection and evaluation steps of the algorithm
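A toy version of such a genetic algorithm might look as follows: individuals are boolean masks over the dimensions, and the fitness is a classical distance-based outlier score computed on the selected dimensions only. Selection plus mutation only; crossover and the thesis's exact fitness function are omitted.

```python
import numpy as np

def outlier_score(X, mask):
    """Distance-based score on the selected dimensions: the largest
    nearest-neighbour distance over all points (illustrative)."""
    sub = X[:, mask]
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).max()

def ga_select_dims(X, n_dims, pop=30, gens=40, p_mut=0.05, seed=0):
    """Toy genetic algorithm over boolean dimension masks (selection
    and mutation only; crossover omitted for brevity)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    popu = rng.random((pop, d)) < (n_dims / d)          # initial masks
    for _ in range(gens):
        fit = np.array([outlier_score(X, m) if m.any() else -np.inf
                        for m in popu])
        popu = popu[np.argsort(fit)[::-1]]              # rank by fitness
        elite = popu[: pop // 2]
        children = elite[rng.integers(0, len(elite), pop - len(elite))].copy()
        children ^= rng.random(children.shape) < p_mut  # flip a few bits
        popu = np.vstack((elite, children))
    fit = np.array([outlier_score(X, m) if m.any() else -np.inf for m in popu])
    return popu[np.argmax(fit)]
```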
Allanic, Marianne. "Gestion et visualisation de données hétérogènes multidimensionnelles : application PLM à la neuroimagerie." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2248/document.
Full textThe neuroimaging domain is confronted with issues in analyzing and reusing the growing amount of heterogeneous data it produces. Data provenance is complex – multi-subject, multi-method, multi-temporality – and the data are only partially stored, restricting multimodal and longitudinal studies. In particular, functional brain connectivity is studied to understand how areas of the brain work together. Raw and derived imaging data must be properly managed along several dimensions, such as acquisition time, time between two acquisitions, or subjects and their characteristics. The objective of the thesis is to allow exploration of complex relationships between heterogeneous data, which we address in two parts: (1) how to manage the data and their provenance, (2) how to visualize the structures of multidimensional data. The contributions follow a logical sequence of three propositions, presented after a survey of research in heterogeneous data management and graph visualization. The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model organizes the management of neuroimaging data according to the phases of a study and takes into account the scalability of research thanks to specific classes associated with generic objects. The application of this model in a PLM (Product Lifecycle Management) system shows that concepts developed twenty years ago for the manufacturing industry can be reused to manage neuroimaging data. GMDs (Dynamic Multidimensional Graphs) are introduced to represent complex dynamic relationships between data, together with the JGEX (Json Graph EXchange) format, created to store and exchange GMDs between software applications. The OCL (Overview Constraint Layout) method allows interactive visual exploration of GMDs; it is based on preserving the user's mental map and on alternating complete and reduced views of the data. The OCL method is applied to the study of resting-state functional brain connectivity of 231 subjects, represented by a GMD – brain areas are the nodes and connectivity measures the edges – according to age, gender and laterality; the GMDs are computed through a processing workflow on MRI acquisitions within the PLM system. The results show two main benefits of the OCL method: (1) identification of global trends along one or several dimensions, and (2) highlighting of local changes between GMD states
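The exact JGEX schema is not given here, so the following serialisation of a dynamic multidimensional graph is purely hypothetical (field names are invented for illustration): one node/edge list plus one weight vector per dimension state.

```python
import json

def graph_to_json(nodes, edges, states):
    """Hypothetical exchange format for a dynamic multidimensional
    graph (field names invented; the real JGEX schema may differ):
    one node/edge list plus one weight vector per dimension state."""
    doc = {
        "nodes": [{"id": n} for n in nodes],
        "edges": [{"source": a, "target": b} for a, b in edges],
        "states": [{"dimension": dim, "weights": w}
                   for dim, w in states.items()],
    }
    return json.dumps(doc, indent=2)

# e.g. two brain areas, one connectivity weight per age group
print(graph_to_json(["area_A", "area_B"], [("area_A", "area_B")],
                    {"age<40": [0.8], "age>=40": [0.6]}))
```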
Lange, Benoît. "Visualisation interactive de données hétérogènes pour l'amélioration des dépenses énergétiques du bâtiment." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20172/document.
Full textEnergy efficiency has become a major issue. Buildings in every country have been identified as energy sinks: they are insufficiently insulated, and the energy lost through their structure represents a major part of energy expenditure. The RIDER project (Research for IT Driven EneRgy efficiency) emerged from this observation; its goal is to develop a new kind of IT system to optimize the energy consumption of buildings. This system is based on a component paradigm and is composed of a pivot model, a data warehouse with a data mining approach, and a visualization tool, the last two components being developed to improve the content of the pivot model. In this manuscript, we focus on the visualization part of the project. The manuscript is composed of two parts: state of the art and contributions. The state of the art comprises basic notions, a chapter on visualization and a chapter on visual analytics. In the contribution part, we present the data model used in this project and the proposed visualizations, and we conclude with two experiments on real data
Boucheny, Christian. "Visualisation scientifique interactive de grands volumes de données : pour une approche perceptive." Grenoble 1, 2009. http://www.theses.fr/2009GRE10021.
Full textWith the fast increase in computing power, numerical simulations of physical phenomena can nowadays rely on up to billions of elements. To extract relevant information from the huge resulting data sets, engineers need visualization tools permitting interactive exploration and analysis of the computed fields. The goal of this thesis is to improve the visualizations performed by engineers by taking into account the characteristics of human visual perception, with a particular focus on the perception of space and volume during the visualization of dense 3D data. First, three psychophysical experiments showed that direct volume rendering, a technique relying on the ordered accumulation of transparencies, provides very ambiguous cues to depth. This is particularly true for static presentations, while the addition of motion and exaggerated perspective cues helps to resolve part of these difficulties. Then, two algorithms were developed to improve depth perception during the visualization of complex 3D structures. They have been implemented on the GPU to achieve interactive rendering independently of the geometric nature of the analysed data. EyeDome Lighting is a new non-photorealistic shading technique that relies on the projected depth image of the scene; it enhances the perception of shapes and relative depths in complex 3D scenes. In addition, a new fast view-dependent cutaway technique has been implemented, which gives access to otherwise occluded objects while providing cues to understand the in-depth structure of the masking objects
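The core idea of depth-image-based shading in the spirit of EyeDome Lighting can be sketched in a few lines: each pixel is darkened according to how much its screen-space neighbours stand in front of it, so the shading needs only the projected depth buffer. The actual EDL operator (multi-scale, log-depth and GPU details) differs; this is a simplified single-scale sketch.

```python
import numpy as np

def eye_dome_shading(depth, strength=100.0):
    """Darken each pixel according to how much its 4 screen-space
    neighbours stand in front of it, using only the depth image
    (borders wrap via np.roll, acceptable for a sketch)."""
    occlusion = np.zeros_like(depth)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        neigh = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        occlusion += np.maximum(0.0, depth - neigh)   # nearer neighbour
    return np.exp(-strength * occlusion)              # 1 = lit, 0 = dark
```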
Bothorel, Gwenael. "Algorithmes automatiques pour la fouille visuelle de données et la visualisation de règles d’association : application aux données aéronautiques." PhD thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/13783/1/bothorel.pdf.
Full text
Dillenseger, Jean-Louis. "Visualisation Scientifique en médecine. Application à la visualisation de l'anatomie et à la visualisation en épileptologie clinique." Habilitation à diriger des recherches, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00130932.
Full textTo this end, a reflection on the visualization tool was conducted in order to propose a well-defined framework that can guide the design of a representation tool suited to a particular discipline and problem. The most original point of this reflection is an attempt to formalize the evaluation of the performance of visualization tools.
Two major application domains have made it possible to demonstrate the relevance of this general visualization framework:
- The general visualization of anatomy, with, first, the design of a generic tool for the visualization of medical data, multifunction ray casting (see the sketch after this list). This tool was then extended along two research axes: on the one hand, the integration of knowledge models into the image synthesis procedure, and on the other hand, interventional imaging, with applications in urology in particular.
- The contributions of visualization to the interpretation of data collected from epileptic patients, and more particularly the development of complementary tools allowing a progressive analysis of the mechanisms and structures involved in the seizure.
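As a tiny stand-in for the multifunction ray-casting idea, the sketch below performs axis-aligned projections through a volume: each output pixel is one ray, and the "multifunction" aspect is approximated by swapping the compositing operator along the ray (the actual tool composes richer operators along arbitrary rays).

```python
import numpy as np

def render(volume, operator="mip", axis=2):
    """Toy axis-aligned ray casting over a 3-D scalar volume: each
    output pixel is one ray; different operators give different
    renderings of the same data."""
    if operator == "mip":        # maximum intensity projection
        return volume.max(axis=axis)
    if operator == "mean":       # average projection (X-ray-like)
        return volume.mean(axis=axis)
    raise ValueError(f"unknown operator: {operator}")

# e.g. a synthetic 64^3 volume rendered two ways
vol = np.random.rand(64, 64, 64)
images = render(vol, "mip"), render(vol, "mean")
```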
Pham, Huy Hoang. "Cohérence des données dans les services coopératifs télécom pour les utilisateurs mobiles." Grenoble INPG, 2002. http://www.theses.fr/2002INPG0057.
Full text
Malinowski, Simon. "Codes joints source-canal pour transmission robuste sur canaux mobiles." Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/malinowski.pdf.
Full textJoint source-channel coding has been an area of intense recent research activity. This is due in particular to the limits of Shannon's separation theorem, which states that source and channel coding can be performed separately in order to reach optimality. Over the last decade, various works have considered performing these operations jointly, and source codes have been studied in depth in this context. In this thesis, we work with these two kinds of codes in the joint source-channel coding context. A state model for soft decoding of variable-length and quasi-arithmetic codes is proposed. This state model is parameterized by an integer T that controls a trade-off between decoding performance and complexity. The performance of these source codes on the aggregated state model is then analyzed, together with their resynchronisation properties; it is hence possible to predict the performance of a given code with respect to the aggregation parameter T. A robust decoding scheme exploiting side information is then presented, where the extra redundancy takes the form of partial length constraints at different instants of the decoding process. Finally, two distributed coding schemes based on quasi-arithmetic codes are proposed. The first is based on puncturing the quasi-arithmetic bit-stream, while the second uses a new kind of code: overlapped quasi-arithmetic codes. The decoding performance of these schemes is very competitive compared to classical techniques based on channel codes
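To make the soft-decoding idea concrete, here is a brute-force maximum-likelihood decoder for a toy variable-length code. The code table and the bit-length termination constraint are illustrative; the thesis's state model achieves the same result efficiently, with the parameter T trading complexity for performance.

```python
import itertools
import math

CODE = {"a": "0", "b": "10", "c": "11"}   # toy variable-length code

def soft_decode(bit_llrs):
    """Brute-force maximum-likelihood decoding from soft bit values
    (positive LLR favours bit '0'): enumerate all symbol sequences
    whose encoding has the observed bit length and keep the best."""
    n = len(bit_llrs)
    best, best_score = None, -math.inf
    for length in range(1, n + 1):
        for seq in itertools.product(CODE, repeat=length):
            bits = "".join(CODE[s] for s in seq)
            if len(bits) != n:
                continue
            score = sum(llr if b == "0" else -llr
                        for b, llr in zip(bits, bit_llrs))
            if score > best_score:
                best, best_score = list(seq), score
    return best

# noisy observation of "ba" (bits 1 0 0) is decoded correctly
print(soft_decode([-2.1, 1.7, 0.9]))   # -> ['b', 'a']
```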
Wan, Tao. "Modélisation et implémentation de systèmes OLAP pour des objets mobiles." Versailles-St Quentin en Yvelines, 2007. http://www.theses.fr/2007VERS0001.
Full textThe rapid growth of geo-location techniques and mobile devices has led to a profusion of mobile object (MO) databases. This raises a new issue concerning their use for decision support. While conventional On-Line Analytical Processing (OLAP) systems are efficiently used for data analysis thanks to their multidimensional modelling, they are not adapted to MOs, which involve information that evolves continuously over time (e.g., position). In this study, we focus on the problem of warehousing such objects. The contributions are: (1) two multidimensional models allowing their on-line analytical processing, one offering a convenient implementation, the other a more powerful representation; (2) an optimized implementation dedicated to answering typical spatio-temporal OLAP queries efficiently in the context of OLAP systems
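The flavour of such a model can be conveyed by a toy roll-up: position facts of moving objects are aggregated into a (zone, hour) count cube, with zone_of and hour_of standing in for the spatial and temporal dimension hierarchies (names invented for illustration, not the thesis's models).

```python
from collections import defaultdict

def cube_count(facts, zone_of, hour_of):
    """Roll up (object, x, y, t) position facts of moving objects into
    a (zone, hour) count cube; zone_of and hour_of stand in for the
    spatial and temporal dimension hierarchies."""
    cube = defaultdict(int)
    for obj, x, y, t in facts:
        cube[(zone_of(x, y), hour_of(t))] += 1
    return dict(cube)

facts = [("bus1", 2.0, 3.5, 3605), ("bus2", 9.1, 0.4, 3720)]
print(cube_count(facts,
                 zone_of=lambda x, y: (int(x // 5), int(y // 5)),
                 hour_of=lambda t: t // 3600))
```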
Jamin, Clément. "Algorithmes et structures de données compactes pour la visualisation interactive d’objets 3D volumineux." Thesis, Lyon 1, 2009. http://www.theses.fr/2009LYO10133.
Full textProgressive compression methods are now mature (the rates obtained are close to theoretical bounds), and interactive visualization of huge meshes has been a reality for a few years. However, even though the combination of compression and visualization is often mentioned as a perspective, very few papers address this problem, and the files created by visualization algorithms are often much larger than the original ones. In fact, compression favors a small file size to the detriment of fast data access, whereas visualization methods focus on rendering speed: the two goals are opposed and competing. Starting from an existing progressive compression method that is incompatible with selective and interactive refinement and usable only on small meshes, this thesis attempts to reconcile lossless compression and visualization by proposing new algorithms and data structures that radically reduce the size of the objects while supporting fast interactive navigation. In addition to this double capability, our method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it has the advantage of dealing with any n-dimensional simplicial complex, including triangle soups and volumetric meshes
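As an illustration of progressive transmission (a simplified stand-in, not the thesis's compressed LoD scheme), vertex coordinates can be quantised and streamed bit-plane by bit-plane, so a coarse mesh is renderable early and refines as more planes arrive.

```python
import itertools
import numpy as np

def progressive_bitplanes(vertices, n_bits=12):
    """Quantise vertex coordinates to n_bits and yield them one
    bit-plane at a time, most significant first."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    q = np.round((vertices - lo) * (2 ** n_bits - 1) / (hi - lo)).astype(np.int64)
    for plane in range(n_bits - 1, -1, -1):
        yield (q >> plane) & 1

def coarse_reconstruct(planes, lo, hi, n_bits=12):
    """Rebuild approximate positions from the planes received so far,
    treating the still-missing low-order bits as zero."""
    q, received = 0, 0
    for bits in planes:
        q = (q << 1) | bits
        received += 1
    q = q << (n_bits - received)
    return lo + q * (hi - lo) / (2 ** n_bits - 1)

# e.g. render a coarse mesh from only the 4 most significant planes
verts = np.random.rand(1000, 3)
coarse = coarse_reconstruct(itertools.islice(progressive_bitplanes(verts), 4),
                            verts.min(axis=0), verts.max(axis=0))
```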
Lê, Thanh Vu. "Visualisation interactive 3D pour un ensemble de données géographiques de très grande taille." Pau, 2011. http://www.theses.fr/2011PAUU3005.
Full textReal-time terrain rendering remains an active area of research for many modern computer-based applications such as geographic information systems (GIS), interactive 3D games, flight simulators and virtual reality. Technological breakthroughs in data acquisition, coupled with recent advances in display technology, have led to substantial increases in the resolution of both the Digital Elevation Models (DEM) and the displays used to present them. In this PhD thesis, we present a new out-of-core terrain visualization algorithm that achieves per-pixel accurate shading of large textured elevation maps in real time. Our first contribution is an LOD scheme based on a small precomputed quadtree of geometric errors, whose nodes are selected for asynchronous loading and rendering depending on a screen-space projection of those errors. The terrain data and its color texture are manipulated by the CPU in a unified manner, as a collection of raster image patches whose dimensions depend on their screen-space occupancy. Our second contribution is a novel method to remove the artifacts that appear on the border between quadtree blocks; we generate a continuous surface without needing an additional mesh. Our last contribution is an effective geomorphing method adapted to our data structure, which can be implemented entirely on the GPU. The presented framework exhibits several interesting features over existing techniques: no mesh manipulation or mesh data structures are required; terrain geometric complexity depends only on the projected elevation error (views from above result in very coarse meshes), and lower geometric complexity degrades terrain silhouettes but not the details brought in through normal-map shading; rendering is real-time, with support for progressive data loading; and geometric information and color textures are handled similarly and efficiently as raster data by the CPU. Thanks to its simplified data structures, the system is compact, CPU- and GPU-efficient, and simple to implement
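The LOD selection described above can be sketched as a recursive screen-space error test over the precomputed quadtree (the distance measure and the pixel-error threshold tau are illustrative; the real system also handles asynchronous loading and texture patches):

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    geom_error: float                 # precomputed max elevation error
    centre: tuple                     # (x, y) block centre in world units
    children: list = field(default_factory=list)

def select_blocks(node, eye, screen_factor, tau=2.0):
    """Refine a quadtree node while its geometric error, projected to
    screen space at the viewing distance, exceeds tau pixels.
    screen_factor is roughly viewport_width / (2 * tan(fov / 2))."""
    dx, dy = node.centre[0] - eye[0], node.centre[1] - eye[1]
    dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    if node.geom_error * screen_factor / dist <= tau or not node.children:
        return [node]                 # coarse enough: render this block
    blocks = []
    for child in node.children:
        blocks += select_blocks(child, eye, screen_factor, tau)
    return blocks
```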