Dissertations / Theses on the topic 'Analyse et visualisation de données'
Consult the top 50 dissertations / theses for your research on the topic 'Analyse et visualisation de données.'
Benmerzoug, Fateh. "Analyse, modélisation et visualisation de données sismiques à grande échelle." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30077.
The main goal of the oil and gas industry is to locate and extract hydrocarbon resources, mainly petroleum and natural gas. To do this efficiently, numerous seismic measurements are conducted to gather as much data as possible on the terrain or marine surface area of interest. Using a multitude of sensors, seismic data are acquired and processed, resulting in large cube-shaped data volumes. These volumes are then used to compute additional attributes that help in understanding the inner geological and geophysical structure of the earth. The visualization and exploration of these volumes, called surveys, are crucial to understand the structure of the underground and to localize natural reservoirs where oil or gas is trapped. Recent advancements in both processing and imaging technologies enable engineers and geoscientists to perform larger seismic surveys. Modern seismic measurements yield data volumes of many hundreds of gigabytes. The size of the acquired volumes presents a real challenge, both for processing such large volumes and for their storage and distribution. Thus, data compression is a much-desired feature that helps answer the data size challenge. Another challenging aspect is the visualization of such large volumes. Traditionally, a volume is sliced both vertically and horizontally and visualized by means of 2-dimensional planes. This method forces the user to manually scroll back and forth between successive slices in order to locate and track interesting geological features. Even though slicing provides a detailed visualization with a clear and concise representation of the physical space, it lacks the depth aspect that can be crucial to the understanding of certain structures. Additionally, the larger the volume gets, the more tedious and repetitive this task becomes. A more intuitive approach for visualization is volume rendering.
Rendering the seismic data as a volume presents an intuitive and hands-on approach. By defining the appropriate color and opacity filters, the user can extract and visualize entire geo-bodies as individual continuous objects in a 3-dimensional space. In this thesis, we present a solution to both the data size and large data visualization challenges. We give an overview of the seismic data and attributes that are present in a typical seismic survey. We present an overview of data compression as a whole, discussing the necessary tools and methods that are used in the industry. A seismic data compression algorithm is then proposed, based on the concept of extended transforms. By employing the GenLOT (Generalized Lapped Orthogonal Transform), we derive an appropriate transform filter that decorrelates the seismic data so they can be further quantized and encoded using P-SPECK, our proposed compression algorithm based on block coding of bit-planes. Furthermore, we propose a ray-casting out-of-core volume rendering framework that enables the visualization of arbitrarily large seismic cubes. Data are streamed on demand and rendered using the user-provided opacity and color filters, resulting in a fairly easy-to-use software package.
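The abstract describes P-SPECK only at a high level. As an illustration of the bit-plane decomposition that block-coding of bit-planes builds on, the following sketch (the function name, block size and plane count are our own illustrative choices, not the thesis's) splits a block of quantized transform coefficients into sign information and bit-planes, most significant first, and checks that the decomposition is lossless:

```python
import numpy as np

def bit_planes(coeffs, n_planes=8):
    """Decompose quantized transform coefficients into bit-planes,
    most significant first; signs are kept separately."""
    signs = coeffs < 0
    mags = np.abs(coeffs).astype(np.uint16)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(n_planes - 1, -1, -1)]
    return signs, planes

# Toy 4x4 block of quantized coefficients
block = np.array([[37, -5, 2, 0],
                  [-9,  3, 0, 0],
                  [ 4,  0, 0, 0],
                  [ 0,  0, 0, 0]])
signs, planes = bit_planes(block)

# Rebuild the block from its planes to verify losslessness
recon = np.zeros(block.shape, dtype=np.uint16)
for p, plane in zip(range(7, -1, -1), planes):
    recon |= plane.astype(np.uint16) << p
recon = np.where(signs, -recon.astype(int), recon.astype(int))
assert np.array_equal(recon, block)
```

A real coder such as P-SPECK then entropy-codes these planes block by block, emitting significant blocks first so the bitstream can be truncated at any target rate.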
Trellet, Mikael. "Exploration et analyse immersives de données moléculaires guidées par la tâche et la modélisation sémantique des contenus." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS262/document.
In structural biology, the theoretical study of molecular structures has four main activities organized in the following scenario: collection of experimental and theoretical data, visualization of 3D structures, molecular simulation, and analysis and interpretation of results. This pipeline allows the expert to develop new hypotheses, to verify them experimentally, and to produce new data as a starting point for a new scenario. The explosion in the amount of data to handle in this loop raises two problems. Firstly, the resources and time dedicated to transferring and converting data between each of these four activities increase significantly. Secondly, the complexity of molecular data generated by new experimental methodologies greatly increases the difficulty of properly collecting, visualizing and analyzing the data. Immersive environments are often proposed to address the quantity and the increasing complexity of the modeled phenomena, especially during the visualization activity. Indeed, virtual reality offers high-quality stereoscopic perception, useful for a better understanding of inherently three-dimensional molecular data. It can also display a large amount of information thanks to large display surfaces, and complete the immersive feeling with other sensorimotor channels (3D audio, haptic feedback, ...). However, two major factors hinder the use of virtual reality in the field of structural biology. On the one hand, although there is literature on navigation in realistic virtual scenes, navigation in abstract scientific environments is still very little studied. The understanding of complex 3D phenomena is, however, particularly conditioned by the subject's ability to orient themselves within them. The first objective of this thesis work is therefore to propose 3D navigation paradigms adapted to molecular structures of increasing complexity.
On the other hand, the interactive context of immersive environments encourages direct interaction with the objects of interest. But the activities of results collection, simulation and analysis assume a working environment based on command-line inputs or specific scripts associated with the tools. Usually, the use of virtual reality is therefore restricted to the exploration and visualization of molecular structures. The second objective of the thesis is then to bring all these activities, previously carried out in independent interactive application contexts, within a homogeneous and unique interactive context. In addition to minimizing the time spent on data management between different work contexts, the aim is also to present molecular structures and analyses jointly and simultaneously, and to allow their manipulation through direct interaction. Our contribution meets these objectives by building on an approach guided by both the content and the task. More precisely, navigation paradigms have been designed taking into account the molecular content, especially geometric properties, and the tasks of the expert, in order to facilitate spatial referencing in molecular complexes and make the exploration of these structures more efficient. In addition, formalizing the nature of molecular data, their analyses and their visual representations makes it possible to interactively propose analyses adapted to the nature of the data and to create links between molecular components and associated analyses. These features rely on the construction of a unified and powerful semantic representation making possible the integration of these activities in a unique interactive context.
Doan, Nath-Quang. "Modèles hiérarchiques et topologiques pour le clustering et la visualisation des données." Paris 13, 2013. http://scbd-sto.univ-paris13.fr/secure/edgalilee_th_2013_doan.pdf.
This thesis focuses on clustering approaches inspired by topological models and on an autonomous hierarchical clustering method. The clustering problem becomes more complicated and difficult due to the growth in quality and quantity of structured data such as graphs, trees or sequences. In this thesis, we are particularly interested in self-organizing maps, which have generally been used for topology-preserving learning, clustering, vector quantization and graph visualization. Our study also concerns a hierarchical clustering method, AntTree, which models the ability of real ants to build structures by connecting to each other. By combining the topological map with the self-assembly rules inspired by AntTree, the goal is to represent data in a hierarchical and topological structure providing more insight into the data. The advantage is to visualize the clustering results as multiple hierarchical trees and a topological network. In this report, we present three new models that are able to address clustering, visualization and feature selection problems. In the first model, our study shows the interest of using a hierarchical and topological structure through several applications on numerical datasets, as well as structured datasets, e.g., graphs and a biological dataset. The second model consists of a flexible and growing structure which does not impose strict network-topology preservation rules. Using statistical characteristics provided by the hierarchical trees, it significantly accelerates the learning process. The third model addresses in particular the issue of unsupervised feature selection. The idea is to use the hierarchical structure provided by AntTree to automatically discover local data structure and local neighbors. By using the tree topology, we propose a new score for feature selection by constraining the Laplacian score. Finally, this thesis offers several perspectives for future work.
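The constrained score itself is not spelled out in the abstract. For reference, here is a minimal sketch of the standard, unconstrained Laplacian score that the thesis builds on; lower scores mark features that better preserve local neighborhood structure. The kNN graph construction and heat-kernel bandwidth are generic textbook choices, not the thesis's:

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Standard Laplacian score per feature: lower means the feature
    varies less across neighboring samples relative to its variance."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbour graph with heat-kernel weights
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self at index 0
        S[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    S = np.maximum(S, S.T)                  # symmetrize the graph
    D = np.diag(S.sum(1))
    L = D - S                               # graph Laplacian
    one = np.ones(n)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D @ one) / (one @ D @ one)  # remove trivial component
        denom = f @ D @ f
        scores.append((f @ L @ f) / denom if denom > 1e-12 else np.inf)
    return np.array(scores)
```

Ranking features by ascending score then keeps those most consistent with the local structure; the thesis's contribution replaces this generic kNN graph with the neighborhood structure induced by the AntTree hierarchy.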
Esson, François. "Un logiciel de visualisation et de classification interactives de données quantitatives multidimensionnelles." Lille 1, 1997. http://www.theses.fr/1997LIL10089.
Each new configuration of the reference frame (viewpoint, viewing direction) corresponds to a different planar representation of the set of data points. The generalization of this concept to dimension n is the basis of the work carried out. The software resulting from this new interactive approach to multidimensional classification and to the planar representation of multidimensional data should provide an interesting working tool for researchers who, without being specialists in data analysis or programming, need to use classification in their work.
Blanchard, Frédéric. "Visualisation et classification de données multidimensionnelles : Application aux images multicomposantes." Reims, 2005. http://theses.univ-reims.fr/exl-doc/GED00000287.pdf.
The analysis of multicomponent images is a crucial problem, and visualization and clustering are two relevant questions about it. We decided to work in the more general frame of data analysis to answer these questions. The preliminary step of this work is to describe the problems induced by dimensionality and to study current dimensionality reduction methods. The visualization problem is then considered and a contribution is exposed. We propose a new method of visualization through color images that provides an immediate and synthetic view of the data. Applications are presented. The second contribution lies upstream of the clustering procedure strictly speaking. We establish a new kind of data representation by using rank transformation, fuzziness and aggregation procedures. Its use improves clustering procedures by dealing with clusters of dissimilar density or varying sizes and by making them more robust. This work presents two important contributions to the field of data analysis applied to multicomponent images. The variety of the tools involved (originating from decision theory, uncertainty management, data mining and image processing) makes the presented methods usable in many diverse areas as well as in multicomponent image analysis.
Ben, othmane Zied. "Analyse et visualisation pour l'étude de la qualité des séries temporelles de données imparfaites." Thesis, Reims, 2020. http://www.theses.fr/2020REIMS002.
This thesis focuses on the quality of the information collected by sensors on the web. These data form time series that are incomplete, imprecise, and on quantitative scales that are not very comparable. In this context, we are particularly interested in the variability and stability of these time series. We propose two approaches to quantify them: the first is based on a representation using quantiles, the second is a fuzzy approach. Using these indicators, we propose an interactive visualization tool dedicated to the analysis of the quality of the data harvested by the sensors. This work is part of a CIFRE collaboration with Kantar.
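The abstract does not detail the quantile-based indicator; as a hedged sketch of the general idea (window length and quantile levels here are illustrative assumptions, not the thesis's), one can compute, per sliding window, the inter-quantile spread of whatever measurements are present, tolerating the gaps typical of incomplete sensor series:

```python
import numpy as np

def quantile_variability(series, window=24, q_low=0.1, q_high=0.9):
    """Variability profile of a possibly gappy series: for each
    sliding window, the spread between two quantiles of the observed
    values. NaNs (missing measurements) are ignored inside a window."""
    series = np.asarray(series, dtype=float)
    out = np.full(series.size, np.nan)
    for i in range(series.size - window + 1):
        w = series[i:i + window]
        w = w[~np.isnan(w)]
        if w.size >= 3:  # need a few points for quantiles to be meaningful
            out[i] = np.quantile(w, q_high) - np.quantile(w, q_low)
    return out
```

Because quantiles are order statistics, the profile is robust to outliers and to the incomparable scales the thesis mentions; a stable sensor yields a profile near zero, an erratic one a large spread.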
Runz, Cyril de. "Imperfection, temps et espace : modélisation, analyse et visualisation dans un SIG archéologique." Reims, 2008. http://theses.univ-reims.fr/exl-doc/GED00000848.pdf.
This thesis develops a global approach for handling spatiotemporal and imperfect data in an archaeological GIS. This approach allows better management of those data in order to model and represent them. In this approach, a new taxonomy of imperfection is proposed for the modeling of archaeological information. Using this modeling, the work presents new methods for data analysis in a GIS. The temporal aspect of archaeological data requires defining an index that quantifies anteriority. The lacunar aspect is also exploited through an interrogation method using a geometrical form. This work finally explores and visualizes an archaeological dataset to extract its most representative elements. This thesis, which gives an approach to the management of imperfect knowledge in time and space, links computer science and geography. The use case of this thesis is an archaeological database associated with a GIS.
Gilbert, Frédéric. "Méthodes et modèles pour la visualisation de grandes masses de données multidimensionnelles nominatives dynamiques." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14498/document.
For the past ten years, the information visualization domain has attracted real interest. Recently, with the growth of communications, research on social network analysis has become strongly active. In this thesis, we present results on dynamic social network analysis, which means that we take into account the temporal aspect of the data. We were particularly interested in the extraction of communities within networks and their evolution through time. [...]
De, Runz Cyril. "Imperfection, temps et espace : modélisation, analyse et visualisation dans un SIG archéologique." Phd thesis, Université de Reims - Champagne Ardenne, 2008. http://tel.archives-ouvertes.fr/tel-00560668.
Madra, Anna. "Analyse et visualisation de la géométrie des matériaux composites à partir de données d’imagerie 3D." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2387/document.
The thesis project between the Laboratoire Roberval at Université de Technologie de Compiègne and the Center for High-Performance Composites at École Polytechnique de Montréal considered the design of a deep learning architecture with semantics for the automatic generation of models of composite material microstructure based on X-ray microtomographic imagery. The thesis consists of three major parts. Firstly, the methods of microtomographic image processing are presented, with an emphasis on phase segmentation. Then, the geometric features of phase elements are extracted and used to classify and identify new morphologies. The method is presented for composites filled with short natural fibers. The classification approach is also demonstrated for the study of defects in composites, with spatial features added to the process. A high-level descriptor, the "defect genome", is proposed, which permits comparison of the state of defects between specimens. The second part of the thesis introduces structural segmentation on the example of a woven reinforcement in a composite. The method relies on dual kriging, calibrated by the segmentation error from learning algorithms. In the final part, a stochastic formulation of the kriging model is presented based on Gaussian processes, and the distribution of physical properties of a composite microstructure is retrieved, ready for numerical simulation of the manufacturing process or of mechanical behavior.
Jourdan, Fabien. "Visualisation d'information : dessin, indices structuraux et navigation : Applications aux réseaux biologiques et aux réseaux sociaux." Montpellier 2, 2004. http://www.theses.fr/2004MON20205.
Full textLoubier, Eloïse. "Analyse et visualisation de données relationnelles par morphing de graphe prenant en compte la dimension temporelle." Phd thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00423655.
Our work leads to the development of graphical techniques for understanding human activities, their interactions, and also their evolution, from a decision-making perspective. We design a tool combining ease of use and analysis precision, based on two complementary types of visualization: static and dynamic.
The static aspect of our visualization model rests on a representation space in which the precepts of graph theory are applied. The use of specific semiologies, such as the choice of representation forms, granularity and meaningful colors, allows a more accurate and precise visualization of the data set. With the user at the heart of our concerns, our contribution rests on specific functionalities that support the identification and detailed analysis of graph structures. We propose algorithms that make it possible to target the role of data within the structure and to analyze their neighborhood, such as filtering, the k-core and transitivity, to return to the source documents, to partition the graph, or to focus on its structural specificities.
A major characteristic of strategic data is its strong evolutivity. However, statistical analysis does not always make it possible to study this component, to anticipate the risks incurred, to identify the origin of a trend, or to observe the actors or terms playing a decisive role within evolving structures.
The main point of our contribution for dynamic graphs, which represent both relational and temporal data, is graph morphing. The objective is to bring out significant trends by first representing a global graph covering all periods, then by animating between the successive visualizations of the graphs attached to each period. This process makes it possible to identify structures or events, to locate them in time, and to make a predictive reading of them.
Thus our contribution allows the representation of information, and more particularly the identification, analysis and restitution of the underlying strategic structures that connect, at given moments, the actors of a domain and the keywords and concepts they use.
Ledieu, Thibault. "Analyse et visualisation de trajectoires de soins par l’exploitation de données massives hospitalières pour la pharmacovigilance." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1B032/document.
The massification of health data is an opportunity to answer questions about vigilance and quality of care. In this thesis work, we present approaches to exploit the diversity and volume of intra-hospital data for pharmacovigilance and for monitoring the proper use of drugs. This approach is based on the modelling of intra-hospital care trajectories adapted to the specific needs of pharmacovigilance. Using data from a hospital warehouse, it is necessary to characterize events of interest and identify a link between the administration of health products and the occurrence of adverse reactions, or to look for cases of misuse of a drug. The hypothesis put forward in this thesis is that an interactive visual approach is suitable for the exploitation of these heterogeneous, multi-domain biomedical data in the field of pharmacovigilance. We have developed two prototypes allowing the visualization and analysis of care trajectories. The first prototype is a tool for visualizing the patient file in the form of a timeline. The second application is a tool for visualizing and searching a cohort of event sequences. The latter tool is based on the implementation of sequence analysis algorithms (Smith-Waterman, Apriori, GSP) to search for similarities or recurring event patterns. These human-machine interfaces have been the subject of usability studies on use cases from actual practice that have proven their potential for routine use.
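The first of the sequence algorithms named above is classical local alignment. As a hedged illustration of how it applies to care trajectories, this minimal Smith-Waterman scorer works on sequences of event codes; the scoring parameters are generic defaults, not those of the prototype:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score between two event sequences: the score of
    the best-matching pair of contiguous sub-trajectories."""
    n, m = len(a), len(b)
    # H[i][j] = best alignment score ending at a[i-1], b[j-1]
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # align the two events
                          H[i - 1][j] + gap,     # skip an event in a
                          H[i][j - 1] + gap)     # skip an event in b
            best = max(best, H[i][j])
    return best

# Two trajectories sharing the segment B-C-D: three matches at +2 each
assert smith_waterman(list("ABCD"), list("XBCDY")) == 6
```

In a pharmacovigilance setting the event codes would stand for administrations, lab results or diagnoses, and a high local score flags patients whose trajectories share a similar episode.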
Loubier, Éloïse. "Analyse et visualisation de données relationnelles par morphing de graphe prenant en compte la dimension temporelle." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/2264/.
With worldwide exchanges, companies must face increasingly strong competition and masses of information flows. They have to remain continuously informed about innovations, competition strategies and markets, and at the same time they have to keep control of their environment. The development of the Internet and globalization reinforced this requirement and, on the other hand, provided means to collect information. Once summarized and synthesized, information is generally in a relational form. To analyze such data, graph visualization gives users a relevant means to interpret a form of knowledge which would otherwise be difficult to understand. The research we have carried out results in designing graphical techniques that allow understanding human activities, their interactions, but also their evolution, from the decisional point of view. We also designed a tool that combines ease of use and analysis precision. It is based on two complementary types of visualization: static and dynamic. The static aspect of our visualization model rests on a representation space in which the precepts of graph theory are applied. Specific semiologies, such as the choice of representation forms, granularity, and significant colors, allow better and more precise visualizations of the data set. The user being a core component of our model, our work rests on the specification of new types of functionalities which support the detection and analysis of graph structures. We propose algorithms that make it possible to target the role of the data within the structure and to analyze their environment, such as filtering, the k-core, and transitivity, to go back to the source documents, and to focus on structural specificities. One of the main characteristics of strategic data is their strong evolution.
However, statistical analysis does not make it possible to study this component, to anticipate the incurred risks, to identify the origin of a trend, or to observe the actors or terms having a decisive role in the evolving structures. With regard to dynamic graphs, our major contribution is to represent relational and temporal data at the same time, which is called graph morphing. The objective is to emphasize the significant tendencies by first representing a graph that includes all the periods and then carrying out an animation between successive visualizations of the graphs attached to each period. This process makes it possible to identify structures or events, to locate them temporally, and to make a predictive reading of them. Thus our contribution allows the representation of advanced information and more precisely the identification, analysis, and restitution of the underlying strategic structures which connect, at given moments, the actors of a domain and the key words and concepts they use, taking the evolution feature into account.
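Among the structural filters this abstract mentions (filtering, k-core, transitivity), the k-core has a compact definition: the maximal subgraph in which every node has degree at least k. A minimal sketch, independent of the tool described in the thesis, repeatedly strips low-degree nodes:

```python
def k_core(adj, k):
    """Return the node set of the k-core of an undirected graph given
    as an adjacency dict {node: set(neighbours)}: repeatedly remove
    nodes whose degree among surviving nodes is below k."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if sum(1 for u in adj[v] if u in alive) < k:
                alive.discard(v)
                changed = True
    return alive

# A triangle {1, 2, 3} plus a pendant node 4: the 2-core is the triangle
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert k_core(g, 2) == {1, 2, 3}
```

In strategic-data graphs this filter keeps only densely interconnected actors or terms, which is why it serves as a focusing tool in the visualizations described above.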
Allanic, Marianne. "Gestion et visualisation de données hétérogènes multidimensionnelles : application PLM à la neuroimagerie." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2248/document.
The neuroimaging domain is confronted with issues in analyzing and reusing the growing amount of heterogeneous data it produces. Data provenance is complex – multi-subject, multi-method, multi-temporality – and the data are only partially stored, restricting multimodal and longitudinal studies. In particular, functional brain connectivity is studied to understand how areas of the brain work together. Raw and derived imaging data must be properly managed along several dimensions, such as acquisition time, time between two acquisitions, or subjects and their characteristics. The objective of the thesis is to allow the exploration of complex relationships between heterogeneous data, which is addressed in two parts: (1) how to manage data and provenance, and (2) how to visualize structures of multidimensional data. The contribution follows a logical sequence of three propositions, which are presented after a research survey of heterogeneous data management and graph visualization. The BMI-LM (Bio-Medical Imaging – Lifecycle Management) data model organizes the management of neuroimaging data according to the phases of a study and takes into account the scalability of research thanks to specific classes associated with generic objects. The application of this model in a PLM (Product Lifecycle Management) system shows that concepts developed twenty years ago for the manufacturing industry can be reused to manage neuroimaging data. GMDs (Dynamic Multidimensional Graphs) are introduced to represent complex dynamic relationships in data, as well as the JGEX (Json Graph EXchange) format, created to store and exchange GMDs between software applications. The OCL (Overview Constraint Layout) method allows interactive and visual exploration of GMDs. It is based on the preservation of the user's mental map and the alternation of complete and reduced views of the data.
The OCL method is applied to the study of the resting-state functional brain connectivity of 231 subjects, represented by a GMD – the areas of the brain are the nodes and the connectivity measures are the edges – according to age, gender and laterality; the GMDs are computed through a processing workflow on MRI acquisitions in the PLM system. Results show two main benefits of using the OCL method: (1) identification of global trends on one or several dimensions, and (2) highlighting of local changes between GMD states.
Dhif, Imen. "Compression, analyse et visualisation des signaux physiologiques (EEG) appliqués à la télémédecine." Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066393.
Due to the large amount of EEG acquired over several days, an efficient compression technique is necessary. The lack of experts and the short duration of epileptic seizures require the automatic detection of these seizures. Furthermore, a uniform viewer is mandatory to ensure interoperability and a correct reading of transmitted EEG exams. The certified medical image coder WAAVES provides high compression ratios (CR) while ensuring image quality. Our thesis addresses three challenges: adapting the WAAVES coder to the compression of EEG signals, automatically detecting epileptic seizures in an EEG signal, and ensuring the interoperability of EEG exam displays. The study of WAAVES shows that this coder is unable to remove spatial correlation or to directly compress monodimensional signals. Therefore, we applied ICA to decorrelate the signals, a scaling step to resize decimal values, and an image construction step. To keep diagnostic quality with a PDR of less than 7%, we coded the residue. The proposed compression algorithm, EEGWaaves, achieved a CR of 56. Subsequently, we proposed a new method of EEG feature extraction based on a new calculation model of the expected energy measurement (EAM) of EEG signals. Statistical parameters were then calculated and neural networks were applied to classify and detect epileptic seizures. Our method achieved a sensitivity of up to 100% and an accuracy of 99.44%. The last chapter details the deployment of our multiplatform viewer of physiological signals, meeting the specifications established by physicians. The main role of this software is to ensure the interoperability of EEG exams between healthcare centers.
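The EEGWaaves chain itself (ICA, scaling, image construction, residue coding) is specific to the thesis. As a hedged illustration of the decorrelation step alone, this sketch performs PCA whitening, the classical preprocessing stage of ICA; ICA proper would then add a rotation of the whitened channels to maximize statistical independence:

```python
import numpy as np

def whiten(X):
    """Decorrelate multichannel signals (channels x samples): subtract
    channel means, then apply the inverse square root of the channel
    covariance so the output channels are uncorrelated with unit power."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    eigval, eigvec = np.linalg.eigh(cov)
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + 1e-12)) @ eigvec.T
    return W @ Xc

rng = np.random.default_rng(1)
# Two synthetic "channels" made correlated by a mixing matrix
s = rng.normal(size=(2, 1000))
X = np.array([[1.0, 0.0], [0.8, 0.6]]) @ s
Z = whiten(X)
C = Z @ Z.T / Z.shape[1]
assert np.allclose(C, np.eye(2), atol=1e-6)  # identity covariance
```

Removing inter-channel correlation in this way is what lets an image coder designed for spatially decorrelated data, such as WAAVES, operate efficiently on the transformed channels.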
Beaulieu, Véronique. "Étude de la visualisation géographique dans un environnement d'exploration interactive de données géodécisionnelles : adaptation et améliorations." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26354/26354.pdf.
Full textChen, Fati. "Réduction de l'encombrement visuel : Application à la visualisation et à l'exploration de données prosopographiques." Thesis, Université de Montpellier (2022-….), 2022. http://www.theses.fr/2022UMONS023.
Prosopography is used by historians to designate biographical records in order to study common characteristics of a group of historical actors through a collective analysis of their lives. Information visualization presents interesting perspectives for analyzing prosopographic data, and it is in this context that the work presented in this thesis is situated. First, we present the ProsoVis platform to analyze and navigate through prosopographic data. We describe the different needs expressed and detail the design choices as well as the different views. We illustrate its use with the Siprojuris database, which contains data on the careers of law teachers from 1800 to 1950. Visualizing so much data induces visual-clutter problems. In this context, we address the problem of overlapping nodes in a graph. Even if approaches exist, it is difficult to compare them because their respective evaluations are not based on the same quality criteria. We therefore propose a study of the state-of-the-art algorithms, comparing their results on the same criteria. Finally, we address a similar visual-clutter problem within a map and propose an agglomerative spatial clustering approach, F-SAC, which is much faster than the state-of-the-art proposals while guaranteeing the same quality of results.
De Runz, Cyril, Michel Herbin, and Frédéric Piantoni. "Imperfection, temps et espace : modélisation, analyse et visualisation dans un SIG archéologique." Reims : S.C.D. de l'Université, 2008. http://scdurca.univ-reims.fr/exl-doc/GED00000848.pdf.
Lehn, Rémi. "Un système interactif de visualisation et de fouille de règles pour l'extraction de connaissances dans les bases de données." Nantes, 2000. http://www.theses.fr/2000NANT2110.
Full textEsnard, Aurélien. "Analyse, conception et réalisation d'un environnement pour le pilotage et la visualisation en ligne de simulations numériques parallèles." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2005. http://tel.archives-ouvertes.fr/tel-00080729.
The objective of this thesis is to design and develop a software platform, called EPSN (Environnement pour le Pilotage des Simulations Numériques), for steering a parallel numerical application with visualization tools that are themselves parallel. In other words, the goal is to put at the service of scientists the capabilities of parallel visualization and, more broadly, of virtual reality (immersive environments, display walls), a step that is crucial today for the design and exploitation of full-scale complex numerical simulations. The implementation of an efficient coupling between simulation and visualization raises two major problems, which we study in this thesis and to which we aim to contribute: the efficient coordination of parallel steering operations, and the redistribution of complex data (structured grids, particle sets, unstructured meshes).
Blanchard, Frédéric, and Michel Herbin. "Visualisation et classification de données multidimensionnelles : application aux images multicomposantes." Reims : S.C.D. de l'Université, 2005. http://scdurca.univ-reims.fr/exl-doc/GED00000287.pdf.
Cantu, Alma. "Proposition de modes de visualisation et d'interaction innovants pour les grandes masses de données et/ou les données structurées complexes en prenant en compte les limitations perceptives des utilisateurs." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0068/document.
As a result of improvements in data capture and storage, recent years have seen the amount of data to be processed increase dramatically. Many studies, ranging from automatic processing to information visualization, have been carried out, but some areas are still too specific to take advantage of them. This is the case of ELectromagnetic INTelligence (ELINT). This domain must deal not only with a huge amount of data but also with complex data and usage, as well as with populations of users with less and less experience. In this thesis we focus on the use of existing and new visualization technologies to propose solutions to combined issues such as data volume and complexity. We begin by presenting an analysis of the ELINT field, which made it possible to extract the issues it must face. We then focus on visual solutions handling combinations of such issues, but existing work does not directly contain such solutions. We therefore focus on the description of visual issues and propose a characterization of them. This characterization allows us to describe existing representations and to build a recommendation tool based on how existing work solves the issues. Finally, we focus on identifying new metaphors to complete the existing work and propose an immersive representation to solve the issues of ELINT. These contributions make it possible to analyze and build on existing work and to deepen the use of immersive representations for information visualization.
Pham, Khang-Nguyen. "Analyse factorielle des correspondances pour l'indexation et la recherche d'information dans une grande base de données d'images." Phd thesis, Université Rennes 1, 2009. http://tel.archives-ouvertes.fr/tel-00532574.
Koné, Malik. "Collaviz : un prototype pour la détection et la visualisation de la dynamique collective dans les forums des MOOC." Thesis, Le Mans, 2020. http://www.theses.fr/2020LEMA1029.
Massive Open Online Courses (MOOCs) have seen their numbers increase significantly since the democratization of the Internet, and recently the COVID-19 pandemic has intensified the trend. While communication devices such as discussion forums are an integral part of the learning activities of MOOCs, there is still a lack of tools allowing instructors and researchers to guide and finely analyze the learning that takes place there. Dashboards summarizing students' activities are regularly offered to instructors, but they do not allow them to understand collective activities in the forums. From a socio-constructivist point of view, the exchanges and interactions sought by instructors in forums are essential for learning (Stephens, 2014). So far, studies have analyzed interactions in two ways: semantically but on a small scale, or statistically and on a large scale but ignoring the quality of the interactions. The scientific contribution of this thesis is an interactive approach for detecting collective activities that takes into account their temporal, semantic and social dimensions. We seek to answer the problem of detecting and observing the collective dynamics that take place in MOOC forums. By collective dynamics, we mean all the qualitative and quantitative interactions of learners in the forums and their changes over time. We want to allow instructors to intervene to encourage these activities favorable to learning. We build on studies (Boroujeni 2017, Dascalu 2017) which propose combining statistical analysis of interactions with natural language processing to study the flow of information in forums. But, unlike previous studies, our approach is not limited to global or individual-centered analysis.
We propose a method for designing indicators and dashboards allowing changes of scale and customization of views in order to support instructors and researchers in their task of detecting, observing and analyzing collective dynamics. To support our approach, we set up questionnaires and conducted semi-structured interviews with instructors. As for the evaluation of the first indicators built at each iteration of our approach, we used various data sources and formats: Coursera (CSV), Hangout (JSON), Moodle (SQL).
Jalabert, Fabien. "Cartographie des connaissances : l'intégration et la visualisation au service de la biologie : application à l'ingénierie des connaissances et à l'analyse de données d'expression de gènes." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00207602.
Lamirel, Jean-Charles. "Vers une approche systémique et multivues pour l'analyse de données et la recherche d'information : un nouveau paradigme." Habilitation à diriger des recherches, Université Nancy II, 2010. http://tel.archives-ouvertes.fr/tel-00552247.
Full textRenaud, Clément. "Conception d'un outil d'analyse et de visualisation des mèmes internet : le cas du réseau social chinois Sina Weibo." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0070/document.
We develop a data mining and visualisation toolkit to study how information is shared on online social network services. This software allows us to observe relationships between the conversational, semantic, temporal and geographical dimensions of online communication acts. Internet memes are short messages that spread quickly through the Web. Following models that remain largely unknown, they articulate personal discussions, societal debates and large communication campaigns. We analyse a set of Internet memes by applying methods from social network analysis and Chinese natural language processing to a large corpus of 200 million tweets, reflecting the overall activity on the Chinese social network Sina Weibo in 2012. An interactive visualisation interface showing networks of words, user exchanges and their projections on geographical maps provides a detailed understanding of the actual and textual aspects of each meme's spread. An analysis of hashtags in the corpus shows that the main content on Sina Weibo is largely similar to that of traditional media (advertisement, entertainment, etc.). Therefore, we decided not to consider hashtags as representative of memes, since they are mostly byproducts of well-planned strategic or marketing campaigns. Our final approach studies a dozen memes selected for the diversity of their topics: humor, political scandal, breaking news and marketing.
Pont, Mathieu. "Analysis of Ensembles of Topological Descriptors." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS436.
Topological Data Analysis (TDA) is a collection of tools for generically, robustly and efficiently revealing implicit structural patterns hidden in complex datasets. These tools make it possible to compute a topological representation for each member of an ensemble of datasets by encoding its main features of interest in a concise and informative manner. A major challenge then consists in designing analysis tools for such ensembles of topological descriptors. Several tools have been well studied for persistence diagrams, one of the most widely used descriptors. However, they suffer from a lack of specificity, which can yield identical data representations for significantly distinct datasets. In this thesis, we aim at developing more advanced analysis tools for ensembles of topological descriptors, capable of tackling the lack of discriminability of persistence diagrams and going beyond what was already available for these objects. First, we adapt to merge trees, descriptors having better specificity, the tools already available for persistence diagrams such as distances, geodesics and barycenters. Then, to go beyond the notion of average embodied by the barycenter, we study the variability within an ensemble of topological descriptors. We adapt the Principal Component Analysis framework to persistence diagrams and merge trees, resulting in a dimensionality reduction method that indicates which structures in the ensemble are most responsible for the variability. However, this framework only detects linear patterns of variability in the ensemble. To tackle this, we generalize the framework to Auto-Encoders in order to detect non-linear, i.e. more complex, patterns in an ensemble of merge trees or persistence diagrams. Specifically, we propose a new neural network layer capable of natively processing these objects.
We present applications of this work to feature tracking in a time-varying ensemble, data reduction to compress an ensemble of topological descriptors, clustering to form homogeneous groups in an ensemble, and dimensionality reduction to create a visual map indicating how the data are organized relative to each other in the ensemble.
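As an illustration of the kind of tooling involved, the optimal matching underlying distances between persistence diagrams can be sketched in a few lines. The following is a generic 2-Wasserstein computation (a textbook sketch, not the thesis's own implementation) in which unmatched points are paired with their projection on the diagonal:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein2(diag_a, diag_b):
    """2-Wasserstein distance between persistence diagrams.

    Each diagram is a (k, 2) array of (birth, death) pairs; points may be
    matched to each other or to their projection on the diagonal y = x."""
    A = np.asarray(diag_a, dtype=float).reshape(-1, 2)
    B = np.asarray(diag_b, dtype=float).reshape(-1, 2)
    n, m = len(A), len(B)
    if n == 0 and m == 0:
        return 0.0
    LARGE = 1e12
    C = np.full((n + m, m + n), LARGE)
    # squared point-to-point costs
    if n and m:
        C[:n, :m] = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    # squared distance of each point to the diagonal
    da = (A[:, 1] - A[:, 0]) ** 2 / 2.0
    db = (B[:, 1] - B[:, 0]) ** 2 / 2.0
    for i in range(n):
        C[i, m + i] = da[i]      # a_i matched to the diagonal
    for j in range(m):
        C[n + j, j] = db[j]      # b_j matched to the diagonal
    C[n:, m:] = 0.0              # diagonal-to-diagonal costs nothing
    rows, cols = linear_sum_assignment(C)
    return float(np.sqrt(C[rows, cols].sum()))
```

Identical diagrams yield a distance of 0, and a diagram matched against an empty one pays exactly the cost of collapsing each point onto the diagonal.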
Noel, David. "Une approche basée sur le web sémantique pour l'étude de trajectoires de vie." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM022/document.
The notion of trajectory is the subject of many works in computer science. The life trajectory has several peculiarities which distinguish it from the trajectories usually considered in these works. First is its temporal extent, which is the life, the existence, of the observed subject. Next is its thematic extent, potentially concerning multiple aspects of the life of an object or an individual. Finally, there is the metaphorical use of the term trajectory, which refers more to the meaning of the trajectory than to the description of a simple evolution in time and space. The life trajectory is used by experts (sociologists, urban planners, etc.) who wish to put information on individuals in perspective to better understand their choices. The motivations for studying life trajectories depend on the application and the themes considered: the relation to work and employment, family life, social life, health, residential trajectory, etc. We propose a Semantic Web based approach to study life trajectories, which supports their modeling, collection and analysis. This approach is embodied in a software architecture whose components are configurable for each application case. The architecture is based on a life trajectory ontology design pattern, as well as a model of explanatory factors for life events. To operationalize the proposed modeling, we designed algorithms that create a life trajectory ontology by exploiting the previous pattern and model. For data collection, we developed APIs to facilitate i) the construction of a model-compliant data collection interface and ii) the insertion of the collected data into a triple store. Our approach allows the representation, and hence the collection and exploitation, of multi-granular information, whether spatial, temporal or thematic.
Finally, to allow the analysis of the trajectories, we propose generic functions, which are implemented by extending the SPARQL language. The methodological approach and the proposed tools are validated on a case study on the residential choices of individuals in the Grenoble metropolitan area, highlighting the characteristics of their residential trajectories and the elements that explain them, including elements drawn from their personal and professional trajectories.
Kalathur, Ravi Kiran Reddy, and Olivier Poch. "Approche systématique et intégrative pour le stockage, l'analyse et la visualisation des données d'expression génique acquises par des techniques à haut débit, dans des tissus neuronaux = An integrated systematic approach for storage, analysis and visualization of gene expression data from neuronal tissues acquired through high-throughput techniques." Strasbourg : Université Louis Pasteur, 2008. http://eprints-scd-ulp.u-strasbg.fr:8080/920/01/KALATHUR_R_2007.pdf.
Lhuillier, Antoine. "Bundling : une technique de réduction d'occultation par agrégation visuelle et son application à l'étude de la maladie d'Alzheimer." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30307/document.
Dense and complex data visualizations suffer from occluded items, which hinders insight retrieval. This is especially the case for very large graph or trail sets. To address cluttering issues, several techniques propose to visually simplify the representation, often meeting scalability and computational speed limits. Among them, bundling techniques provide a visual simplification of node-link diagrams by spatially grouping similar items. This thesis strives to bridge the gap between the technical complexity of bundling techniques and the end user. The first aim of this thesis was to improve the understanding of graph and trail bundling techniques as a clutter reduction method for node-link diagrams of large data-sets. To do so, we created a data-based taxonomy that organizes bundling methods by the type of data they work on. From this thorough review, and based on a formal definition of path bundling, we propose a unified framework that describes the typical steps of bundling algorithms in terms of high-level operations and show how existing method classes implement these steps. In addition, we propose a description of the tasks that bundling aims to address and demonstrate them through a wide set of applications. Although many techniques exist, handling large data-sets and selectively bundling paths based on attributes is still a challenge. To answer the scalability and computational speed issues of bundling techniques, we propose a new technique which improves both. For this, we shift the bundling process from the image space to the spectral space, thereby raising computational limits. We address scalability by proposing a streaming scheme allowing the bundling of extremely large data-sets. Finally, as an application domain, we studied how bundling can be used as an efficient visualization technique for societal health challenges.
In the context of a national study on Alzheimer's disease, we focused our research on the analysis of the mental representation of geographical space in elderly people. We show that using bundling to compare the cognitive maps of subjects with and without dementia helped neuropsychologists formulate new hypotheses on the evolution of Alzheimer's disease. These new hypotheses led us to discover a potential marker of the disease years before the actual diagnosis.
Bourneuf, Lucas. "A search space of graph motifs for graph compression : from Powergraphs to triplet concepts." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S060.
Power Graph Analysis is a lossless graph compression method aiming at reducing the visual complexity of a graph. The process detects motifs, cliques and bicliques, which enables the hierarchical clustering of nodes, the grouping of edges, and ultimately a graph reduced to these groups. This thesis first exposes the formalization of the Power Graph Analysis search space, using Formal Concept Analysis as a theoretical ground to express the compression process. Because the independent treatment of the two motifs presents some caveats, we propose a unification framework, triplet concepts, which encodes a more general motif for compression. Both Power Graph Analysis and the new approach have been implemented in Answer Set Programming (ASP), a logical formalism, and we present some applications of these two approaches in bioinformatics. This thesis ends with the presentation of a high-level specification and visualization environment for graph theory.
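To illustrate the motif-detection step, here is a brute-force sketch (not the thesis's ASP implementation) that enumerates the maximal bicliques of a small graph via the Galois closure familiar from Formal Concept Analysis; the exponential enumeration is only meant for toy examples:

```python
from itertools import combinations

def maximal_bicliques(adj):
    """Enumerate maximal bicliques of a graph given as {node: set-of-neighbours}.

    Galois-closure sketch: B is the set of common neighbours of A, and (A, B)
    is kept only when A is in turn exactly the set of common neighbours of B.
    Exponential in the number of nodes -- toy-sized inputs only."""
    nodes = sorted(adj)
    found = set()
    for r in range(1, len(nodes) + 1):
        for A in combinations(nodes, r):
            B = set(nodes)
            for a in A:
                B &= adj[a]              # common neighbours of A
            if not B:
                continue
            A_closed = set(nodes)
            for b in B:
                A_closed &= adj[b]       # closure: common neighbours of B
            if set(A) == A_closed:       # (A, B) is a maximal biclique
                found.add(frozenset((frozenset(A_closed), frozenset(B))))
    return found
```

On the complete bipartite graph K(2,2) this returns the single biclique ({1,2}, {3,4}), found once from each side thanks to the symmetric pair encoding.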
Auber, David. "Outils de visualisation de larges structures de données." Bordeaux 1, 2002. http://www.theses.fr/2002BOR12607.
Full textVerbanck, Marie. "Analyse exploratoire de données transcriptomiques : de leur visualisation à l'intégration d’information extérieure." Rennes, Agrocampus Ouest, 2013. http://www.theses.fr/2013NSARG011.
We propose new exploratory statistical methodologies dedicated to the analysis of transcriptomic data (DNA microarray data). Transcriptomic data provide an image of the transcriptome, which is itself the result of phenomena of activation or inhibition of gene expression. However, this image of the transcriptome is noisy. That is why we first focus on the issue of denoising transcriptomic data, in a visualisation framework. To do so, we propose a regularised version of principal component analysis. This regularised version makes it possible to better estimate and visualise the underlying signal of noisy data. In addition, we can wonder whether knowledge of the transcriptome alone is enough to understand the complexity of the relationships between genes. That is why we propose to integrate other sources of information about genes, in an active way, into the analysis of transcriptomic data. Two major mechanisms seem to be involved in the regulation of gene expression: regulatory proteins (for instance transcription factors) and regulatory networks on the one hand, and chromosomal localisation and genome architecture on the other. First, we focus on the regulation of gene expression by regulatory proteins and propose a gene clustering algorithm based on the integration of functional knowledge about genes, provided by Gene Ontology annotations. This algorithm yields clusters of genes which have both similar expression profiles and similar functional annotations; such clusters are better candidates for interpretation. Second, we propose to link the study of transcriptomic data to chromosomal localisation in a methodology developed in collaboration with geneticists.
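The denoising idea can be sketched as follows. This is a simplified illustration of regularised PCA in which the noise variance is estimated from the discarded components and the retained singular values are shrunk accordingly (the thesis's exact estimator may differ):

```python
import numpy as np

def regularized_pca_denoise(X, n_comp):
    """Denoise X by shrinking the singular values of its leading components.

    Sketch of the regularised-PCA idea: noise variance is estimated from the
    discarded components, and each retained singular value is shrunk so that
    directions dominated by noise contribute less to the reconstruction."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    lam = s[:n_comp] ** 2
    # plug-in noise estimate: mean of the discarded eigenvalues
    sigma2 = (s[n_comp:] ** 2).mean() if len(s) > n_comp else 0.0
    # shrinkage factor in [0, 1] per retained component
    shrink = np.clip((lam - sigma2) / np.where(lam > 0, lam, 1.0), 0.0, 1.0)
    s_reg = s[:n_comp] * shrink
    return (U[:, :n_comp] * s_reg) @ Vt[:n_comp] + mean
```

With noise-free low-rank data the estimated noise variance is zero, so the shrinkage is the identity and the data are reconstructed exactly; on noisy data the reconstruction is pulled toward the signal subspace.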
Mavromatis, Sébastien. "Analyse de texture et visualisation scientifique." Aix-Marseille 2, 2001. http://www.theses.fr/2001AIX22060.
Full textBourqui, Romain. "Décomposition et Visualisation de graphes : Applications aux Données Biologiques." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2008. http://tel.archives-ouvertes.fr/tel-00421872.
The work in this thesis was applied to real data from two domains of biology: metabolic networks and gene-protein interaction networks.
Wahl, François. "Un environnement d'aide aux ingénieurs basé sur une architecture en tâches et sur un module de visualisation de courbes. Application à la conception de procédés de raffinage." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1994. http://tel.archives-ouvertes.fr/tel-00529958.
Full textBourien, Jérôme. "Analyse de distributions spatio-temporelles de transitoires dans des signaux vectoriels. Application à la détection-classification d'activités paroxystiques intercritiques dans des observations EEG." Phd thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00007178.
1. Detection of single-channel AEs. The detection method, which relies on a heuristic approach, uses a wavelet filter bank to enhance the sharp component of the AEs (generally called a "spike" in the literature). The mean value of the statistics obtained at the output of each filter is then analysed with a Page-Hinkley algorithm in order to detect abrupt changes corresponding to spikes.
2. Fusion of AEs. This procedure searches for co-occurrences between single-channel AEs using a sliding window and then forms multichannel AEs.
3. Extraction of the subsets of channels frequently and significantly activated during the multichannel AEs (called "activation sets").
4. Evaluation of the existence of a reproducible (possibly partial) temporal activation order within each activation set.
The methods proposed in each step were first evaluated using simulated signals (step 1) or Markov models (steps 2-4). The results show that the complete method is robust to the effects of false alarms. The method was then applied to signals recorded in 8 patients (each recording containing several hundred AEs). The results indicate a high reproducibility of the spatio-temporal distributions of AEs and allowed the identification of specific anatomo-functional networks.
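The Page-Hinkley test used in step 1 is a standard change-detection scheme; a minimal sketch follows (the parameter values `delta` and `lam` are illustrative, not those of the thesis):

```python
def page_hinkley(signal, delta=0.05, lam=5.0):
    """Return the index of the first detected upward change, or None.

    Sketch of the Page-Hinkley test: a cumulative deviation from the running
    mean is tracked, and an alarm is raised when it exceeds its running
    minimum by more than the threshold `lam`; `delta` absorbs small drift."""
    mean = 0.0
    cum = 0.0
    cum_min = 0.0
    for t, x in enumerate(signal, start=1):
        mean += (x - mean) / t          # running mean of the signal
        cum += x - mean - delta         # cumulative positive deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:         # abrupt increase detected
            return t - 1                # 0-based index of the alarm sample
    return None
```

On a flat signal the statistic never leaves its running minimum, so no alarm fires; an abrupt level jump triggers an alarm at the first post-jump sample.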
Do, Thanh-Nghi. "Visualisation et séparateurs à vaste marge en fouille de données." Nantes, 2004. http://www.theses.fr/2004NANT2072.
We present different cooperative approaches using visualization methods and support vector machine (SVM) algorithms for knowledge discovery in databases (KDD). Most existing data mining approaches construct the model in an automatic way; the user is not involved in the mining process. Furthermore, these approaches must be able to deal with the challenge of large datasets. Our work aims at increasing the human role in the KDD process (by way of visualization methods) and at improving the performance (execution time and memory requirements) of methods for mining large datasets. We present: parallel and distributed SVM algorithms for mining massive datasets; interactive graphical methods to explain SVM results; and cooperative approaches to involve the user more significantly in model construction.
Ammann, Lucas. "Visualisation temps réel de données à deux dimensions et demie." Strasbourg, 2010. https://publication-theses.unistra.fr/public/theses_doctorat/2010/AMMANN_Lucas_2010.pdf.
Heightfield data is now a common representation for several kinds of virtual objects. Indeed, heightfields are frequently used to represent topographical or scientific data, and 3-dimensional digitisation devices also use them to store real objects. However, this kind of data raises several issues during manipulation and especially visualisation. In this thesis, we develop simple yet efficient methods to render heightfield data, especially data from the digitisation of art paintings. In addition to the visualisation method, we propose a complete pipeline to acquire art paintings and process the data for visualisation. To generalise the proposed approach, another rendering method is described to display topographical data by combining rasterization with ray-casting. Both rendering techniques are based on an adaptive mechanism which combines several rendering algorithms to improve visualisation performance. These mechanisms are designed to avoid pre-processing of the data and to rely on straightforward rendering methods.
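The ray-casting half of such a hybrid renderer can be sketched as a fixed-step ray march over the height grid (nearest-neighbour sampling and unit grid spacing are simplifying assumptions here, not the thesis's adaptive scheme):

```python
import numpy as np

def raycast_heightfield(height, origin, direction, step=0.1, max_t=100.0):
    """March a ray through a heightfield and return the first hit point.

    Minimal sketch: advance along the ray in fixed steps and stop as soon as
    the sample drops below the terrain height at that (x, y) cell. `height`
    is a 2D array indexed by integer grid coordinates; z is up."""
    h = np.asarray(height, dtype=float)
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)            # normalise so t is a distance
    t = 0.0
    while t < max_t:
        p = o + t * d
        i, j = int(round(p[0])), int(round(p[1]))
        if 0 <= i < h.shape[0] and 0 <= j < h.shape[1] and p[2] <= h[i, j]:
            return p                     # first sample at or below the surface
        t += step
    return None                          # ray left the field without hitting
```

A production renderer would replace the fixed step with a hierarchical or adaptive traversal, which is precisely where the adaptive mechanism described above comes in.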
Wang, Nan. "Visualisation et interaction pour l’exploration et la perception immersive de données 3D." Thesis, Paris, ENMP, 2012. http://www.theses.fr/2012ENMP0090/document.
The objective in this case is not only to be realistic, but also to provide new and intelligible ways of representing models. This raises new issues in data perception. The question of the perception of complex data, especially regarding visual feedback, is an open one, and it is the subject of this work. This PhD thesis studied human perception of complex datasets in immersive virtual environments; one application is the scientific visualization of scalar values stemming from physics models, such as the temperature distribution inside a vehicle prototype. The objective of the first part is to study the perceptive limits of volumetric rendering for the display of scientific volumetric data, such as rendering a volumetric temperature distribution using a point cloud. We investigate the effect on user perception of three properties of a point-cloud volumetric rendering: point size, cloud density and near-clipping-plane position. We present an experiment in which a series of pointing tasks are proposed to a set of users; user behavior and task completion time are evaluated during the test. The study allowed us to choose the most suitable combination of these properties, and provided guidelines for volumetric data representation in immersive VR systems. In the second part of our work, we evaluate one interaction method and four display techniques for exploring volumetric datasets in immersive virtual reality environments. We propose an approach based on displaying a subset of the volumetric data as isosurfaces, with interactive manipulation of the isosurfaces allowing the user to look for local properties in the dataset. We also studied the influence of four different rendering techniques for isosurface rendering in a virtual reality system. The study is based on a search-and-point task in a 3D temperature field; user precision, task completion time and user movement were evaluated during the test.
The study allowed us to choose the most suitable rendering mode for isosurface representation, and provided guidelines for data exploration tasks in immersive environments.
Wang, Nan. "Visualisation et interaction pour l'exploration et la perception immersive de données 3D." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00821004.
Full textDa, Costa David. "Visualisation et fouille interactive de données à base de points d'intérêts." Tours, 2007. http://www.theses.fr/2007TOUR4021.
In this thesis, we address the problem of visual data mining. Existing approaches are generally specific to particular types of data, and a long time must be spent analyzing the results in order to get an answer about the shape of the data. We have developed an interactive visualization environment for data exploration using points of interest. This tool visualizes all types of data and is generic because it relies on a single similarity measure. Such methods must be able to deal with large data sets; we therefore also sought to improve the performance of our visualization algorithms, and managed to represent one million data items. We further extended our tool to data clustering. Most existing data clustering methods work in an automatic way, with the user not involved in the process. We try to involve the user more significantly in the data clustering process in order to improve his comprehension of the results.
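A point-of-interest layout of this kind can be sketched as follows; this generic version (not the thesis's exact method) places the POIs evenly on a circle and draws each item at the similarity-weighted barycentre of the POI positions:

```python
import math

def poi_layout(similarities):
    """Lay out data items inside a circle of points of interest (POIs).

    `similarities` is a list of rows, one per item, each row giving the
    item's similarity to each of the k POIs. The k POIs sit evenly on the
    unit circle; higher similarity pulls the item closer to that POI."""
    k = len(similarities[0])
    poi = [(math.cos(2 * math.pi * i / k), math.sin(2 * math.pi * i / k))
           for i in range(k)]
    layout = []
    for sims in similarities:
        total = sum(sims) or 1.0          # avoid division by zero
        x = sum(s * px for s, (px, _) in zip(sims, poi)) / total
        y = sum(s * py for s, (_, py) in zip(sims, poi)) / total
        layout.append((x, y))
    return poi, layout
```

An item similar to a single POI lands on that POI; an item equally similar to all POIs lands at the centre, which is the intuition that makes such layouts readable with only a similarity measure.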
Royan, Jérôme. "Visualisation interactive de scènes urbaines vastes et complexes à travers un réseau." Rennes 1, 2005. http://www.theses.fr/2005REN1S013.
Full textBoudjeloud-Assala, Baya Lydia. "Visualisation et algorithmes génétiques pour la fouille de grands ensembles de données." Nantes, 2005. http://www.theses.fr/2005NANT2065.
We present cooperative approaches using interactive visualization methods and automatic dimension selection methods for knowledge discovery in databases. Most existing data mining methods work in an automatic way; the user is not involved in the process. We try to involve the user more significantly in the data mining process in order to improve his confidence in, and comprehension of, the obtained models or results. Furthermore, as the size of data sets is constantly increasing, these methods must be able to deal with large data sets, and we try to improve the performance of the algorithms on high dimensional data. We developed a genetic algorithm for dimension selection with a distance-based fitness function for outlier detection in high dimensional data sets. This algorithm uses only a few dimensions to find the same outliers as in the whole data set and can easily treat high dimensional data sets. The number of dimensions used being low enough, it is also possible to use visualization methods to explain and interpret the results of the outlier detection algorithm. The domain expert can then, for example, qualify a detected element as a true outlier or simply an error. We also developed an evaluation measure for dimension selection in unsupervised classification and outlier detection. This measure enables us to find the same clusters as in the data set with all its dimensions, as well as clusters containing very few elements (outliers). Visual interpretation of the results shows the dimensions involved, which are considered relevant and interesting for clustering and outlier detection. Finally, we present a semi-interactive genetic algorithm that involves the user more significantly in the selection and evaluation steps of the algorithm.
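The genetic-algorithm idea can be sketched as follows; this toy version (population size, operators and the exact fitness are illustrative assumptions, not the thesis's implementation) scores a dimension subset by the largest nearest-neighbour distance it exposes:

```python
import random
import numpy as np

def outlier_fitness(X, mask):
    """Fitness of a dimension subset: the largest nearest-neighbour distance.

    A subset scores high when, in the selected subspace, some point sits far
    from every other point, i.e. the subspace exposes an outlier."""
    dims = np.flatnonzero(mask)
    if len(dims) == 0:
        return -1.0
    S = X[:, dims]
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # ignore self-distances
    return float(np.sqrt(d2.min(axis=1)).max())

def ga_select_dims(X, pop_size=20, gens=30, p_mut=0.1, seed=0):
    """Toy genetic algorithm selecting dimensions that expose outliers."""
    rng = random.Random(seed)
    p = X.shape[1]
    pop = [[rng.randint(0, 1) for _ in range(p)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda m: outlier_fitness(X, m), reverse=True)
        pop = scored[: pop_size // 2]            # elitist truncation selection
        while len(pop) < pop_size:
            a, b = rng.sample(pop[: pop_size // 2], 2)
            cut = rng.randrange(1, p) if p > 1 else 1
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [1 - g if rng.random() < p_mut else g for g in child]
            pop.append(child)
    return max(pop, key=lambda m: outlier_fitness(X, m))
```

On data where one dimension hides an extreme value, the returned mask keeps that dimension, since any subset containing it dominates the fitness.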
Kaba, Bangaly. "Décomposition de graphes comme outil de regroupement et de visualisation en fouille de données." Clermont-Ferrand 2, 2008. http://www.theses.fr/2008CLF21871.
Full textPietriga, Emmanuel. "Langages et techniques d'interaction pour la visualisation et la manipulation de masses de données." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00709533.
Full textHayat, Khizar. "Visualisation 3D adaptée par insertion synchronisée de données cachées." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2009. http://tel.archives-ouvertes.fr/tel-00400762.
Full textGros, Pierre-Emmanuel. "Etude et conception d'une plate-forme d'intégration et de visualisation de données génomiques et d'outils bioinformatiques." Paris 11, 2006. http://www.theses.fr/2006PA112139.
At the beginning of this millennium, the efforts of industry and academia made a first version of the sequencing of the human genome available. Opening one of these sequence files, the reader finds a text of several million characters "a", "t", "g" and "c", each symbolizing one of the four bases which constitute DNA. This sequence of letters highlights how poorly we still understand DNA. In order to better tackle this language, many databases of DNA sequences, annotations and experiments have been built, and several information-processing tools have been written. The first part of this thesis addresses the problem of integrating bioinformatics tools. The approach adopted is to embed a distributed architecture within the database engine itself. The other facet of integration concerns data coming from various biological databases. More precisely, our goal is to let a user integrate personal data (coming from an Excel file, a text file, ...) with the data of "institutional" databases such as those of the NCBI or SwissProt. Lastly, we propose a semantic integration tool called "lysa". This tool explores a database not through the structure of the base but through the data within. The purpose of this exploration is to make it possible for the user to find "semantic" links between data.