Dissertations / Theses on the topic 'Fusion de données géospatiales'
Cherif, Mohamed Abderrazak. "Alignement et fusion de cartes géospatiales multimodales hétérogènes." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ5002.
The surge in data across diverse fields creates a pressing need for advanced techniques to merge and interpret this information. With a special emphasis on compiling geospatial data, this integration is crucial for unlocking new insights from geographic data, enhancing our ability to map and analyze trends spanning different locations and environments with greater fidelity and reliability. Existing techniques have made progress on data fusion; however, challenges persist in fusing and harmonizing data from different sources, scales, and modalities. This research presents a comprehensive investigation into the challenges and solutions in vector map alignment and fusion, focusing on developing methods that enhance the precision and usability of geospatial data. We explored and developed three distinct methodologies for polygonal vector map alignment: ProximityAlign, which excels in precision within urban layouts but faces computational challenges; optical-flow deep-learning-based alignment, noted for its efficiency and adaptability; and epipolar-geometry-based alignment, effective in data-rich contexts but sensitive to data quality. Additionally, our study examined linear feature map alignment, emphasizing the importance of precise alignment and feature attribute transfer, and pointing towards richer, more informative geospatial databases by adapting the ProximityAlign approach to linear features such as fault traces and road networks.
The fusion aspect of our research introduces a pipeline that merges polygonal geometries using space partitioning, non-convex optimization over a graph data structure, and geometrical operations to produce a reliable fused map that harmonizes the input vector maps while maintaining their geometric and topological integrity. In practice, the developed framework has the potential to improve the quality and usability of integrated geospatial data, benefiting applications such as urban planning, environmental monitoring, and disaster management. This study not only advances theoretical understanding in the field but also provides a solid foundation for practical applications in managing and interpreting large-scale geospatial datasets.
Noël, de Tilly Antoine. "Le raisonnement à base de logique propositionnelle à l'appui de la fusion et de la révision de bases de données géospatiales." Master's thesis, Université Laval, 2007. http://hdl.handle.net/20.500.11794/19730.
The objective of this thesis is to compare a qualitative reasoning approach based on Prolog with another based on ASP. Our principal research question was the following: can the Smodels reasoning engine, which supports advanced non-monotonic reasoning and introduces the stable-model concept, solve ontological consistency-checking problems as well as revision problems in a geomatics context? To answer this question, we carried out a series of tests on a cross-section of the National Topographic Database (NTDB). In light of the results obtained, this approach proved very effective and contributes to improving the consistency of geospatial information and, in turn, spatial reasoning.
Mora, Brice. "Cartographie de paramètres forestiers par fusion évidentielle de données géospatiales multi-sources application aux peuplements forestiers en régénération et feuillus matures du Sud du Québec." Thèse, Université de Sherbrooke, 2009. http://savoirs.usherbrooke.ca/handle/11143/2803.
Leroux, Boris. "Fusion de données LiDAR et photographiques pour le géoréférencement direct d’un lever topographique par micro-drone aérien." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1024.
Within the development of systems dedicated to mobile mapping, spatial data production plays a growing part. For users, such spatial data is particularly valuable for computing the digital models used to manage efficiently the resources they are responsible for. To collect this data, the Hélicéo company offers an aerial solution that can embed a camera or a LiDAR sensor. Like every platform dedicated to dynamic mapping, this system needs to georeference the collected data in a coordinate reference frame. Most mobile mapping systems perform direct georeferencing using a trajectory determined from a GNSS receiver coupled with an inertial measurement unit (IMU). Although this method is well known and operational for Airborne Laser Scanning (ALS) and Mobile Mapping Systems (MMS), it is hard to transpose to lightweight platforms such as drones, which require special payloads. To suit these constraints, drones use Micro-Electro-Mechanical Systems (MEMS), which are light, compact, and require little power. However, the accuracy of the attitude computed from these MEMS sensors is degraded compared with tactical-grade sensors, so it does not reach the accuracy level expected for topographic surveys. The goal of our study was to establish and validate a new methodology based on a camera and visual odometry (VO). This thesis describes the theoretical approach for computing attitude from images taken by a camera, and how LiDAR points are georeferenced using a GNSS receiver coupled with VO. In its second part, the thesis describes the different experiments and the process we followed to validate this method, along with a comparison against the traditional GNSS/IMU method.
Martin-Lac, Victor. "Aerial navigation based on SAR imaging and reference geospatial data." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0400.
We seek the algorithmic means of determining the kinematic state of an aerial device from an observation SAR image and reference geospatial data that may be SAR, optical or vector. We determine a transform that relates the observation and reference coordinates and whose parameters are the kinematic state. We follow three approaches. The first is based on detecting and matching structures such as contours. We propose an iterative closest point algorithm and demonstrate how it can serve to estimate the full kinematic state. We then propose a complete pipeline that includes a learned multimodal contour detector. The second approach is based on a multimodal similarity metric, i.e. a means of measuring the likelihood that two local patches of geospatial data represent the same geographic point. We determine the kinematic state under which the SAR image is most similar to the reference geospatial data. The third approach is based on scene coordinate regression. We predict the geographic coordinates of random image patches and infer the kinematic state from these predicted correspondences. In this approach, however, we do not address the fact that the modalities of the observation and the reference differ.
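The iterative closest point (ICP) step mentioned in this abstract can be illustrated with a generic textbook sketch (this is not the thesis's actual algorithm, which matches contours across modalities; here a plain rigid 2D point-set variant, with all names illustrative):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Align 2D point set `src` onto `dst` with a rigid transform (R, t),
    alternating nearest-neighbour matching and a Kabsch (SVD) fit."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        # brute-force nearest neighbours (fine for small point sets)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Kabsch: optimal rotation between the two centred sets
        mu_c, mu_m = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T
        # compose the incremental update with the running estimate
        R, t = dR @ R, dR @ t + mu_m - dR @ mu_c
    return R, t
```

Starting from a rough initial guess, matching and fitting reinforce each other until the transform stabilises; the thesis estimates the full kinematic state this way rather than a plane transform.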
Devillers, Rodolphe. "Conception d'un système multidimensionnel d'information sur la qualité des données géospatiales." Phd thesis, Université de Marne la Vallée, 2004. http://tel.archives-ouvertes.fr/tel-00008930.
Ugon, Adrien. "Fusion symbolique et données polysomnographiques." Paris 6, 2013. http://www.theses.fr/2013PA066187.
In recent decades, the medical examinations required to diagnose and guide treatment have become more and more complex. It is now common practice to use several examinations from different medical specialties to study a disease through multiple approaches and thus describe it more deeply. Interpretation is difficult because the data are both heterogeneous and highly specific, requiring skilled domain knowledge to analyse. In this context, symbolic fusion appears to be a possible solution. Indeed, it has proved very effective at treating problems with low or high levels of information abstraction in order to develop high-level knowledge. This thesis demonstrates the effectiveness of symbolic fusion applied to polysomnographic data for the development of a diagnosis-support tool for Sleep Apnea Syndrome. Proper diagnosis of this sleep disorder requires polysomnography, a medical examination that simultaneously records various physiological parameters during a night. Visual interpretation is tedious and time-consuming, and there is commonly some disagreement between scorers; the use of a reliable support-to-diagnosis tool increases consensus. This thesis presents the stages of the development of such a tool.
Hamaina, Rachid. "Enrichissement des référentiels géographiques pour la caractérisation morphologique des tissus urbains." Ecole centrale de Nantes, 2013. http://www.theses.fr/2013ECDN0030.
The growing availability of geographic databases makes them a widely used public product whose uses extend to cover most spatial issues. These databases are generic, usually not suitable for every potential use, and semantically poor. Semantic enrichment and knowledge extraction from these data can therefore be very useful for thematic applications. Here we explore geographic databases to extract knowledge useful for urban morphology characterization. A very simple city model of the urban environment can be extracted from geographic databases, formed by a 1D street-network layer and a (2D or 2.5D) building-footprint layer. Morphology characterization consists of exploring the urban spatial macro-structure from the street network and analysing the urban spatial micro-structure from building footprints. The macro-structure analysis is based on detecting geometric patterns that can be associated with urban fabric types, independently of any urban context and history. The micro-structure analysis is based, first, on the construction of a hierarchical, multi-level urban model suited to morphological issues; second, morphological properties are formalized and translated into a set of indicators used in a clustering process to delineate morphologically homogeneous urban areas. Finally, the hierarchical model is used to develop a new neighbourhood-aware density characterization, density being the morphological property most used in urban design and analysis. These methods of urban morphology characterization are developed in a GIS environment and can be applied to very large datasets. They use semantically poor data, are reproducible independently of urban context, and improve on classic characterizations, which are mainly descriptive and not easily made objective.
Fischer, Nicolas. "Fusion statistique de fichiers de données." Paris, CNAM, 2004. http://www.theses.fr/2004CNAM0483.
The objective of statistical data fusion is to bring together data from distinct sources. When data are incomplete in files, fusion methodologies enable the transfer of information, i.e. variables of interest available in so-called donor files, into a recipient file. This technique relies on the presence of variables common to the different files. We introduce new models for qualitative data involving logistic and PLS regression; the latter is of special interest when dealing with highly correlated data. These methods have been successfully tested on real data and validated according to several criteria assessing the quality of the statistical analysis. Finally, a decision-making process was operationally validated using the lift indicator.
Kurdej, Marek. "Exploitation of map data for the perception of intelligent vehicles." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2174/document.
This thesis is situated in the domains of robotics and data fusion, and concerns geographic information systems. We study the utility of adding digital maps, which model the urban environment in which the vehicle evolves, as a virtual sensor improving perception results. Indeed, maps contain a phenomenal quantity of information about the environment: its geometry, topology, and additional contextual information. In this work, we extract road-surface geometry and building models in order to deduce the context and characteristics of each detected object. Our method is based on an extension of occupancy grids: evidential perception grids. It makes it possible to model explicitly the uncertainty related to map and sensor data, and the approach thereby also has the advantage of representing homogeneously data originating from various sources: lidar, camera, or maps. Maps are handled on equal terms with physical sensors. This approach allows us to add geographic information without attaching undue importance to it, which is essential in the presence of errors. In our approach, the information-fusion result, stored in a perception grid, is used to predict the state of the environment at the next instant. Estimating the characteristics of dynamic elements does not satisfy the static-world hypothesis, so the level of certainty attributed to these pieces of information must be adjusted by applying temporal discounting. Because existing methods are not well suited to this application, we propose a family of discount operators that take into account the type of information handled. The algorithms studied have been validated through tests on real data: we developed prototypes in Matlab and C++ software based on the Pacpus framework, and we present the results of experiments performed in real conditions.
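The evidential (Dempster-Shafer) fusion underlying these perception grids can be sketched for a single cell. The following is an illustrative sketch only, not the thesis's implementation: two mass functions over the frame {free, occupied} are combined with Dempster's rule, with `'FO'` carrying the ignorance mass; all names and numbers are assumptions:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {free, occupied} with
    Dempster's rule; 'FO' carries the ignorance (free-or-occupied) mass."""
    # conflicting mass: one source says free where the other says occupied
    k = m1['F'] * m2['O'] + m1['O'] * m2['F']
    if k >= 1.0:
        raise ValueError("sources are in total conflict")
    norm = 1.0 - k
    return {
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / norm,
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / norm,
        'FO': m1['FO'] * m2['FO'] / norm,
    }

# e.g. a lidar cell reading fused with a (less certain) map prior
lidar = {'F': 0.1, 'O': 0.6, 'FO': 0.3}
map_prior = {'F': 0.5, 'O': 0.1, 'FO': 0.4}
fused = dempster_combine(lidar, map_prior)
```

Keeping an explicit ignorance mass is what lets the map contribute evidence without being given undue weight, and the discounting the abstract describes amounts to shifting mass from 'F'/'O' toward 'FO' over time.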
Lévesque, Johann. "Évaluation de la qualité des données géospatiales : approche top-down et gestion de la métaqualité." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24759/24759.pdf.
Hotte, Sylvain. "Traitements spatiaux dans un contexte de flux massifs de données." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30956.
In recent years we have witnessed a significant increase in the volume of data streams. This high volume renders traditional processing inefficient or even impossible, and interest in real-time data processing has grown as a way to derive greater value from data. Since these data are often georeferenced, it becomes relevant to offer methods that enable spatial processing on big data streams. However, spatial processing in a Big Data stream context has seldom been discussed in scientific research: the studies done so far involve persistent data, and none deals with the case where two Big Data streams are in relation. The problem is therefore to determine how to adapt spatial operators when their parameters derive from two big spatial data streams. Our general objective is to explore the characteristics that allow the development of such analyses and to offer potential solutions. Our research highlighted the factors influencing the adaptation of spatial processing in a Big Data stream context. We determined that adaptation methods can be grouped into categories according to the characteristics of the spatial operator, but also of the data itself and of how it is made available. We proposed general spatial-processing methods for each category in order to guide adaptation strategies. For one of these categories, where a binary operator has both operands coming from Big Data streams, we detailed a method allowing the use of spatial operators. To test the effectiveness and validity of the proposed method, we applied it to an intersection operator and to a proximity-analysis operator, the k nearest neighbors. These tests made it possible to check the validity and quantify the effectiveness of the proposed methods with respect to scalability, i.e. increasing the number of processing cores. Our tests also quantified the effect of varying the partitioning level on the performance of the processing flow. Our contribution will, we hope, serve as a starting point for more complex spatial-operator adaptation.
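The abstract's k-nearest-neighbors operator rests on spatial partitioning. As a single-machine illustration of the idea (not the thesis's distributed stream method; all names are illustrative), points can be bucketed into a grid and a query answered by expanding rings of cells until no unvisited cell can hold a closer point:

```python
from collections import defaultdict
import heapq, math

def build_grid(points, cell):
    """Partition 2D points into square cells keyed by integer coordinates."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def grid_knn(grid, q, k, cell):
    """k nearest neighbours of q, expanding rings of cells outward.
    Assumes the grid holds at least k points."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    best = []  # max-heap of (negated distance, point)
    r = 0
    while True:
        for i in range(cx - r, cx + r + 1):
            for j in range(cy - r, cy + r + 1):
                if max(abs(i - cx), abs(j - cy)) != r:
                    continue  # visit only the ring at radius r
                for p in grid.get((i, j), ()):
                    d = math.dist(q, p)
                    if len(best) < k:
                        heapq.heappush(best, (-d, p))
                    elif d < -best[0][0]:
                        heapq.heapreplace(best, (-d, p))
        # every point in an unvisited ring lies at least r * cell away
        if len(best) == k and -best[0][0] <= r * cell:
            return [p for _, p in sorted((-nd, p) for nd, p in best)]
        r += 1
```

In a streaming setting the same cells become partitioning keys, so that only matching partitions of the two streams need to be brought together.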
Engélinus, Jonathan. "Elaboration d'un moteur de traitement des données spatiales massives vectorielles optimisant l'indexation spatiale." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/28046.
Big data are at the heart of many scientific and economic issues, and their volume is continuously increasing. As a result, the need for management and processing solutions has become critical. Unfortunately, while most of these data have a vector spatial component, almost none of the current systems is able to manage it, and the few systems that try either do not respect the ISO standards and OGC specifications or show poor performance. The aim of this research was therefore to determine how to manage massive vector data more completely and efficiently. The objective was to find a scalable way of indexing them, ensuring their compatibility with ISO 19125 and its extensions, and making them accessible from GIS. The result is the Elcano system, an extension of the big-data processing engine Spark that provides increased performance compared with current market solutions.
Randrianarivelo, Mamy Dina. "Proposition d’un cadre conceptuel d’arrimage des savoirs géographiques locaux dans les macro-observatoires : cas de la région DIANA Madagascar." Master's thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25275.
Ranchin, Thierry. "Fusion de données et modélisation de l'environnement." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00520780.
Barra, Vincent. "Modélisation, classification et fusion de données biomédicales." Habilitation à diriger des recherches, Université Blaise Pascal - Clermont-Ferrand II, 2004. http://tel.archives-ouvertes.fr/tel-00005998.
Mathieu, Jean. "Intégration de données temps-réel issues de capteurs dans un entrepôt de données géo-décisionnel." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28019/28019.pdf.
In the last decade, the use of sensors for measuring various phenomena has greatly increased: we can now use sensors to measure GPS position, temperature, and even a person's heartbeat. This wide diversity makes sensors excellent tools for gathering data. Alongside this effervescence, analysis tools have also advanced since the creation of transactional databases, leading to a new category of tools, Business Intelligence (BI) systems, which respond to the need for global analysis of data. Data warehouses and OLAP (On-Line Analytical Processing) tools, which belong to this category, enable users to analyze big volumes of data, execute time-based queries, and build statistical graphs in a few mouse clicks. Although the various types of sensor data can surely enrich any analysis, such data require heavy integration processes to be driven into the data warehouse, the centerpiece of any decision-making process. The different data types produced by sensors, the variety of sensor models, and the ways such data are transferred remain significant obstacles to integrating sensor data streams into a geo-decisional data warehouse. Moreover, current geo-decisional data warehouses are not initially built to ingest new data at high frequency: since the performance of a data warehouse is restricted during an update, new data is usually added weekly, monthly, etc. Some data warehouses, called Real-Time Data Warehouses (RTDW), can be updated several times a day without their performance diminishing during the process, but this technology is still uncommon, costly, and in most cases considered a "beta" version. This research therefore aims to develop an approach for publishing and normalizing real-time sensor data streams and integrating them into a classic data warehouse. An optimized update strategy has also been developed so that frequent new data can be added to the analysis without affecting data warehouse performance.
Brulin, Damien. "Fusion de données multi-capteurs pour l'habitat intelligent." Thesis, Orléans, 2010. http://www.theses.fr/2010ORLE2066/document.
The smart home concept has been widely developed in recent years to propose solutions for two main concerns: optimized energy management in buildings and in-home support for elderly people. The CAPTHOM project, of which this thesis is part, was developed in this context. To respond to these problems, many sensors of different natures are used to detect human presence and to determine the position and posture of the person. In fact, no sensor alone can provide all the required information, which justifies the development of a multi-sensor system and a data-fusion method. In this project, the selected sensors are passive infrared (PIR) sensors, thermopiles, and a video camera; no sensor is carried by the person (a non-invasive system). We propose a global intelligent-sensor architecture made of four fusion modules that respectively detect human presence, locate the person in 3D, determine the posture, and help make a decision according to the application. The human-presence module fuses information from the three sensors: PIR sensors for movement, thermopiles for presence in case of immobility, and the camera to identify the detected entity. The 3D localisation of the person is achieved through receding-horizon position estimation. This method, called Visual Receding Horizon Estimation (VRHE), formulates the position-estimation problem as a nonlinear constrained optimisation problem in the image plane. The fusion module for posture determination is based on fuzzy logic; it ensures posture determination regardless of the person and of the distance from the camera. Finally, the decision module fuses the outputs of the preceding modules and makes it possible to launch alarms (elderly-people monitoring) or to command home-automation devices (lighting, heating) for building energy management.
Roy, Tania. "Nouvelle méthode pour mieux informer les utilisateurs de portails Web sur les usages inappropriés de données géospatiales." Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/30370/30370.pdf.
In the case of Web portals providing access to multiple data sets, it may be difficult for a non-expert user to assess whether data may present risks, the only information generally available being metadata. These metadata are generally presented in a technical language, and it can be difficult for a non-expert user to understand their implications for decisions. The main objective of this thesis is to propose an approach helping users and producers of geospatial data to identify and manage risks related to the planned use of data acquired through a Web portal, a posteriori of data production. The approach developed uses a series of structured questions to be answered by the user of the geospatial data; depending on the answers, risks of use can be identified. When risks of data misuse are identified, specific risk-management actions are suggested.
Bouchard, Aurélie. "Bilans d'eau et d'énergie par inversion et fusion de données." Paris 6, 2006. http://www.theses.fr/2006PA066447.
In recent years, many satellites have been launched to provide new and richer information on meteorological circulations and the associated cloud systems over oceanic surfaces. Processing these data requires new methods, in particular data-assimilation methods allowing these various kinds of data to be processed and merged. In this context MANDOPAS4D, a 4D variational data assimilation and analysis technique for retrieving 4D fields, has been developed. Sensitivity tests on a dataset synthesized from a non-hydrostatic mesoscale numerical simulation are discussed in a first part. The present work then illustrates the method on a real dataset devoted to the study of hurricane Bret, located over the Gulf of Mexico from 18 to 23 August 1999. Dynamical and thermodynamical quantities, together with moisture and energy budgets, are thereby documented at the mesoscale and the synoptic scale.
Everaere, Patricia. "Contribution à l'étude des opérateurs de fusion : manipulabilité et fusion disjonctive." Artois, 2006. http://www.theses.fr/2006ARTO0402.
Propositional merging operators aim at defining the beliefs/goals of a group of agents from their individual beliefs/goals, represented by propositional formulae. Two widely used criteria for comparing existing merging operators are rationality and computational complexity. Our claim is that those two criteria are not enough, and that a further one has to be considered as well, namely strategy-proofness. A merging operator is said to be non-strategy-proof if some agent involved in the merging process can change the result of the merging, so as to bring it closer to her expected one, by lying about her true beliefs/goals. A non-strategy-proof merging operator gives no guarantee that the results it provides are adequate to the beliefs/goals of the group, since it does not incite the agents to report their true beliefs/goals. A first contribution of this thesis is a study of the strategy-proofness of existing propositional merging operators, showing that no existing operator fully satisfies the three criteria under consideration: rationality, complexity, and strategy-proofness. Our second contribution consists of two new families of disjunctive merging operators, i.e., operators ensuring that the result of the merging process entails the disjunction of the information given at the start. The operators from both families are shown to be valuable alternatives to formula-based merging operators, which are disjunctive but exhibit high computational complexity, are not strategy-proof, and are not fully rational.
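For readers unfamiliar with propositional merging, a standard distance-based operator (the classical sum-of-Hamming-distances scheme often used as a comparison point in this literature, not one of the thesis's new disjunctive families) can be sketched over explicit models; all names here are illustrative:

```python
from itertools import product

def hamming(u, v):
    """Number of atoms on which two interpretations differ."""
    return sum(a != b for a, b in zip(u, v))

def merge_sum(bases, n):
    """Distance-based merging: over all interpretations of n atoms, keep
    those minimising the sum, across agents, of the Hamming distance to
    each agent's closest model."""
    worlds = list(product([0, 1], repeat=n))
    cost = lambda w: sum(min(hamming(w, m) for m in base) for base in bases)
    best = min(cost(w) for w in worlds)
    return [w for w in worlds if cost(w) == best]
```

With two agents whose only model is (1,1) and one whose only model is (0,0), the majority view (1,1) wins; it is precisely such operators whose strategy-proofness the thesis examines.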
Alibay, Manu. "Fusion de données capteurs étendue pour applications vidéo embarquées." Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0032/document.
This thesis deals with fusion between camera and inertial-sensor measurements in order to provide a robust motion-estimation algorithm for embedded video applications, chiefly on smartphones and tablets. We present a real-time, online 2D camera-motion estimation algorithm combining inertial and visual measurements. The proposed algorithm extends the preemptive RANSAC motion-estimation procedure with inertial sensor data, introducing a dynamic Lagrangian hybrid scoring of the motion models to make the approach adaptive to various image and motion contents. All these improvements come at little computational cost, keeping the complexity of the algorithm low enough for embedded platforms. The approach is compared with purely inertial and purely visual procedures. A novel approach to real-time hybrid monocular visual-inertial odometry for embedded platforms is then introduced, in which the interaction between vision and inertial sensors is maximized by performing fusion at multiple levels of the algorithm. Through tests conducted on specifically acquired sequences with ground-truth data, we show that our method outperforms classical hybrid techniques in ego-motion estimation.
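The hypothesise-and-score loop that preemptive RANSAC extends can be illustrated with the plainest possible variant (vanilla RANSAC for a pure 2D translation model; the thesis's preemptive, inertially-aided scoring is not reproduced, and every name here is illustrative):

```python
import random

def ransac_translation(src, dst, iters=100, tol=0.5):
    """Robustly estimate a 2D translation from point correspondences:
    score one-point hypotheses by inlier count, keep the best-supported."""
    best_t, best_inliers = None, -1
    rng = random.Random(0)  # fixed seed for repeatability
    for _ in range(iters):
        i = rng.randrange(len(src))
        # hypothesis from a single sampled correspondence
        t = (dst[i][0] - src[i][0], dst[i][1] - src[i][1])
        inliers = sum(
            1 for (sx, sy), (dx, dy) in zip(src, dst)
            if abs(sx + t[0] - dx) < tol and abs(sy + t[1] - dy) < tol)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t
```

Preemptive RANSAC scores a fixed set of hypotheses against the data in rounds, discarding the worst each round, and the thesis modifies that scoring with inertial measurements.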
Dumas, Marc-André. "Application du calcul d'incidence à la fusion de données." Thesis, Université Laval, 2006. http://www.theses.ulaval.ca/2006/23759/23759.pdf.
Rahier, Thibaud. "Réseaux Bayésiens pour fusion de données statiques et temporelles." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM083/document.
Prediction and inference on temporal data are very frequently performed using time-series data alone. We believe these tasks could benefit from leveraging the contextual metadata associated with time series, such as location, type, etc. Conversely, tasks involving prediction and inference on metadata could benefit from information held within time series. However, there exists no standard way of jointly modeling time-series data and descriptive metadata. Moreover, metadata frequently contains highly correlated or redundant information and may contain errors and missing values. We first consider the problem of learning the inherent probabilistic graphical structure of metadata as a Bayesian network. This has two main benefits: (i) once structured as a graphical model, metadata is easier to use to improve tasks on temporal data, and (ii) the learned model enables inference tasks on metadata alone, such as missing-data imputation. However, Bayesian network structure learning is a tremendous mathematical challenge involving an NP-hard optimization problem. We present a tailor-made structure-learning algorithm, inspired by novel theoretical results, that exploits the (quasi-)deterministic dependencies typically present in descriptive metadata. This algorithm was tested on numerous benchmark datasets and on industrial metadatasets containing deterministic relationships. In both cases it proved significantly faster than the state of the art, and on industrial data it even found better-performing structures; moreover, the learned Bayesian networks are consistently sparser and therefore more readable. We then focus on designing a model that includes both static (meta)data and dynamic data. Taking inspiration from state-of-the-art probabilistic graphical models for temporal data (dynamic Bayesian networks) and from our approach to metadata modeling, we present a general methodology for jointly modeling metadata and temporal data as a hybrid static-dynamic Bayesian network. We propose two main algorithms for this representation: (i) a learning algorithm which, while optimized for industrial data, generalizes to any task of static and dynamic data fusion, and (ii) an inference algorithm enabling both the usual tasks on temporal or static data alone and tasks using the two types of data. We then provide results on diverse cross-field applications, such as forecasting, metadata replenishment from time series, and alarm-dependency analysis, using data from some of Schneider Electric's challenging use cases. Finally, we discuss some of the notions introduced during the thesis, including ways to measure the generalization performance of a Bayesian network by a score inspired by the cross-validation procedure of supervised machine learning, and we propose various extensions to the algorithms and theoretical results presented in the previous chapters, together with some research perspectives.
Makhoul, Abdallah. "Réseaux de capteurs : localisation, couverture et fusion de données." Besançon, 2008. http://www.theses.fr/2008BESA2025.
This thesis tackles the problems of localization, coverage and data fusion in randomly deployed sensor networks. First, we introduce a novel approach for node localization. It is based on a single mobile beacon aware of its position; sensor nodes receiving beacon packets are able to locate themselves. The mobile beacon follows a defined Hilbert curve. We then exploit the localization phase to construct sets of active nodes that ensure, as much as possible, coverage of the zone. To optimize energy consumption, we construct disjoint sets of active nodes such that only one set is active at any moment, while ensuring both network connectivity and area coverage. We present and study four different scheduling methods. In a third step, we study the problem of data fusion in sensor networks, in particular the "average consensus" problem, which allows the nodes of a sensor network to track the average of n sensor measurements. To compute the average, we propose an iterative asynchronous algorithm that is robust to dynamic topology changes and to the loss of messages. To show the effectiveness of the proposed algorithms, we conducted a series of simulations based on OMNeT++.
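The "average consensus" problem mentioned above can be sketched with the standard synchronous textbook iteration, in which each node repeatedly moves toward its neighbors' values; the thesis proposes an asynchronous variant robust to topology changes and message loss, so this simplified version only shows the underlying idea:

```python
def average_consensus(values, neighbors, eps=0.2, iters=200):
    """Synchronous consensus: each node nudges its value toward its
    neighbors'. On a connected graph with a small enough step eps,
    all values converge to the global mean of the initial readings."""
    x = list(values)
    for _ in range(iters):
        # x[j] inside the comprehension still refers to the previous round
        x = [
            xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)
        ]
    return x

# Line graph 0-1-2-3 with initial measurements; the true mean is 5.0.
readings = [2.0, 4.0, 6.0, 8.0]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = average_consensus(readings, nbrs)
```
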
Burger, Brice. "Fusion de données audio-visuelles pour l'interaction homme-robot." Phd thesis, Toulouse 3, 2010. http://thesesups.ups-tlse.fr/916/.
In the framework of assistance robotics, this PhD aims at merging two channels of information (visual and auditory) potentially available on a robot. The goal is to complete and/or confirm data that a single channel could have supplied, in order to perform advanced interaction whose goal is to jointly interpret speech and gesture, in particular for the use of spatial references. In this thesis, we first describe the speech part of this work, which consists of an embedded recognition and interpretation system for continuous speech. Then comes the vision part, composed of a visual multi-target tracker that tracks, in 3D, the head and the two hands of a human in front of the robot, and a second tracker for head orientation. The outputs of these trackers feed the gesture recognition system described later. We continue with the description of a module dedicated to the fusion of the outputs of these information sources in a probabilistic framework. Last, we demonstrate the interest and feasibility of such a multimodal interface through demonstrations on the LAAS-CNRS robots. All the modules described in this thesis run in quasi-real time on these real robotic platforms.
Salmeron-Quiroz, Bernardino Benito. "Fusion de données multicapteurs pour la capture de mouvement." Phd thesis, Grenoble 1, 2007. http://www.theses.fr/2007GRE10062.
This thesis deals with motion capture (MoCap), whose goal is to acquire the attitude of the human body; in our case, the arm and the leg are considered. MoCap trackers are made of software and hardware parts that allow acquisition of the movement of an object or a human in space, in real time or offline. Many MoCap systems exist, but they require an adaptation of the environment. In this thesis, a low-cost, low-weight attitude central unit (ACU, namely a tri-axis magnetometer and a tri-axis accelerometer) is used. This attitude central unit was developed within CEA-LETI. In this work, we propose different algorithms to estimate the attitude and the linear accelerations of a rigid body. For the rotation parametrization, the unit quaternion is used. First, the attitude and the accelerations (the 6-DOF case) are estimated from the measurements provided by the ACU via an optimization technique. The motion capture of articulated chains (arm and leg) is also studied: with ad hoc assumptions on the accelerations at the pivot joints, the orientation of the segments as well as the accelerations at particular points of the segments can be estimated. The different approaches proposed in this work have been evaluated with simulated and real data.
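As a hedged illustration of what the accelerometer part of such an ACU provides (the thesis estimates a full quaternion attitude by optimization, also using the magnetometer; this sketch covers only the static-tilt sub-problem), roll and pitch can be recovered from the sensed gravity vector:

```python
from math import atan2, sqrt, degrees

def tilt_from_accelerometer(ax, ay, az):
    """Roll and pitch (degrees) from a static tri-axis accelerometer
    reading, assuming the only sensed acceleration is gravity."""
    roll = atan2(ay, az)
    pitch = atan2(-ax, sqrt(ay * ay + az * az))
    return degrees(roll), degrees(pitch)

# Sensor lying flat: gravity entirely on z gives zero roll and pitch.
flat = tilt_from_accelerometer(0.0, 0.0, 9.81)
# Sensor rolled 90 degrees: gravity entirely on y.
rolled = tilt_from_accelerometer(0.0, 9.81, 0.0)
```

Yaw cannot be observed from gravity alone, which is exactly why the ACU pairs the accelerometer with a magnetometer.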
Burger, Brice. "Fusion de données audio-visuelles pour l'interaction Homme-Robot." Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00494382.
Salmeron-Quiroz, Bernardino Benito. "Fusion de données multicapteurs pour la capture de mouvement." Phd thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00148577.
Manzoni, Vieira Fábio. "Fusion de données AIS et radar pour la surveillance maritime." Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0034/document.
In the maritime surveillance domain, cooperative identification and positioning systems such as AIS (Automatic Identification System) are often coupled with non-cooperative ship-observation systems such as synthetic aperture radar (SAR). In this context, the fusion of AIS and radar data can improve the detection of certain vessels and possibly identify some maritime surveillance scenarios. The first chapter introduces both the AIS and radar systems and details the data structure as well as the related signal processing. The second chapter presents the potential contribution of the joint use of raw radar and AIS data for the detection of vessels using a generalized likelihood ratio test (GLRT). Although the performance is encouraging, in practice the real-time implementation of the detector seems complicated. As an alternative, the third chapter presents a suboptimal detection method that exploits raw radar data and a positioning map of vessels obtained from the AIS system. Unlike in chapter two, in addition to simultaneous detection by both AIS and radar, the cases where only one of the systems detects an object can now be distinguished. The problem is formalized by two successive binary hypothesis tests. The results suggest that the proposed detector is less sensitive to the proximity and density of ships than a conventional radar detector. The fourth chapter presents the simulator developed to test the algorithms on different surveillance scenarios, namely a civilian ship piracy scenario, an illegal cargo transhipment, and a scenario of navigation in a dense environment.
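A minimal sketch of a GLRT, for the textbook case of a known signature with unknown amplitude in white Gaussian noise (the signature, noise values and variance below are illustrative toy data, not the thesis's radar/AIS model): maximizing the likelihood over the unknown amplitude A in y = A·s + w yields the test statistic T = (sᵀy)² / (σ² sᵀs), compared against a threshold.

```python
def glrt_statistic(y, s, sigma2):
    """GLRT for y = A*s + w with unknown amplitude A and white Gaussian
    noise w: maximizing the likelihood over A gives
    T = (s'y)^2 / (sigma2 * s's)."""
    sy = sum(si * yi for si, yi in zip(s, y))
    ss = sum(si * si for si in s)
    return (sy * sy) / (sigma2 * ss)

s = [1.0, -1.0, 1.0, -1.0, 1.0]                     # known signature
noise = [0.3, -0.5, 0.1, 0.4, -0.2]                 # fixed noise sample
y_h0 = noise                                         # H0: noise only
y_h1 = [2.0 * si + ni for si, ni in zip(s, noise)]  # H1: amplitude-2 signal
t0 = glrt_statistic(y_h0, s, 1.0)
t1 = glrt_statistic(y_h1, s, 1.0)
```
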
Coutand, Frédérique. "Reconstruction d'images en tomographie scintigraphique cardiaque par fusion de données." Phd thesis, Université Paris Sud - Paris XI, 1996. http://pastel.archives-ouvertes.fr/pastel-00730942.
Pannetier, Benjamin. "Fusion de données pour la surveillance du champ de bataille." Phd thesis, Université Joseph Fourier (Grenoble), 2006. http://tel.archives-ouvertes.fr/tel-00377247.
Valade, Aurelien. "Capteurs intelligents : quelles méthodologies pour la fusion de données embarquées ?" Thesis, Toulouse, INSA, 2017. http://www.theses.fr/2017ISAT0007/document.
The work detailed in this document is the result of a three-year collaborative effort between the LAAS-CNRS in Toulouse and MEAS-France / TE Connectivity. The goal is to develop a methodology to design smart embedded sensors with the ability to estimate physical parameters based on multi-physical data fusion. This strategy aims to integrate sensor technologies, currently dedicated to lab measurements, into low-power embedded systems working in imperfect environments. After exploring model-oriented methods, parameter estimation and Kalman filters, we detail various existing solutions upon which a valid response to multi-physical data fusion problems can be built: the Kalman filter for linear systems, and the extended Kalman filter and unscented Kalman filter for non-linear systems. We then synthesize a filter for hybrid systems having a linear evolution model and a non-linear measurement model, using the best of both worlds to obtain the best complexity/precision ratio. Once the estimation method is selected, we examine computing power and algorithmic complexity in order to identify the optimizations needed to assess the usability of our system in a low-power environment. We then present the application of the developed methodology to the case study of the UQS sensor sold by TE Connectivity. This sensor uses near-infrared spectroscopy to determine the urea concentration in a urea/water solution, in order to control the nitrogen-oxide depollution process in gasoline engines. After a presentation of the design principles, we detail the model we created to represent the system, simulate its behavior, and combine the measurement data to extract the desired concentration. During this step, we focus on the obstacles to model calibration and on compensating for deviations due to working conditions or component aging.
Based on this development, we finally designed the hybrid models addressing the nominal working cases and model re-calibration over the working life of the product. We then present the results obtained on simulated data and on real-world measurements. Finally, we enhanced the methodology with tabulated "black-box" models, which are easier to calibrate and cheaper to process. In conclusion, we reapplied our methodology to a different sensor, for motion capture, in order to survey the possible solutions and their limits.
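The hybrid-filter idea (linear evolution model, non-linear measurement model) can be sketched in one dimension: the prediction step stays a plain Kalman prediction, while the update step linearizes the measurement function as in an EKF. The exponential, absorbance-style measurement model below is a hypothetical stand-in for an NIR relation, not the actual UQS model:

```python
from math import exp

def ekf_step(x, P, z, q, r, a=1.0, b=0.5):
    """One hybrid filter step: linear random-walk prediction followed
    by an EKF update with the non-linear measurement z ~ a*exp(-b*x)."""
    x_pred, P_pred = x, P + q              # linear prediction
    h = a * exp(-b * x_pred)               # predicted measurement
    H = -a * b * exp(-b * x_pred)          # Jacobian dh/dx at the prediction
    S = H * P_pred * H + r                 # innovation variance
    K = P_pred * H / S                     # Kalman gain
    x_new = x_pred + K * (z - h)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a constant true value of 2.0 from noise-free measurements.
x, P = 0.0, 1.0
z = 1.0 * exp(-0.5 * 2.0)
for _ in range(30):
    x, P = ekf_step(x, P, z, q=0.01, r=1e-4)
```

Only the measurement line and its Jacobian depend on the non-linearity, which is what makes the linear/non-linear split cheap on an embedded target.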
El, Zoghby Nicole. "Fusion distribuée de données échangées dans un réseau de véhicules." Phd thesis, Université de Technologie de Compiègne, 2014. http://tel.archives-ouvertes.fr/tel-01070896.
Paoli, Jean-Noël. "Fusion de données spatialisées, application à la viticulture de précision." Montpellier, ENSA, 2004. https://hal-agrosup-dijon.archives-ouvertes.fr/tel-01997479.
During the last decade, spatial knowledge management has become increasingly popular in agriculture and the environment. Such data can be used to account for the spatial and temporal variability of a crop. Nevertheless, this involves aggregating all the available data in order to produce a diagnosis and propose suitable actions. The main problems are the heterogeneity of the different data sets (numeric or symbolic, with different spatial resolutions) and the imprecision and uncertainty associated with the data and their locations. To overcome these problems, we propose a method to translate all the data into qualitative data and to estimate all the studied variables at the suitable locations. Our method takes into account imprecision, uncertainty, conflicts, and lack of information. We implement a new approach both for the description and for the treatment of the data: the description of the information is based on fuzzy sets and possibility theory, and the spatial estimation process is based on a Choquet integral. Our approach is applied to precision viticulture data. Our first application is the diagnosis of zones manually delineated by experts; the second is a field segmentation intended to support distinct levels of treatment. These examples show that our approach is valid. Nevertheless, we identify some limitations: further work is needed to improve our aggregation operator and to consider all the particular aspects of spatial data treatment.
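The discrete Choquet integral used as the aggregation operator above can be sketched as follows; the two-source capacity in the example is illustrative (it rewards agreement between sources), not one calibrated on viticulture data:

```python
def choquet_integral(scores, capacity):
    """Discrete Choquet integral of scores {criterion: value in [0,1]}
    with respect to a capacity mapping frozensets of criteria to [0,1]."""
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending values
    total, prev = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        coalition = frozenset(n for n, _ in items[i:])     # criteria >= value
        total += (value - prev) * capacity[coalition]
        prev = value
    return total

# Two information sources a and b; full weight only when both agree.
cap = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.3,
    frozenset({"b"}): 0.3,
    frozenset({"a", "b"}): 1.0,
}
score = choquet_integral({"a": 0.6, "b": 0.8}, cap)
```

Unlike a weighted mean, the capacity lets the operator model interaction between sources, which is what makes it suited to conflicting or redundant spatial data.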
Ricquebourg, Vincent. "Fusion de données crédibilistes dans le cadre de l'intelligence ambiante." Valenciennes, 2008. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/79b899ff-3a94-4816-9d5e-50754ef406c4.
The Smart Home concept aims at providing contextualized services to its inhabitants. Based on a heterogeneous domestic sensor network, this new kind of smart home deduces the most suitable action to perform through the interpretation of sensor data. The first problem concerns the strong heterogeneity of the sensor domain: sensors have their own hardware and software features, and their communication standards are poorly standardized. In this thesis, our interest is context modelling, and we propose a service-oriented software architecture combining complementary and/or redundant sensors. We use the Transferable Belief Model (TBM), a variant of Dempster-Shafer theory, to merge sensor data and to take into account the uncertain nature of the information. Sensor reliability is taken into account during the merging process to down-weight a failing sensor, since a sensor failure can prevent the construction of context data. The reliability of a sensor is estimated through pairwise sensor fusion: analysis of the temporal conflict allows detection and identification of a failing sensor. We present a second method aiming at detecting temporal behaviour drift: a TBM fusion between a predicted symbolic state, estimated from a known behaviour model, and the observed symbolic state provided by the sensors is performed, and the analysis of the temporal conflict allows behaviour drifts to be detected. Finally, we present a case study where the previous approaches are implemented in cascade in order to detect a falling person.
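The TBM fusion step can be sketched with the unnormalized conjunctive rule which, unlike Dempster's normalized rule, keeps the mass assigned to the empty set as an explicit conflict measure; the sensor names and mass values below are illustrative:

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """TBM conjunctive rule: combine two mass functions whose focal
    elements are frozensets. Mass landing on the empty set is kept,
    serving as a measure of conflict between the two sources."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        out[inter] = out.get(inter, 0.0) + wa * wb
    return out

# Two sensors reporting on the presence of an inhabitant in a room.
PRESENT, ABSENT = frozenset({"present"}), frozenset({"absent"})
BOTH = PRESENT | ABSENT                       # total ignorance
m_motion = {PRESENT: 0.7, BOTH: 0.3}          # motion detector
m_door   = {ABSENT: 0.4, BOTH: 0.6}           # door contact sensor
fused = conjunctive_combination(m_motion, m_door)
conflict = fused.get(frozenset(), 0.0)        # disagreement between sensors
```

A persistently high conflict between a sensor pair is precisely the cue the thesis exploits to flag a failing sensor.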
Lachaize, Marie. "Fusion de données : approche evidentielle pour le tri des déchets." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS113.
Automatic waste sorting is a complex matter because of the diversity of the objects and of the materials present. It requires input from various and heterogeneous data. This PhD work deals with the data fusion problem arising from an acquisition device composed of three sensors, including a hyperspectral sensor in the NIR range. We first studied the benefit of using the belief function theory (BFT) framework throughout the fusion approach, using in particular conflict measures to drive the process. We first studied the BFT in the multiclass classification problem posed by hyperspectral data. We used the Error Correcting Output Codes (ECOC) framework, which consists in separating the multiclass problem into several binary ones that are simpler to solve. The questions of the ideal decomposition of the multiclass problem (coding) and of the combination of the answers coming from the binary classifiers (decoding) are still open questions. The belief function framework allows us to propose a decoding step modelling each binary classifier as an individual source of information, thanks to the possibility of handling compound hypotheses. Besides, the BFT provides indices to detect unreliable decisions, which allows a self-evaluation of the method to be performed without using any ground truth. In a second part, dealing with data fusion, we propose an evidential version of an object-based approach composed of a segmentation module and a classification module, in order to tackle the differences in scale, resolution and registration between the sensors. The objective is then to estimate a relevant spatial support corresponding to the objects while labelling them in terms of material. We propose an interactive approach with cooperation between the two modules, in a cross-validation manner.
In this way, the reliability of the labelling is evaluated at the segment level, while the classification information acts on the initial segments in order to evolve towards an object-level segmentation: consensus among the classification information within a segment or between adjacent regions allows the spatial support to progressively reach the object level.
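The ECOC coding/decoding skeleton referred to above can be sketched with standard Hamming decoding; the thesis replaces this decoding step with a belief-function model of each binary classifier, and the material classes below are illustrative:

```python
def ecoc_decode(binary_outputs, code_matrix):
    """Standard ECOC decoding: pick the class whose code word is closest
    (in Hamming distance) to the vector of binary-classifier outputs."""
    def hamming(u, v):
        return sum(ui != vi for ui, vi in zip(u, v))
    return min(code_matrix, key=lambda cls: hamming(code_matrix[cls], binary_outputs))

# One-vs-all coding for three materials, as in a waste-sorting setting.
codes = {
    "plastic": (1, 0, 0),
    "paper":   (0, 1, 0),
    "metal":   (0, 0, 1),
}
print(ecoc_decode((0, 1, 0), codes))   # prints "paper"
```

When several code words are equally close, Hamming decoding cannot express its own unreliability, which is one motivation for the evidential decoding proposed in the thesis.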
Bakillah, Mohamed. "Développement d'une approche géosémantique intégrée pour ajuster les résultats des requêtes spatiotemporelles dans les bases de données géospatiales multidimensionnelles évolutives." Thesis, Université Laval, 2007. http://www.theses.ulaval.ca/2007/24136/24136.pdf.
Ouattara, Mamadou. "Fouille de données : vers une nouvelle approche intégrant de façon cohérente et transparente la composante spatiale." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27723/27723.pdf.
In recent decades, geospatial data has become more and more present within our organizations. This has resulted in the massive storage of such information which, combined with its learning potential, gives rise to the need to learn from these data and to extract knowledge useful in supporting decision-making processes. For this purpose, several approaches have been proposed. The first was to apply existing data mining tools in order to extract knowledge from such data; but due to the specificity of geospatial information, this approach failed. From this arose the need to treat the process of extracting knowledge from geospatial data as a field in its own right, which led to Geographic Knowledge Discovery (GKD). The answer to this problem by GKD is reflected in approaches that can be categorized into two groups: the so-called pre-processing approaches and the dynamic treatment of spatial relationships. Given the limitations of these approaches, we propose a new approach that exploits existing data mining tools and can be seen as a compromise between the two previous ones. Its main objective is to support the geospatial data type during all steps of the data mining process. To do this, the proposed approach exploits the usual relationships that geospatial entities share with each other. A framework then describes how this approach supports the spatial component by combining geospatial libraries and "traditional" data mining tools.
Lévesque, Marie-Andrée. "Approche formelle pour une meilleure identification et gestion des risques d'usages inappropriés des données géodécisionnelles." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25565/25565.pdf.
Named to the honour roll (Tableau d'honneur) of the Faculté des études supérieures.
Ben, Ticha Mohamed Bassam. "Fusion de données satellitaires pour la cartographie du potentiel éolien offshore." Phd thesis, École Nationale Supérieure des Mines de Paris, 2007. http://tel.archives-ouvertes.fr/tel-00198912.
Scandaroli, Glauco Garcia. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00861858.
Fournier, Marc. "Fusion de données 3D provenant d'un profilomètre tenu à la main." Mémoire, École de technologie supérieure, 2002. http://espace.etsmtl.ca/825/1/FOURNIER_Marc.pdf.
Glauco, Garcia Scandaroli. "Fusion de données visuo-inertielles pour l'estimation de pose et l'autocalibrage." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00849384.
Izri-Lahleb, Sonia. "Architecture de fusion de données pour le suivi dynamique de véhicules." Amiens, 2006. http://www.theses.fr/2006AMIE0603.
Arnould, Philippe. "Étude de la localisation d'un robot mobile par fusion de données." Vandoeuvre-les-Nancy, INPL, 1993. http://www.theses.fr/1993INPL095N.
Phan, Duy Hung. "Fusion de données ECG et mouvements en vue d'un système ambulatoire." Grenoble INPG, 2008. http://www.theses.fr/2008INPG0172.
New technologies in electronics and informatics allow us to continually improve medical applications. This thesis lies at the border between physiological signal processing and the automatic identification of diseases. We are interested in respiration and cardiac activity, and in the fusion of information extracted by measuring their characteristics, to assist in the diagnosis of sleep apnea. Our study focused on the measurement, processing and characterization of cardiac and respiratory signals, the automatic detection of sleep apnea, and the comparison of strategies specific to each identification model. We began by studying the anatomy, the function and the conventional quantitative measures of the cardio-respiratory system. We built a simple system to record the electrocardiogram and the respiration simultaneously, and the signals were checked against reference measurements. We developed algorithms to select and extract the best parameters and to reduce the size of the input vector. Finally, on the basis of these results, an identification engine for sleep apnea was built. Our original result is the extraction of the respiratory signal, a confidence index for that signal, and the heart rate from the recordings of a single accelerometer placed on the chest. This information can help doctors diagnose conditions such as arrhythmia or respiratory diseases. The final ambulatory system has several potential applications: automatic detection of sleep apnea, an alarm and decision-support system for the doctor's diagnosis, telemedicine, etc.
Bradai, Benazouz. "Optimisation des Lois de Commande d’Éclairage Automobile par Fusion de Données." Mulhouse, 2007. http://www.theses.fr/2007MULH0863.
Night-time driving with conventional headlamps is particularly unsafe: although people drive much less at night, more than half of driving fatalities occur during this period. To reduce these figures, several automotive manufacturers and suppliers participated in the European project "Adaptive Front-lighting System" (AFS). This project aims to define new lighting functions based on adapting the beam to the driving situation, and it was to conclude in 2008 with a change in automotive lighting regulation allowing the realisation of all the new AFS functions. To that end, the partners explore the possible realisation of such new lighting functions and study their relevance and efficiency according to the driving situation, but also the dangers associated with using, for these lighting functions, information from the vehicle or from the environment. Since 2003, some vehicles have been equipped with bending lights, which take into account only the driver's actions on the steering wheel. These solutions improve visibility by directing the beam towards the inside of the bend. However, since the road layout (intersections, bends, etc.) is not always known to the driver, the performance of these solutions is limited. Embedded navigation systems, on the other hand, contain information on the road layout as well as contextual information (engineering works, road type, curve radius, speed limits, etc.). This thesis aims to optimize lighting control laws based on the fusion of navigation system information with that of embedded vehicle sensors (cameras, etc.), taking their efficiency and reliability into account.
This information fusion, applied here to decision-making, makes it possible to define driving situations and contexts of the environment in which the vehicle evolves (motorway, city, etc.) and to choose the appropriate law among the various lighting control laws developed (motorway code beam, town beam, bending light). This approach makes it possible to choose in real time, and by anticipation, between these various lighting control laws, and consequently improves the robustness of the lighting system. Two points are at the origin of this improvement. First, using the navigation system information, we developed a virtual sensor for event-based electronic horizon analysis, based on a finite state machine, allowing an accurate determination of the various driving situations; it thus mitigates the problems raised by the point-wise nature of navigation system information. Second, we developed a generic virtual sensor for determining driving situations, based on evidence theory and using a navigation system and vision. This sensor combines the confidences coming from the two sources in order to better distinguish between the various driving situations and contexts, and to mitigate the problems of each source taken independently. It also builds a confidence measure for the navigation system from some of its criteria. This generic sensor is generalizable to advanced driver assistance systems (ADAS) other than lighting, as shown by applying it to a speed-limit detection system, SLS (Speed Limit Support). The two virtual sensors developed were applied to the optimization of the lighting system (AFS) and to the SLS system. Both systems were implemented on an experimental (demonstration) vehicle and are currently operational. They were evaluated by drivers ranging from non-expert to expert, and demonstrated to car manufacturers (PSA, Audi, Renault, Honda, etc.) and at various tech days, proving their reliability during these demonstrations on open roads in various driving situations and contexts.
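The event-based electronic-horizon analysis via a finite state machine can be sketched as follows; the states, events and beam names are hypothetical simplifications, not those of the actual AFS implementation:

```python
# Hypothetical, simplified driving-context state machine: each
# (state, event) pair from the navigation horizon maps to a new state,
# and each state selects a lighting control law.
TRANSITIONS = {
    ("town",     "leave_urban_area"): "country",
    ("country",  "enter_urban_area"): "town",
    ("country",  "motorway_ramp"):    "motorway",
    ("motorway", "motorway_exit"):    "country",
}
BEAM = {"town": "town beam", "country": "code beam", "motorway": "motorway beam"}

def run(events, state="town"):
    """Feed navigation-horizon events through the FSM; events with no
    defined transition leave the state (and hence the beam) unchanged."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)
    return state, BEAM[state]

print(run(["leave_urban_area", "motorway_ramp"]))
# prints ('motorway', 'motorway beam')
```

Because the state persists between events, the FSM turns the point-wise navigation messages into a continuous driving context, which is the property the thesis exploits.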
Gesbert, Jean-Charles. "Modélisation 3D du rachis scoliotique : fusion de données et personnalisation expérimentale." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S051/document.
This thesis is part of a translational research project to improve scoliosis orthopedic brace design through the use, by an inverse method, of a simplified and personalized comprehensive biomechanical model of each scoliotic patient's trunk. It represents the first step of this project, namely developing and implementing the methods, tools and protocols allowing, on the one hand, 3D reconstruction of the external shape and internal components of the patient's trunk from biplanar X-rays (performed with a standard device) and the Model Maker system (Proteor), and on the other hand, measurement of the pressures exerted by the brace and their registration onto the reconstructed geometry. 3D modeling of the trunk with and without the brace, as well as pressure measurement, was carried out on 11 patients. The development of a common calibration device associated with a specific protocol allows data acquisition almost without displacement of the patient. Its ease of transportation and installation and its low cost, together with an acquisition time that does not penalize the patient's comfort, make its use in clinical routine possible. The use of parametric geometric models associated with prediction equations for anatomical parameters provides fast initialization of the geometries of the trunk's internal elements from a reduced number of digitized anatomical landmarks. Measurements of the pressure exerted by the brace, performed with an innovative device made of pressure-sensitive textile fibres that perfectly fits anatomical curves, highlighted significant variations in correction according to the patient's position.