Dissertations / Theses on the topic 'Data fusion techniques'

Consult the top 50 dissertations / theses for your research on the topic 'Data fusion techniques.'

1

Carter, Duane B. "Analysis of Multiresolution Data Fusion Techniques." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36609.

Full text
Abstract:
In recent years, as the availability of remote sensing imagery of varying resolution has increased, merging images of differing spatial resolution has become a significant operation in the field of digital remote sensing. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher resolution panchromatic image of the same geographic area. This study examines properties of fused images and their ability to preserve the spectral integrity of the original image. It analyzes five current data fusion techniques for three complex scenes to assess their performance. The five data fusion models used include one spatial domain model (High-Pass Filter), two algebraic models (Multiplicative and Brovey Transform), and two spectral domain models (Principal Components Transform and Intensity-Hue-Saturation). SPOT data were chosen for both the panchromatic and multispectral data sets. These data sets were chosen for the high spatial resolution of the panchromatic (10 meters) data, the relatively high spectral resolution of the multispectral data, and the low spatial resolution ratio of two to one (2:1). After the application of the data fusion techniques, each merged image was analyzed statistically, graphically, and for increased photointerpretive potential as compared with the original multispectral images. While all of the data fusion models distorted the original multispectral imagery to an extent, both the Intensity-Hue-Saturation Model and the High-Pass Filter model maintained the original qualities of the multispectral imagery to an acceptable level. The High-Pass Filter model, designed to highlight the high frequency spatial information, provided the most noticeable increase in spatial resolution.
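For orientation, the Brovey transform named above has a simple closed form: each multispectral band is scaled by the ratio of the panchromatic band to the per-pixel sum of the multispectral bands. A minimal sketch, assuming band-first arrays already resampled to a common grid (the array layout is an illustrative assumption, not taken from the thesis):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-6):
    """Brovey-transform pan-sharpening.

    ms  : (bands, H, W) low-resolution multispectral image,
          already resampled to the panchromatic grid.
    pan : (H, W) high-resolution panchromatic image.
    """
    intensity = ms.sum(axis=0) + eps      # per-pixel band sum
    return ms * (pan / intensity)         # scale each band by pan/intensity
```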
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Glaab, Enrico. "Analysing functional genomics data using novel ensemble, consensus and data fusion techniques." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12727/.

Full text
Abstract:
Motivation: A rapid technological development in the biosciences and in computer science in the last decade has enabled the analysis of high-dimensional biological datasets on standard desktop computers. However, in spite of these technical advances, common properties of the new high-throughput experimental data, like small sample sizes in relation to the number of features, high noise levels and outliers, also pose novel challenges. Ensemble and consensus machine learning techniques and data integration methods can alleviate these issues, but often provide overly complex models which lack generalization capability and interpretability. The goal of this thesis was therefore to develop new approaches to combine algorithms and large-scale biological datasets, including novel approaches to integrate analysis types from different domains (e.g. statistics, topological network analysis, machine learning and text mining), to exploit their synergies in a manner that provides compact and interpretable models for inferring new biological knowledge.
Main results: The main contributions of the doctoral project are new ensemble, consensus and cross-domain bioinformatics algorithms, and new analysis pipelines combining these techniques within a general framework. This framework is designed to enable the integrative analysis of both large-scale gene and protein expression data (including the tools ArrayMining, Top-scoring pathway pairs and RNAnalyze) and general gene and protein sets (including the tools TopoGSA, EnrichNet and PathExpand), by combining algorithms for different statistical learning tasks (feature selection, classification and clustering) in a modular fashion. Ensemble and consensus analysis techniques employed within the modules are redesigned such that the compactness and interpretability of the resulting models is optimized in addition to the predictive accuracy and robustness. The framework was applied to real-world biomedical problems, with a focus on cancer biology, providing the following main results: (1) the identification of a novel tumour marker gene in collaboration with the Nottingham Queen's Medical Centre, facilitating the distinction between two clinically important breast cancer subtypes (framework tool: ArrayMining); (2) the prediction of novel candidate disease genes for Alzheimer's disease and pancreatic cancer using an integrative analysis of cellular pathway definitions and protein interaction data (framework tool: PathExpand, collaboration with the Spanish National Cancer Centre); (3) the prioritization of associations between disease-related processes and other cellular pathways using a new rule-based classification method integrating gene expression data and pathway definitions (framework tool: Top-scoring pathway pairs); (4) the discovery of topological similarities between differentially expressed genes in cancers and cellular pathway definitions mapped to a molecular interaction network (framework tool: TopoGSA, collaboration with the Spanish National Cancer Centre). In summary, the framework combines the synergies of multiple cross-domain analysis techniques within a single easy-to-use software package and has provided new biological insights in a wide variety of practical settings.
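As a toy illustration of the consensus idea (a generic stand-in, not the thesis's ArrayMining pipeline): a hard majority vote over heterogeneous base learners. The dataset and model choices below are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a high-dimensional, small-sample dataset
X, y = make_classification(n_samples=200, n_features=500, n_informative=10)

# Consensus by hard majority vote over three heterogeneous learners
ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=2000)),
    ("rf", RandomForestClassifier(n_estimators=200)),
    ("svm", SVC()),
], voting="hard")

print(cross_val_score(ensemble, X, y, cv=5).mean())
```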
APA, Harvard, Vancouver, ISO, and other styles
3

Ansari, Abdul Wahab. "The control simulation of tactile sensors using constraint modelling techniques." Thesis, Brunel University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Delgado Prieto, Miguel. "Contributions to electromechanical systems diagnosis by means of data fusion techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/97043.

Full text
Abstract:
Electromechanical drives have traditionally found their field of application in the industrial sector. However, the use of such systems is spreading to other sectors within the field of transport, such as the automotive sector, or to the aircraft sector with the development of the concept of the More Electric Aircraft (MEA). One of the major improvements of the MEA concept is related to the actuators of the primary flight controls, where so far only electrohydraulic actuators have been considered, although the current trend is to replace them with electromechanical actuators (EMA). Widespread future use of EMA in transport systems is only possible with research and advances in algorithms for the detection and diagnosis of faults that may occur in both the electrical and mechanical parts, in order to ensure the reliability of the drive and the safety of users. During the last years, the study of electromechanical systems and of fault diagnosis under varying conditions of torque and speed has been mandatory. Although these requirements have been studied deeply by different authors, most works focus on single-fault detection. Therefore, there is a lack of diagnosis methods able to detect different kinds of faults in an electromechanical actuator. There are very few studies related to diagnosis schemes capable of identifying various faults under different operating conditions, and even fewer analyzing the whole diagnosis chain in depth to face the challenge from all possible perspectives. In this research work, the investigation of integral health monitoring schemes for electromechanical systems based on pattern recognition is proposed. In order to identify various faults under different operating conditions, the health monitoring scheme is developed from a data fusion point of view. The processing of large amounts of information enhances the pattern recognition capabilities but, in turn, requires the implementation of advanced techniques and methodologies. Therefore, this research work first proposes a review of the whole diagnosis chain, including the different stages (feature calculation, feature reduction and classification) and their methodologies and techniques. The review finishes by presenting the proposed strategies to take a step further in each diagnosis stage, proposing methodologies to be investigated which would allow a significant advance towards integral diagnosis systems. In this sense, an investigation towards a novel feature calculation methodology able to deal with non-stationary conditions is presented. Next, the feature reduction stage is covered by the proposal of collaborative methodologies combining different techniques to improve the significance of the reduced feature set; a more concrete approach is also developed with non-linear techniques, which are not commonly used. Finally, different classification structures are analyzed and a novel classification architecture is proposed for application to multi-fault diagnosis problems. Experimental analyses are presented resulting from the application of the proposed strategies to different electromechanical arrangements. The obtained results achieve high performance levels, and the proposed methodologies can be adapted to the necessary diagnostic requirements. It should be noted that the proposed contributions increase the information obtained from the system, leading to a better understanding of its behavior, and this has a direct effect on the reliability of the system operation.
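To make the three-stage diagnosis chain concrete, here is a generic sketch using stand-ins (statistical time-domain features, PCA for reduction, k-NN for classification); none of these are the specific methods the thesis develops:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def features(signal):
    """Stage 1 (feature calculation): simple statistics of one record."""
    rms = np.sqrt(np.mean(signal ** 2))
    return [signal.mean(), signal.std(), np.abs(signal).max(), rms]

rng = np.random.default_rng(0)
X = np.array([features(rng.normal(size=1024)) for _ in range(120)])
y = rng.integers(0, 3, size=120)   # three hypothetical fault classes

# Stages 2-3 (reduction + classification) chained as one pipeline
clf = make_pipeline(PCA(n_components=2), KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y)
```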
APA, Harvard, Vancouver, ISO, and other styles
5

MacEwen, Clare. "Can data fusion techniques predict adverse physiological events during haemodialysis?" Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:1ef92d5d-920d-4ff4-b368-5e892527e675.

Full text
Abstract:
Intra-dialytic haemodynamic instability is a common and disabling problem which may lead to morbidity and mortality through repeated organ ischaemia, but it has proven difficult to link any particular blood pressure threshold with hard patient outcomes. The relationship between blood pressure and downstream organ ischaemia during haemodialysis has not been well characterised. Previous attempts to predict and prevent intra-dialytic hypotension have had mixed results, partly due to patient and event heterogeneity. Using the brain as the indicator organ, we aimed to model the dynamic relationship between blood pressure, real-time symptoms and downstream organ ischaemia during haemodialysis, in order to identify the most physiologically grounded, prognostic definition of intra-dialytic decompensation. Following on from this, we aimed to predict the onset of intra-dialytic decompensation using personalised, probabilistic models of multivariate, continuous physiological data, ultimately working towards an early warning system for intra-dialytic adverse events. This was a prospective study of 60 prevalent haemodialysis patients who underwent extensive, continuous physiological monitoring of haemodynamic, cardiorespiratory, tissue oxygenation and dialysis machine parameters for 3-4 weeks. In addition, longitudinal cognitive function testing was performed at baseline and at 12 months. Despite their use in clinical practice, we found that blood pressure thresholds alone have a poor trade-off between sensitivity and specificity for predicting downstream tissue ischaemia during haemodialysis. However, the performance of blood pressure thresholds could be improved by stratification for the presence or absence of cerebral autoregulation, and by personalising thresholds according to the individual lower limit of autoregulation. For patients without autoregulation, the optimal blood pressure target was a mean arterial pressure (MAP) of 70 mmHg. A key finding was that cumulative intra-dialytic exposure to cerebral ischaemia, but not to hypotension per se, corresponded to change in executive cognitive function over 12 months. Therefore we chose cerebral ischaemia as the definition of intra-dialytic decompensation for predictive modelling. We were able to demonstrate that the development of cerebral desaturation could be anticipated from earlier deviations of univariate physiological data from the expected trajectory for a given patient, but sensitivity was limited by the heterogeneity of events even within one individual. The most useful physiological data streams included peripheral saturation variance, cerebral saturation variance, heart rate and mean arterial pressure. Multivariate data fusion techniques using these variables created promising personalised models capable of giving an early warning of decompensation. Future work will involve the refinement and prospective testing of these models. In addition, we envisage a prospective study assessing the benefit of autoregulation-guided blood pressure targets on short term outcomes such as patient symptoms and wellbeing, as well as longer term outcomes such as cognitive function.
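A minimal sketch of the kind of univariate early-warning rule described above, flagging deviation of a physiological stream from a patient's recent baseline trajectory; the window length and z-score threshold are illustrative assumptions, not the thesis's fitted models:

```python
import numpy as np

def deviation_alarm(x, window=60, z_thresh=3.0):
    """Flag samples that deviate from a rolling personal baseline.

    x : 1-D array of one physiological stream (e.g. cerebral saturation).
    Returns a boolean array: True where |z-score| exceeds the threshold.
    """
    alarms = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        base = x[t - window:t]
        z = (x[t] - base.mean()) / (base.std() + 1e-9)
        alarms[t] = abs(z) > z_thresh
    return alarms
```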
APA, Harvard, Vancouver, ISO, and other styles
6

Skeppe, Lovisa. "Classify Swedish bank transactions with early and late fusion techniques." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-156312.

Full text
Abstract:
Categorising bank transactions into predefined categories is essential for getting a good overview of one's personal finances. Tink provides a mobile app for automatic categorisation of bank transactions. Tink's categorisation approach is a clustering technique with longest prefix match based on merchant. This thesis examines whether a machine learning model can learn to classify transactions based on the purchase, i.e. what was bought, instead of the merchant. It classifies bank transactions in a supervised learning setting by exploring early and late fusion schemes on three types of modalities (text, amount, date) found in Swedish bank transactions. Experiments are carried out with Naive Bayes, Support Vector Machines and Decision Trees. The different fusion schemes are compared with no fusion, i.e. learning on only one modality, and with stacked classification, i.e. learning models in a pipelined fashion. The early fusion concatenation schemes all show worse performance than no fusion on the text modality. The late fusion experiments, on the other hand, show no impact of modality fusion. Suggestions are made to change the feedback loop from the user, to get more data labeled by users, which would potentially boost the importance of the other modalities.
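A compact sketch of the two schemes compared: early fusion concatenates modality features before a single classifier, while late fusion trains one classifier per modality and votes at prediction time. All models, shapes and data below are illustrative:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-ins: text, amount and date features per transaction
X_text, X_amount, X_date = (rng.normal(size=(300, d)) for d in (100, 1, 2))
y = rng.integers(0, 5, size=300)          # five hypothetical categories

# Early fusion: concatenate all modalities, train a single classifier
early = SVC().fit(np.hstack([X_text, X_amount, X_date]), y)

# Late fusion: one classifier per modality, majority vote at prediction time
models = [GaussianNB().fit(X, y) for X in (X_text, X_amount, X_date)]

def late_predict(parts):
    votes = np.stack([m.predict(p) for m, p in zip(models, parts)])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```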
APA, Harvard, Vancouver, ISO, and other styles
7

Rogowski, Justin. "Investigation into automatic identification of bottlenose dolphins using data fusion techniques." Thesis, University of Derby, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.506682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mesina, Justin E. "Urban Classification Techniques Using the Fusion of LiDAR and Spectral Data." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17420.

Full text
Abstract:
Approved for public release; distribution is unlimited
Combining different types of data from varying sensors has the potential to be more accurate than a single sensor. This research fused airborne LiDAR data and WorldView-2 (WV-2) multispectral imagery (MSI) data to create an improved classification image of urban San Francisco, California. A decision tree scenario was created by extracting features from the LiDAR, as well as NDVI from the multispectral data. Raster masks were created using these features and were processed as decision tree nodes, resulting in seven classifications. Twelve regions of interest were created, then categorized and applied to the previous seven classifications via maximum likelihood classification. The resulting classification images were then combined. A multispectral classification image using the same ROIs was also created for comparison. The fused classification image did a better job of preserving urban geometries than MSI data alone and suffered less from shadow anomalies. The fused results, however, were not as accurate in differentiating trees from grasses as using only spectral results. Overall, the fused LiDAR and MSI classification performed better than the MSI classification alone, but further refinements to the decision tree scheme could probably be made to improve the final results.
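NDVI, one of the features feeding the decision tree, has a standard definition; a small sketch computing it from red and near-infrared bands and thresholding it into a raster vegetation mask (the 0.3 threshold is an illustrative assumption, not the thesis's value):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir, red, threshold=0.3):
    """Boolean raster mask: True where NDVI suggests vegetation."""
    return ndvi(nir, red) > threshold
```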
APA, Harvard, Vancouver, ISO, and other styles
9

De, Gregorio Ludovica. "Development of new data fusion techniques for improving snow parameters estimation." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/245392.

Full text
Abstract:
Water stored in snow is a critical contribution to the world's available freshwater supply and is fundamental to the sustenance of natural ecosystems, agriculture and human societies. The importance of snow for the natural environment and for many socio-economic sectors in several mid- to high-latitude mountain regions around the world leads scientists to continuously develop new approaches to monitor and study snow and its properties. The need to develop new monitoring methods arises from the limitations of in situ measurements, which are pointwise, only possible in accessible and safe locations, and do not allow continuous monitoring of the evolution of the snowpack and its characteristics. These limitations have been overcome by the increasingly used methods of remote monitoring with space-borne sensors, which allow the wide spatial and temporal variability of the snowpack to be monitored. Snow models, based on modeling the physical processes that occur in the snowpack, are an alternative to remote sensing for studying snow characteristics. However, from the literature it is evident that both remote sensing and snow models suffer from limitations, as well as having significant strengths that would be worth exploiting jointly to achieve improved snow products. Accordingly, the main objective of this thesis is the development of novel methods for the estimation of snow parameters by exploiting the different properties of remote sensing and snow model data. In particular, the following specific novel contributions are presented in this thesis: (i) a novel data fusion technique for improving snow cover mapping, based on the exploitation of the snow cover maps derived from the AMUNDSEN snow model and the MODIS product, together with their quality layers, in a decision-level fusion approach by means of a machine learning technique, namely the Support Vector Machine (SVM); (ii) a new approach for improving the snow water equivalent (SWE) product obtained from AMUNDSEN model simulations, which exploits auxiliary information from optical remote sensing and from topographic characteristics of the study area in an approach that differs from classical data assimilation and is based on the estimation of the AMUNDSEN error with respect to ground data through a k-NN algorithm. The new product has been validated with ground measurement data and by comparison with MODIS snow cover maps. In a second step, the contribution of information derived from X-band SAR imagery acquired by the COSMO-SkyMed constellation has been evaluated, exploiting simulations from a theoretical model to enlarge the dataset.
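The decision-level fusion step (contribution i) can be sketched as follows: each pixel's two snow/no-snow decisions plus their quality layers become the feature vector for an SVM trained against ground truth. All data below are synthetic placeholders, not AMUNDSEN or MODIS processing:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500                                  # pixels with ground-truth snow data
model_snow = rng.integers(0, 2, n)       # snow-model-style map (0/1), synthetic
modis_snow = rng.integers(0, 2, n)       # MODIS-style map (0/1), synthetic
model_qual = rng.random(n)               # quality layers in [0, 1], synthetic
modis_qual = rng.random(n)
truth = rng.integers(0, 2, n)            # in-situ snow presence, synthetic

# Decision-level fusion: per-pixel decisions + qualities -> one fused decision
X = np.column_stack([model_snow, modis_snow, model_qual, modis_qual])
fused_classifier = SVC().fit(X, truth)
```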
APA, Harvard, Vancouver, ISO, and other styles
10

Adusumilli, Srujana. "Development of Statistical Learning Techniques for INS and GPS Data Fusion." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1398772813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Kwon, Samuel M. (Samuel Moonha). "Pixel-level data fusion techniques applied to the detection of gust fronts." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Jarrell, Jason A. "Employ sensor fusion techniques for determining aircraft attitude and position information." Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5894.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains xii, 108, [9] p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 104-108).
APA, Harvard, Vancouver, ISO, and other styles
13

Rajan, Krithika. "Analysis of pavement condition data employing Principal Component Analysis and sensor fusion techniques." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Liu, Xiaofeng. "Machinery fault diagnostics based on fuzzy measure and fuzzy integral data fusion techniques." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16456/1/Xiaofeng_Liu_Thesis.pdf.

Full text
Abstract:
With growing demands for reliability, availability, safety and cost efficiency in modern machinery, accurate fault diagnosis is becoming of paramount importance so that potential failures can be better managed. Although various methods have been applied to machinery condition monitoring and fault diagnosis, the diagnostic accuracy that can be attained is far from satisfactory. As most machinery faults lead to increases in vibration levels, vibration monitoring has become one of the most basic and widely used methods to detect machinery faults. However, current vibration monitoring methods largely depend on signal processing techniques. This study is based on the recognition that a multi-parameter data fusion approach to diagnostics can produce more accurate results. Fuzzy measures and fuzzy integral data fusion theory can represent the importance of each criterion and express certain interactions among them. This research developed a novel, systematic and effective fuzzy measure and fuzzy integral data fusion approach for machinery fault diagnosis, comprising a feature set selection schema, a feature level data fusion schema and a decision level data fusion schema. Different feature selection and fault diagnostic models were derived from these schemas. Two fuzzy measures and two fuzzy integrals were employed: the 2-additive fuzzy measure, the fuzzy measure, the Choquet fuzzy integral and the Sugeno fuzzy integral, respectively. The models were validated using rolling element bearing and electrical motor experiments. Different features extracted from vibration signals were used to validate the rolling element bearing feature set selection and fault diagnostic models, while features obtained from both vibration and current signals were employed to assess electrical motor fault diagnostic models. The results show that the proposed schemas and models perform very well in selecting feature sets and can improve accuracy in diagnosing both rolling element bearing and electrical motor faults.
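For reference, the two fuzzy integrals named above have standard discrete definitions; a sketch in the usual notation:

```latex
% Inputs reordered so that h(x_{(1)}) \le \dots \le h(x_{(n)}),
% with A_{(i)} = \{x_{(i)}, \dots, x_{(n)}\} and h(x_{(0)}) := 0.

% Discrete Choquet integral of h with respect to fuzzy measure g
C_g(h) = \sum_{i=1}^{n} \bigl[ h(x_{(i)}) - h(x_{(i-1)}) \bigr]\, g(A_{(i)})

% Discrete Sugeno integral of h with respect to fuzzy measure g
S_g(h) = \max_{i=1,\dots,n} \min\bigl( h(x_{(i)}),\, g(A_{(i)}) \bigr)
```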
APA, Harvard, Vancouver, ISO, and other styles
15

Liu, Xiaofeng. "Machinery fault diagnostics based on fuzzy measure and fuzzy integral data fusion techniques." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16456/.

Full text
Abstract:
With growing demands for reliability, availability, safety and cost efficiency in modern machinery, accurate fault diagnosis is becoming of paramount importance so that potential failures can be better managed. Although various methods have been applied to machinery condition monitoring and fault diagnosis, the diagnostic accuracy that can be attained is far from satisfactory. As most machinery faults lead to increases in vibration levels, vibration monitoring has become one of the most basic and widely used methods to detect machinery faults. However, current vibration monitoring methods largely depend on signal processing techniques. This study is based on the recognition that a multi-parameter data fusion approach to diagnostics can produce more accurate results. Fuzzy measures and fuzzy integral data fusion theory can represent the importance of each criterion and express certain interactions among them. This research developed a novel, systematic and effective fuzzy measure and fuzzy integral data fusion approach for machinery fault diagnosis, comprising a feature set selection schema, a feature level data fusion schema and a decision level data fusion schema. Different feature selection and fault diagnostic models were derived from these schemas. Two fuzzy measures and two fuzzy integrals were employed: the 2-additive fuzzy measure, the fuzzy measure, the Choquet fuzzy integral and the Sugeno fuzzy integral, respectively. The models were validated using rolling element bearing and electrical motor experiments. Different features extracted from vibration signals were used to validate the rolling element bearing feature set selection and fault diagnostic models, while features obtained from both vibration and current signals were employed to assess electrical motor fault diagnostic models. The results show that the proposed schemas and models perform very well in selecting feature sets and can improve accuracy in diagnosing both rolling element bearing and electrical motor faults.
APA, Harvard, Vancouver, ISO, and other styles
16

Othman, Nadia. "Fusion techniques for iris recognition in degraded sequences." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL003/document.

Full text
Abstract:
Among the large number of biometric modalities, iris is considered a very reliable biometric with a remarkably low error rate. The excellent performance of iris recognition systems is obtained by controlling the quality of the captured images and by imposing certain constraints on users, such as standing at a close, fixed distance from the camera. However, in many real-world applications such as access control and airport boarding, these constraints are no longer suitable. In such non-ideal conditions, the resulting iris images suffer from diverse degradations which have a negative impact on the recognition rate. One way to circumvent this problem is to exploit the redundancy arising from the availability of several images of the same eye in the recorded sequence. Therefore, this thesis focuses on how to fuse the information available in the sequence in order to improve the performance. In the literature, diverse fusion schemes have been proposed. However, they agree on the fact that the quality of the images used in the fusion process is an important factor for its success in increasing the recognition rate. Therefore, researchers have concentrated their efforts on the estimation of image quality, to weight each image in the fusion process according to its quality. There are various iris quality factors to be considered, and diverse methods have been proposed for quantifying these criteria. These quality measures are generally combined into one unique value: a global quality. However, there is no universal combination scheme to do so, and some a priori knowledge has to be inserted, which is not a trivial task. To deal with these drawbacks, in this thesis we propose a novel way of measuring and integrating quality measures in a super-resolution approach, aiming at improving the performance. This strategy can handle two types of issues for iris recognition: the lack of resolution and the presence of various artifacts in the captured iris images. The first part of the doctoral work consists of elaborating a relevant quality metric able to quantify locally the quality of the iris images. Our measure relies on a Gaussian Mixture Model estimation of the clean iris texture distribution. The interest of our quality measure is 1) its simplicity, 2) that its computation does not require identifying in advance the type of degradations that can occur in the iris image, 3) its uniqueness, thus avoiding the computation of several quality metrics and an associated combination rule, and 4) its ability to measure the intrinsic quality and, in particular, to detect segmentation errors. In the second part of the thesis, we propose two novel quality-based fusion schemes. Firstly, we suggest using our quality metric as a global measure in the fusion process in two ways: as a selection tool for detecting the best images, and as a weighting factor at the pixel level in the super-resolution scheme. In the latter case, the contribution of each image of the sequence to the final fused image depends only on its overall quality. Secondly, taking advantage of the local character of our quality measure, we propose an original fusion scheme based on a local weighting at the pixel level, allowing us to take into account the fact that degradations can be different in diverse parts of the iris image. This means that regions free from occlusions will contribute more to the image reconstruction than regions with artefacts. Thus, the quality of the fused image is optimized in order to improve the performance.
The effectiveness of the proposed approaches is shown on several commonly used databases: MBGC, Casia-Iris-Thousand and QFIRE at three different distances: 5, 7 and 11 feet. We separately investigate the improvement brought by the super-resolution, the global quality and the local quality in the fusion process. In particular, the results show the important improvement brought by the use of the global quality, an improvement that is further increased by using the local quality.
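The pixel-level weighting described above can be sketched as a quality-weighted average of registered frames; the quality maps below are placeholders for the thesis's GMM-based local measure:

```python
import numpy as np

def weighted_fusion(frames, quality_maps, eps=1e-9):
    """Fuse registered iris frames with per-pixel quality weights.

    frames       : (N, H, W) aligned image stack.
    quality_maps : (N, H, W) local quality in [0, 1]; a constant map per
                   frame recovers global-quality weighting as a special case.
    """
    w = np.asarray(quality_maps)
    return (w * frames).sum(axis=0) / (w.sum(axis=0) + eps)
```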
APA, Harvard, Vancouver, ISO, and other styles
17

Van, Huyssteen David. "Application of sensor data fusion techniques to the light armoured vehicle reconnaissance (LAV Recce)." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0002/MQ44861.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Villanueva, Juan Moises Mauricio. "Data fusion of time of flight techniques using ultrasonic transducers for wind speed measurement." Pontifícia Universidade Católica do Rio de Janeiro, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32625@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Flow speed measurement has considerable relevance in industrial and scientific applications, where measurements with low uncertainty are required. In this work, a system for wind speed measurement using ultrasonic transducers is designed and modelled. This system makes use of data fusion techniques for the time-of-flight estimation, combining independent information provided by the threshold detection and phase difference methods. For this purpose, this work consists of two main parts. The first part presents an analysis of uncertainty and error propagation concerning the threshold detection and phase difference techniques, considering two structures for wind speed measurement. Measurement ranges and associated uncertainties are then compared for each of those structures. In the second part of this work, data fusion techniques applied to instrumentation and measurement are studied; two main techniques are singled out: (a) Maximum Likelihood Estimation (MLE), and (b) fuzzy compatibility relations and Ordered Weighted Average (OWA) operators with partial aggregation. These fusion techniques are then applied to the time-of-flight estimation, by considering several independent measurements obtained through the threshold detection and phase difference techniques. Finally, uncertainty analysis is carried out by quantifying the influence of each independent measurement on the global fusion result. A case study is also presented, where an instrument for wind speed measurement with low uncertainty is designed and modelled. Appropriate techniques of data fusion aimed at improving accuracy and reliability are considered. Experiments are performed in a wind tunnel in order to verify the consistency of the results in view of the theoretical studies.
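Under a Gaussian error model, maximum-likelihood fusion of independent estimates reduces to the standard inverse-variance weighted mean; a minimal sketch of that textbook result, with illustrative numbers (not the thesis's data):

```python
import numpy as np

def mle_fuse(estimates, variances):
    """MLE fusion of independent Gaussian measurements.

    Returns the inverse-variance weighted mean and its variance.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# e.g. threshold-detection ToF vs phase-difference ToF (microseconds)
tof, var = mle_fuse([58.3, 58.1], [0.25, 0.04])
```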
APA, Harvard, Vancouver, ISO, and other styles
19

Engebretson, Kent Russell. "A comparison of data fusion techniques for target detection with a wide azimuth sonar." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/39367.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 133).
by Kent Russell Engebretson.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
20

Miranda, Luís Miguel Gonçalves. "Data fusion with computational intelligence techniques: a case study of fuzzy inference for terrain assessment." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12338.

Full text
Abstract:
Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Constant technological progress brings with it the storage of all kinds of data. Satellites, mobile phones, cameras and other types of electronic equipment produce, on a daily basis, data of gigantic proportions. These data alone may not convey any meaning and may even be impossible to interpret without specific auxiliary measures. Data fusion addresses this issue by putting these data to use, processing them into proper knowledge for the analyst. Within data fusion there are numerous processing approaches and methodologies; the one highlighted here is the one that most resembles imprecise human knowledge: fuzzy reasoning. This method is applied in several areas, including as an inference system for hazard detection and avoidance in unmanned space missions. Fundamental to this is the use of fuzzy inference systems, where the problem is modeled through a set of linguistic rules, fuzzy sets, membership functions and other information. In this thesis, a fuzzy inference system for identifying safe landing sites using fusion of maps was developed, together with a data visualization tool. Classification and validation of the information are thus made easier with such tools.
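A hand-rolled, Mamdani-style sketch of the kind of fuzzy rule evaluation involved: triangular membership functions, min for rule firing, and a weighted average for defuzzification. The slope/roughness inputs and the two-rule base are invented for illustration, not taken from the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def landing_safety(slope_deg, roughness):
    # Fuzzify the two hypothetical terrain inputs
    slope_low, slope_high = tri(slope_deg, -1, 0, 10), tri(slope_deg, 5, 20, 35)
    rough_low, rough_high = tri(roughness, -0.1, 0, 0.5), tri(roughness, 0.3, 1, 1.7)

    # Rules: safe if slope low AND roughness low; unsafe if either is high
    safe = min(slope_low, rough_low)
    unsafe = max(slope_high, rough_high)

    # Crisp score via weighted average of rule outputs (0 = unsafe, 1 = safe)
    return (safe * 1.0 + unsafe * 0.0) / (safe + unsafe + 1e-9)

print(landing_safety(3.0, 0.1))   # gentle, smooth terrain -> close to 1
```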
APA, Harvard, Vancouver, ISO, and other styles
21

Wragge-Morley, Robert. "Parameter estimation in road vehicles using non-linear adaptive observer and novel data fusion techniques." Thesis, University of Bristol, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715766.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Landberg, Markus. "Enhancement Techniques for Lane Position Adaptation (Estimation) using GPS- and Map Data." Thesis, Linköpings universitet, Datorseende, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110812.

Full text
Abstract:
A lane position system, and enhancement techniques for increasing the robustness and availability of such a system, are investigated. The enhancements are performed by using additional sensor sources such as map data and GPS. The thesis contains a description of the system, two models of the system and two implemented filters for the system. It also contains conclusions and results of theoretical and experimental tests of the increased robustness and availability of the system. The system can be integrated with an existing system that investigates driver behavior related to fatigue. That system was developed in a project named Drowsi, in which, among others, Volvo Technology participated.
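A minimal sketch of the estimation core such a system rests on: a scalar Kalman filter fusing a predicted lane-related quantity with a noisy measurement (e.g. from GPS plus map matching). All noise settings are assumed values, not the thesis's tuned filters:

```python
def kalman_step(x, P, z, q=0.01, r=0.5):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : state estimate (e.g. lane offset, m) and its variance
    z    : new measurement of the same quantity (e.g. GPS + map matched)
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: constant-position model, uncertainty grows by q
    P = P + q
    # Update: blend prediction and measurement by the Kalman gain
    K = P / (P + r)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0
for z in [0.4, 0.35, 0.5, 0.42]:
    x, P = kalman_step(x, P, z)
```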
APA, Harvard, Vancouver, ISO, and other styles
23

Kurebayashi, Shinya, 1976. "Using nuclear data and Monte-Carlo techniques to study areal density and mix in D2 inertial confinement fusion implosions." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/29369.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Physics, 2004.
Includes bibliographical references.
Measurements from three classes of direct-drive implosions at the OMEGA laser system [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] were combined with Monte-Carlo simulations to investigate models for determining hot-fuel areal density (ρR_hot) in compressed, D2-filled capsules, and to assess the impact of mix and other factors on the determination of ρR_hot. The results of the Monte-Carlo calculations were compared to predictions of commonly used models that use ratios of either secondary D3He proton yields or secondary DT neutron yields to primary DD neutron yields to provide estimates ρR_hot,p or ρR_hot,n, respectively, for ρR_hot. For the first class of implosions, where ρR_hot is low (≤ 3 mg/cm²), ρR_hot,p and ρR_hot,n often agree with each other and are often good estimates of the actual ρR_hot. For the second class of implosions, where ρR_hot is of order 10 mg/cm², ρR_hot,p often underestimates the actual value due to secondary proton yield saturation. In addition, fuel-shell mix causes ρR_hot,p to further underestimate, and ρR_hot,n to overestimate, ρR_hot. As a result, values of ρR_hot,p and ρR_hot,n can be interpreted as lower and upper limits, respectively. For the third class of implosions, involving cryogenic capsules, secondary protons and neutrons are produced mainly in the hot and cold fuel regions, respectively, and the effects of the mixing of hot and cold fuel must be taken into account when interpreting the values of ρR_hot,p and ρR_hot,n. From these data sets, we conclude that accurate inference of ρR_hot requires comprehensive measurements in combination with detailed modeling.
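Schematically, the ratio models referred to above treat the secondary-to-primary yield ratio as proportional, to first order and before the saturation the abstract notes, to the hot-fuel areal density; a hedged sketch of the relations (the proportionality constants depend on plasma conditions and are not given here):

```latex
% Y_{1n}: primary DD neutron yield; Y_{2p}: secondary D3He proton yield;
% Y_{2n}: secondary DT neutron yield.
\rho R_{\mathrm{hot},\,p} \;\propto\; \frac{Y_{2p}}{Y_{1n}},
\qquad
\rho R_{\mathrm{hot},\,n} \;\propto\; \frac{Y_{2n}}{Y_{1n}}
% The linear regime breaks down once the secondary yields saturate
% (as the abstract notes for rhoR_hot of order 10 mg/cm^2).
```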
by Shinya Kurebayashi.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
24

Radhakrishnan, Aswathnarayan. "A Study on Applying Learning Techniques to Remote Sensing Data." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586901481703797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Nicolini, Andrea. "Multipath tracking techniques for millimeter wave communications." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17690/.

Full text
Abstract:
The aim of this work is to study the problem of efficient, continuous tracking of the angle of arrival of the dominant multipath components in a millimeter-wave radio channel. In particular, a reference scenario is considered in which the direct path from a base station and two paths reflected by obstacles must be tracked under different operating conditions and mobile-user motion. The mobile user is assumed to be able to take noisy measurements of the angle of arrival of the three paths, one in line of sight and the other two in non-line-of-sight, and possibly measurements of the distance between itself and the three "sources" (for example, derived from received-power measurements). Using a state-space model, two different approaches were investigated: the first applies Kalman filtering directly to the angle-of-arrival measurements, while the second adopts a two-step method in which the state is represented by the positions of the base station and of the two obstacles, from which the angle-of-arrival estimates are evaluated. In both cases, the impact on the estimate of fusing the data from the inertial sensors integrated in the device, i.e. the angular velocity and acceleration of the mobile, with the angle-of-arrival measurements was investigated. After a mathematical modeling phase for the two approaches, they were implemented and tested in MATLAB, developing a simulator in which the user can choose the values of various parameters according to the desired scenario. The analyses carried out showed the robustness of the proposed strategies under different operating conditions.
APA, Harvard, Vancouver, ISO, and other styles
26

Topalis, Apostolos. "Multiresolution wavelet analysis of event-related EEG potentials using ensemble of classifier data fusion techniques for early diagnosis of Alzheimer's disease." Full text available online, 2006. http://www.lib.rowan.edu/find/theses.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

König, Rikard. "Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality." Licentiate thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-3517.

Full text
Abstract:
Today, decision support systems based on predictive modeling are becoming more common, since organizations often collect more data than decision makers can handle manually. Predictive models are used to find potentially valuable patterns in the data, or to predict the outcome of some event. There are numerous predictive techniques, ranging from simple ones such as linear regression to complex, powerful ones like artificial neural networks. Complex models usually obtain better predictive performance, but are opaque and thus cannot be used to explain predictions or discovered patterns. The design choice of which predictive technique to use becomes even harder since no technique outperforms all others over a large set of problems. It is even difficult to find the best parameter values for a specific technique, since these settings also are problem dependent. One way to simplify this vital decision is to combine several models, possibly created with different settings and techniques, into an ensemble. Ensembles are known to be more robust and powerful than individual models, and ensemble diversity can be used to estimate the uncertainty associated with each prediction. In real-world data mining projects, data is often imprecise, contains uncertainties or is missing important values, making it impossible to create models with sufficient performance for fully automated systems. In these cases, predictions need to be manually analyzed and adjusted. Here, opaque models like ensembles have a disadvantage, since the analysis requires understandable models. To overcome this deficiency of opaque models, researchers have developed rule extraction techniques that try to extract comprehensible rules from opaque models, while retaining sufficient accuracy. This thesis suggests a straightforward but comprehensive method for predictive modeling in situations with poor data quality. First, ensembles are used for the actual modeling, since they are powerful, robust and require few design choices. Next, ensemble uncertainty estimations pinpoint predictions that need special attention from a decision maker. Finally, rule extraction is performed to support the analysis of uncertain predictions. Using this method, ensembles can be used for predictive modeling, in spite of their opacity and sometimes insufficient global performance, while the involvement of a decision maker is minimized. The main contributions of this thesis are three novel techniques that enhance the performance of the proposed method. The first technique deals with ensemble uncertainty estimation and is based on a successful approach often used in weather forecasting. The other two are improvements of a rule extraction technique, resulting in increased comprehensibility and more accurate uncertainty estimations.
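As a toy illustration of the ensemble uncertainty estimation at the core of this method (a sketch with invented data, model class and threshold, not the thesis's weather-forecasting-inspired technique), the snippet below trains bootstrap models and routes high-disagreement predictions to manual review:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.15, x.size)   # toy noisy data

# Ensemble of degree-2 polynomial models trained on bootstrap resamples.
coefs = []
for _ in range(50):
    idx = rng.integers(0, x.size, x.size)
    coefs.append(np.polyfit(x[idx], y[idx], deg=2))

x_new = np.array([0.1, 0.5, 1.3])                     # 1.3 lies outside the data
preds = np.array([np.polyval(c, x_new) for c in coefs])
mean, spread = preds.mean(axis=0), preds.std(axis=0)

# Large ensemble disagreement flags a prediction for manual analysis.
for xi, m, s in zip(x_new, mean, spread):
    print(f"x={xi:.1f}  pred={m:.2f}  spread={s:.3f}  review={s > 0.05}")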

Sponsorship:
This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.

APA, Harvard, Vancouver, ISO, and other styles
28

Nguyen, Tien Dung. "Multimodal emotion recognition using deep learning techniques." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/180753/1/Tien%20Dung_Nguyen_Thesis.pdf.

Full text
Abstract:
This thesis investigates the use of deep learning techniques to address the problem of machine understanding of human affective behaviour and to improve the accuracy of both unimodal and multimodal human emotion recognition. The objective was to explore how best to configure deep learning networks to capture, individually and jointly, the key features contributing to human emotions from three modalities (speech, face, and bodily movements) in order to accurately classify the expressed human emotion. The outcome of the research should be useful for several applications, including the design of social robots.
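A minimal late-fusion sketch for the three modalities named above; the emotion label set, the scores and the averaging rule are illustrative assumptions, and the thesis's deep architectures are not reproduced here:

import numpy as np

# Softmax outputs of three unimodal networks over four hypothetical emotions.
scores = {
    "speech": np.array([0.50, 0.20, 0.20, 0.10]),
    "face":   np.array([0.40, 0.35, 0.15, 0.10]),
    "body":   np.array([0.30, 0.40, 0.20, 0.10]),
}

emotions = ["happy", "sad", "angry", "neutral"]       # illustrative label set
fused = np.mean(list(scores.values()), axis=0)        # score-level late fusion
print(fused, "->", emotions[int(fused.argmax())])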
APA, Harvard, Vancouver, ISO, and other styles
29

Jdey, Aloui Imen. "Contribution des techniques de fusion et de classification des images au processus d'aide à la reconnaissance des cibles radar non coopératives." Thesis, Brest, 2014. http://www.theses.fr/2014BRES0008.

Full text
Abstract:
The automatic recognition of non-cooperative targets is of great importance in various fields, for example for applications in uncertain airborne and maritime environments, which makes it necessary to introduce original methods for the processing and identification of radar targets. This is the context of our work. The proposed methodology is based on the Knowledge Discovery from Data (KDD) process for the development of a complete recognition chain from radar images, seeking to optimize every step of the processing chain. The experiments used to build an ISAR image database were carried out in the anechoic chamber of ENSTA Bretagne; this measurement setup has the advantage of controlling the quality of the data forming the inputs of the recognition process. We studied the successive stages of this process, from acquisition to the interpretation and evaluation of recognition results, focusing on the central data-mining step, considered the heart of the developed process. This step is composed of two main phases: one concerns classification and the other the fusion of classifier outputs, called decisional fusion. We showed that this last phase plays an important role in improving decision-making results while taking into account the imperfections of radar data, in particular uncertainty and imprecision. The results obtained using, on the one hand, different classification techniques (kNN, SVM and MLP) and, on the other hand, decisional fusion techniques (Bayes, majority vote, belief theory, fuzzy fusion) are the subject of an analytical and comparative study in terms of performance.
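To make the decisional fusion phase concrete, here is a minimal sketch of two of the combiners cited above, majority voting and a naive product rule on classifier posteriors; the class probabilities are invented, and the belief-theoretic and fuzzy combiners are not reproduced:

import numpy as np

# Posteriors of three classifiers (e.g. kNN, SVM, MLP) over three classes
# for a single target; all numbers are invented for illustration.
posteriors = np.array([[0.6, 0.3, 0.1],
                       [0.5, 0.4, 0.1],
                       [0.3, 0.4, 0.3]])

# Majority vote on the classifiers' hard decisions.
votes = posteriors.argmax(axis=1)
vote_winner = np.bincount(votes).argmax()

# Product rule (assumes independent classifiers), renormalized.
prod = posteriors.prod(axis=0)
prod /= prod.sum()

print("majority vote ->", vote_winner)                # class 0 (two votes)
print("product rule  ->", prod.argmax(), prod)        # class 0 here as well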
APA, Harvard, Vancouver, ISO, and other styles
30

Tsenoglou, Theocharis. "Intelligent pattern recognition techniques for photo-realistic 3D modeling of urban planning objects." Thesis, Limoges, 2014. http://www.theses.fr/2014LIMO0075.

Full text
Abstract:
Realistic 3D modeling of buildings and other urban planning objects is an active research area in the fields of 3D city modeling, heritage documentation, virtual touring, urban planning, architectural design and computer gaming. The creation of such models very often requires merging data from diverse sources such as optical images and laser scan point clouds. To imitate as realistically as possible the layouts, activities and functionalities of a real-world environment, these models need to attain high photo-realistic quality and accuracy in terms of the surface texture (e.g. stone or brick walls) and morphology (e.g. windows and doors) of the actual objects. Image-based rendering is an alternative for meeting these requirements: it uses photos, taken either from ground level or from the air, to add texture to the 3D model, thus adding photo-realism. For full texture covering of the large facades of 3D block models, images picturing the same facade need to be properly combined and correctly aligned with the side of the block. The pictures need to be merged appropriately so that the result does not present discontinuities, abrupt variations in lighting, or gaps. Because these images were taken, in general, under various viewing conditions (viewing angles, zoom factors, etc.), they suffer from different perspective distortions, scalings, and brightness, contrast and color shadings, and need to be corrected or adjusted; this process requires the extraction of key features from the visual content of the images. The aim of the proposed work is to develop methods based on computer vision and pattern recognition techniques to assist this process. In particular, we propose a method for extracting implicit lines from poor-quality images of buildings, including night views where only some lit windows are visible, in order to specify bundles of 3D parallel lines and their corresponding vanishing points; based on this information, one can achieve better merging of the images and better alignment of the images to the block facades. Another important application dealt with in this thesis is 3D modeling: we propose an edge-preserving interpolation, based on the mean shift algorithm, that operates jointly on the optical and the elevation data. It succeeds in increasing the resolution of the elevation data (LiDAR) while improving the quality (i.e. straightness) of their edges; at the same time, the color homogeneity of the corresponding imagery is also improved. The reduction of color artifacts in the optical data and the improvement in the spatial resolution of the elevation data result in more accurate 3D building models. Finally, for the problem of building detection, applying the proposed mean-shift-based edge-preserving smoothing to increase the quality of aerial/color images improves the performance of binary building vs. non-building pixel classification.
APA, Harvard, Vancouver, ISO, and other styles
31

Garcia, garcia Miguel. "Analyse de l'hypovigilance au volant par fusion d'informations environnementales et d'indices vidéo." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT120.

Full text
Abstract:
Driver hypovigilance (whether caused by distraction or drowsiness) is one of the major threats to road safety. This thesis is part of the Toucango project, led by the start-up Innov+, which aims to build a real-time hypovigilance detector based on the fusion of near-infrared video evidence and environmental information. The objective of this thesis is therefore to propose techniques for extracting relevant cues as well as multimodal fusion algorithms that can be embedded in the system for real-time operation. In order to work close to real-world conditions, a naturalistic driving database was created in collaboration with several transport companies. We first present a scientific state of the art and a study of the solutions available on the market for hypovigilance detection. Then, we propose several methods based on image processing (for the detection of relevant cues on the head, eyes, mouth and face) and data processing (for environmental cues based on geolocation). We carry out a study on the environmental factors related to hypovigilance and develop a contextual risk estimation system. Finally, we propose multimodal fusion techniques for these cues with the objective of detecting several hypovigilance behaviors: visual or cognitive distraction, engagement in a secondary task, sleep deprivation, microsleep and drowsiness.
APA, Harvard, Vancouver, ISO, and other styles
32

Hammami, Imen. "Fusion d'images de télédétection hétérogènes par méthodes crédibilistes." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0034/document.

Full text
Abstract:
With the advent of new image acquisition techniques and the emergence of high-resolution satellite systems, the remote sensing data to be exploited have become increasingly rich and varied. Their combination has thus become essential to improve the process of extracting useful information related to the physical nature of the observed surfaces. However, these data are generally heterogeneous and imperfect, which poses several problems for their joint treatment and requires the development of specific methods. It is in this context that this thesis falls, aiming to develop a new evidential fusion method dedicated to the processing of heterogeneous, high-resolution remote sensing images. To achieve this objective, we first focus our research on the development of a new approach for belief function estimation based on Kohonen's map, in order to simplify the mass assignment operation for the large volumes of data occupied by these images. The proposed method makes it possible to model not only the ignorance and imprecision of our information sources, but also their paradox. We then exploit this estimation approach to propose an original fusion technique that solves the problems due to the wide variety of knowledge provided by these heterogeneous sensors. Finally, we study the way in which the dependence between these sources can be taken into account in the fusion process using copula theory; for this purpose, a new technique for choosing the most appropriate copula is introduced. The experimental part of this work is devoted to land-use mapping of agricultural areas using SPOT-5 and RADARSAT-2 images. The experimental study carried out demonstrates the robustness and effectiveness of the approaches developed in the framework of this thesis.
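As background for the evidential fusion described above, a compact sketch of Dempster's rule of combination over a two-hypothesis frame of discernment; the sources and mass values are invented, and the thesis's Kohonen-based mass estimation is not reproduced:

from itertools import product

# Frame of discernment {urban, veg}; masses on non-empty subsets only.
A, B, AB = frozenset({"urban"}), frozenset({"veg"}), frozenset({"urban", "veg"})
m1 = {A: 0.6, B: 0.1, AB: 0.3}    # e.g. an optical source (invented masses)
m2 = {A: 0.4, B: 0.3, AB: 0.3}    # e.g. a radar source (invented masses)

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:                  # mass flows to the intersection
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:                      # empty intersection counts as conflict
            conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

m12, k = dempster(m1, m2)
print("conflict:", k)              # 0.22 for the masses above
print(m12)                         # normalized combined masses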
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Xiaoguang. "Design and Analysis of Techniques for Multiple-Instance Learning in the Presence of Balanced and Skewed Class Distributions." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32184.

Full text
Abstract:
With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, the Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Existing knowledge discovery and data analysis techniques have shown great success in many real-world applications, such as applying Automatic Target Recognition (ATR) methods to detect targets of interest in imagery, drug activity prediction, and computer vision recognition. Among these techniques, Multiple-Instance (MI) learning differs from standard classification in that its input is a set of bags, each containing many instances. The instances in each bag are not labeled; instead, the bags themselves are labeled. Much progress has been made in this area, yet some problems remain unexplored. In this thesis, we focus on two topics in MI learning: (1) investigating the relationship between MI learning and other multiple-pattern learning methods, namely multi-view learning, data fusion methods and multi-kernel SVMs; and (2) dealing with the class imbalance problem in MI learning. For the first topic, three different learning frameworks are presented for general MI learning: the first uses multiple-view approaches to deal with the MI problem, the second is a data fusion framework, and the third, an extension of the first, uses multiple-kernel SVMs. Experimental results show that the presented approaches work well on the MI problem. The second topic concerns the imbalanced MI problem, where we investigate the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews. For this problem, we propose three solution frameworks: a data re-sampling framework, a cost-sensitive boosting framework, and an adaptive instance-weighted boosting SVM (named IB_SVM) for MI learning. Experimental results, on both benchmark and application datasets, show that the proposed frameworks are effective solutions for the imbalanced MI learning problem.
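A minimal baseline for the multiple-instance setting described above (an illustrative sketch on synthetic bags, not one of the thesis's frameworks): each labeled bag of unlabeled instances is embedded by aggregate statistics, turning MI learning into standard classification:

import numpy as np

rng = np.random.default_rng(1)

def make_bag(positive):
    # A bag is a variable-size set of 2-D instances; positive bags hide
    # one shifted ("witness") instance.
    inst = rng.normal(0.0, 1.0, size=(rng.integers(5, 12), 2))
    if positive:
        inst[0] += 3.0
    return inst

bags = [make_bag(p) for p in [True] * 20 + [False] * 20]
labels = np.array([1] * 20 + [0] * 20)

# Embed each bag by mean and max statistics: MI becomes standard learning.
X = np.array([np.concatenate([b.mean(axis=0), b.max(axis=0)]) for b in bags])

# Nearest-centroid classification on the bag embeddings.
c1, c0 = X[labels == 1].mean(axis=0), X[labels == 0].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1))
print("training accuracy:", (pred.astype(int) == labels).mean())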
APA, Harvard, Vancouver, ISO, and other styles
34

Parshyn, Viachaslau. "Macro-segmentation sémantique des séquences multimédia." Ecully, Ecole centrale de Lyon, 2006. http://bibli.ec-lyon.fr/exl-doc/vparshyn.pdf.

Full text
Abstract:
Segmentation of video into temporal semantic units provides important indexing information for efficient content-based browsing and navigation. In this work we are concerned with the problem of macro-segmentation, aiming at the automatic generation of content tables of videos. We propose a deterministic approach, a sort of finite automaton, which allows one to formulate content parsing rules based on a priori knowledge of video production principles. The approach has been adopted and tested on tennis video. We also propose a statistical segmentation framework in which content parsing rules are chosen so as to optimize system performance measured as recall and precision. The framework is applied to the task of film segmentation into semantic scenes, and shows higher segmentation performance with respect to conventional rule-based and statistical methods. In this work we are also concerned with the problem of automatic video summarization.
APA, Harvard, Vancouver, ISO, and other styles
35

Lian, Chunfeng. "Information fusion and decision-making using belief functions : application to therapeutic monitoring of cancer." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2333/document.

Full text
Abstract:
Radiation therapy is one of the principal options used in the treatment of malignant tumors. To enhance its effectiveness, two critical issues should be carefully dealt with: reliably predicting therapy outcomes, so as to adapt the treatment plan of individual patients, and accurately segmenting tumor volumes, so as to maximize radiation delivery in tumor tissues while minimizing side effects in adjacent organs at risk. Positron emission tomography with the radioactive tracer fluorine-18 fluorodeoxyglucose (FDG-PET) can noninvasively provide significant information on the functional activities of tumor cells. In this thesis, the goal of our study consists of two parts: 1) to propose a reliable therapy outcome prediction system using primarily features extracted from FDG-PET images; 2) to propose automatic and accurate algorithms for tumor segmentation in PET and PET-CT images. The theory of belief functions is adopted in our study to model and reason with the uncertain and imprecise knowledge quantified from noisy and blurred PET images. In the framework of belief functions, a sparse feature selection method and a low-rank metric learning method are proposed to improve the classification accuracy of the evidential K-nearest-neighbor classifier trained on high-dimensional data containing unreliable features. Based on these two theoretical studies, a robust prediction system is then proposed, in which the small-sized and imbalanced nature of clinical data is effectively tackled. To automatically delineate tumors in PET images, an unsupervised 3-D segmentation based on evidential clustering, using the theory of belief functions and spatial information, is proposed. This mono-modality segmentation method is then extended to co-segment tumors in PET-CT images, considering that these two distinct modalities contain complementary information that can further improve the accuracy. All proposed methods have been evaluated on clinical data, giving better results compared with state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
36

Pellicanò, Nicola. "Tackling pedestrian detection in large scenes with multiple views and representations." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS608/document.

Full text
Abstract:
Pedestrian detection and tracking have become important fields in Computer Vision research, due to their implications for many applications, e.g. surveillance, autonomous cars, and robotics. Pedestrian detection in high-density crowds is a natural extension of this research body. The ability to track each pedestrian independently in a dense crowd has multiple applications: the study of human social behavior under high densities, the detection of anomalies, and large-event infrastructure planning. On the other hand, high-density crowds introduce novel problems for the detection task. First, clutter and occlusion problems are taken to the extreme, so that only heads are visible, and they are not easily separable from the moving background. Second, heads are usually small (typically less than ten pixels in diameter) and have little or no texture. This follows from two independent constraints: the need for each camera to have a field of view as large as possible, and the need for anonymization, i.e. the pedestrians must not be identifiable because of privacy concerns. In this work we develop a complete framework to handle the pedestrian detection and tracking problems under the novel difficulties they introduce, using multiple cameras to implicitly handle the severe occlusion. As a first contribution, we propose a robust method for camera pose estimation in surveillance environments. We handle problems such as large distances between cameras, large perspective variations, and scarcity of matching information, by exploiting an entire video stream to perform the calibration, in such a way that it exhibits fast convergence to a good solution; moreover, we are concerned not only with the global fitness of the solution, but also with reaching low local errors. As a second contribution, we propose an unsupervised multiple-camera detection method which exploits the visual consistency of pixels between multiple views in order to estimate the presence of a pedestrian. After a fully automatic metric registration of the scene, one is capable of jointly estimating the presence of a pedestrian and its height, allowing for the projection of detections on a common ground plane, and thus for 3D tracking, which can be much more robust than image-space-based tracking. In the third part, we study different methods to perform supervised pedestrian detection on single views. Specifically, we aim to build a dense pedestrian segmentation of the scene starting from spatially imprecise labeling of data, i.e. head centers instead of full head contours, since their extraction is unfeasible in a dense crowd. Most notably, deep architectures for semantic segmentation are studied and adapted to the problem of small head detection in cluttered environments. As a last contribution, we propose a novel framework to perform efficient information fusion in 2D spaces. The final aim is to perform multiple-sensor fusion (supervised detectors on each view, and an unsupervised detector on multiple views) at the ground-plane level, which is thus our frame of discernment. Since the space complexity of such a discernment frame is very large, we propose an efficient compound hypothesis representation which has been shown to be invariant to the scale of the search space. Through this representation, we are capable of defining efficient basic operators and combination rules of Belief Function Theory. Furthermore, we propose a complementary graph-based description of the relationships between compound hypotheses (i.e. intersections and inclusions), in order to support efficient algorithms for, e.g., high-level decision making. Finally, we demonstrate our information fusion approach both at a spatial level, i.e. between detectors of different natures, and at a temporal level, by performing evidential tracking of pedestrians on real large-scale scenes in sparse and dense conditions.
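For illustration of fusing detectors on a common ground plane, a minimal sketch using independent log-odds fusion over a discretized grid; this is a simple probabilistic stand-in, not the belief-function framework and compound hypothesis representation proposed in the thesis:

import numpy as np

# Per-cell pedestrian probabilities from two detectors on a 4x4 ground grid;
# both maps are invented, with one cell where the detectors agree strongly.
p_multiview = np.full((4, 4), 0.30); p_multiview[1, 2] = 0.90
p_supervised = np.full((4, 4), 0.25); p_supervised[1, 2] = 0.80

def logodds(p):
    return np.log(p / (1.0 - p))

# Assuming independent detectors, log-odds add cell by cell.
fused = 1.0 / (1.0 + np.exp(-(logodds(p_multiview) + logodds(p_supervised))))
print(fused[1, 2])   # ~0.97: reinforced where both detectors respond
print(fused[0, 0])   # ~0.13: suppressed where both are weak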
APA, Harvard, Vancouver, ISO, and other styles
37

Hannachi, Ammar. "Imagerie multimodale et planification interactive pour la reconstruction 3D et la métrologie dimensionnelle." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD024/document.

Full text
Abstract:
Producing industrially manufactured parts generates a very large number of data of various types, defining the manufacturing geometries as well as the quality of production. This PhD work was carried out within the framework of the realization of a cognitive vision system dedicated to the 3D evaluation of manufactured objects, possibly including free-form surfaces, taking into account geometric tolerances and uncertainties. This system allows the comprehensive control of manufactured parts, and provides the means for their automated 3D dimensional inspection. The implementation of a multi-sensor (passive and active) measuring system made it possible to significantly improve the assessment quality through an enriched three-dimensional reconstruction of the object to be evaluated. Specifically, we made simultaneous use of a stereoscopic vision system and a structured-light-based system in order to reconstruct the edges and surfaces of various 3D objects.
APA, Harvard, Vancouver, ISO, and other styles
38

Buckley, Simon John. "A geomatics data fusion technique for change monitoring." Thesis, University of Newcastle Upon Tyne, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Bertrand, Sarah. "Analyse d'images pour l'identification multi-organes d'espèces végétales." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2127/document.

Full text
Abstract:
This thesis is part of the ANR ReVeRIES project, which aims to use mobile technologies to help people better understand their environment and in particular the plants that surround them. More precisely, the ReVeRIES project is based on a mobile application called Folia, developed as part of the ANR ReVeS project and capable of recognizing tree and shrub species based on photos of their leaves. This prototype differs from other tools in that it is able to simulate the behavior of the botanist. In the context of the ReVeRIES project, we propose to go much further by developing new aspects: multimodal species recognition, learning through play, and citizen science. The purpose of this thesis is to focus on the first of these three aspects, namely the analysis of images of plant organs for identification. More precisely, we consider the main trees and shrubs, endemic or exotic, found in metropolitan France. The objective of this thesis is to extend the recognition algorithm by taking into account other organs in addition to the leaf. This multi-modality is indeed essential if we want the user to learn and practice the different methods of recognition for which botanists use the variety of organs (i.e. leaves, flowers, fruits and bark). The method used by Folia for leaf recognition is dedicated, because it simulates the work of a botanist on the leaf, and thus cannot be applied directly to other organs; new challenges therefore emerge, both in terms of image processing and data fusion. The first part of the thesis was devoted to the implementation of image processing methods for the identification of plant species. The identification of tree species from bark images was studied first: the descriptors developed take into account the structure of the bark, inspired by the criteria used by botanists. Fruits and flowers required a segmentation step before their description, and a new segmentation method usable on smartphones was developed to cope with the high variability of flowers and fruits. Finally, descriptors were extracted from fruits and flowers after the segmentation step. We decided not to separate flowers and fruits, because we showed that a user new to botany does not always know the difference between these two organs on so-called "ornamental" (non-fruit) trees. For fruits and flowers, prediction is made not only at the species level but also at the genus and family levels, botanical groups reflecting a similarity between these organs. The second part of the thesis deals with the combination of descriptors of the different organs: leaves, bark, fruits and flowers. In addition to basic combination methods, we propose to take into account the confusion between species, as well as predicted memberships in botanical taxa higher than the species. Finally, an opening chapter is devoted to the processing of these images by convolutional neural networks. Indeed, deep learning is increasingly used in image processing, particularly for plant organs. In this context, we propose to visualize the learned convolution filters extracting information, in order to make the link between the information extracted by these networks and botanical elements.
APA, Harvard, Vancouver, ISO, and other styles
40

Vannah, Benjamin. "Integrated Data Fusion and Mining (IDFM) Technique for Monitoring Water Quality in Large and Small Lakes." Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6028.

Full text
Abstract:
Monitoring water quality on a near-real-time basis to address water resources management and public health concerns in coupled natural systems and the built environment is by no means an easy task, and this emerging societal challenge will continue to grow due to ever-increasing anthropogenic impacts upon surface waters. For example, urban growth and agricultural operations have led to an influx of nutrients into surface waters, stimulating harmful algal bloom formation, and stormwater runoff from urban areas contributes to the accumulation of total organic carbon (TOC) in surface waters. TOC in surface waters is a known precursor of disinfection byproducts in drinking water treatment, and microcystin is a potent hepatotoxin produced by the bacteria Microcystis, which can form expansive algal blooms in eutrophied lakes. Due to the ecological impacts and human health hazards posed by TOC and microcystin, it is imperative that municipal decision makers and water treatment plant operators are equipped with a rapid and economical means to track and measure these substances. Remote sensing is an emergent solution for monitoring and measuring changes to the earth's environment: this technology allows large regions anywhere on the globe to be observed on a frequent basis. This study demonstrates the prototype of a near-real-time early warning system using Integrated Data Fusion and Mining (IDFM) techniques with the aid of both multispectral (Landsat and MODIS) and hyperspectral (MERIS) satellite sensors to determine spatiotemporal distributions of TOC and microcystin. Landsat imagery has high spatial resolution, but its application suffers from a long overpass interval of 16 days; on the other hand, free coarse-resolution sensors with daily revisit times, such as MODIS, are incapable of providing detailed water quality information because of low spatial resolution. This issue can be resolved by using data or sensor fusion techniques, an instrumental part of IDFM, in which the high spatial resolution of Landsat and the high temporal resolution of MODIS imagery are fused and analyzed by a suite of regression models to optimally produce synthetic images with both high spatial and temporal resolutions. The same techniques are applied to the hyperspectral sensor MERIS with the aid of the MODIS ocean color bands to generate fused images with enhanced spatial, temporal, and spectral properties. The performance of the data mining models derived using fused hyperspectral and fused multispectral data is quantified using four statistical indices. The second task compared traditional two-band models against more powerful data mining models for TOC and microcystin prediction. The use of IDFM is illustrated for monitoring microcystin concentrations in Lake Erie (a large lake) and for TOC monitoring in Harsha Lake (a small lake). Analysis confirmed that data mining methods outperformed two-band models in accurately estimating TOC and microcystin concentrations in lakes, and the more detailed spectral reflectance data offered by hyperspectral sensors produced a noticeable increase in accuracy in the retrieval of water quality parameters.
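A toy sketch of the final regression step described above, mapping reflectance bands of a fused image product to a water-quality parameter; the bands, coefficients and noise are synthetic, and the study's actual data mining models are not reproduced:

import numpy as np

rng = np.random.default_rng(2)

# Synthetic surface reflectance bands (rows = pixels) of a fused image.
bands = rng.uniform(0.02, 0.30, size=(200, 3))       # e.g. green, red, NIR
toc = 5.0 + 40.0 * bands[:, 1] / bands[:, 0] + rng.normal(0.0, 0.5, 200)

# Least-squares regression of TOC on the bands plus one band ratio.
X = np.column_stack([np.ones(200), bands, bands[:, 1] / bands[:, 0]])
coef, *_ = np.linalg.lstsq(X, toc, rcond=None)

pred = X @ coef
print("RMSE:", np.sqrt(np.mean((pred - toc) ** 2)))  # close to the noise level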
M.S.Env.E.
Masters
Civil, Environmental and Construction Engineering
Engineering and Computer Science
Environmental Engineering
APA, Harvard, Vancouver, ISO, and other styles
41

Vivet, Damien. "Perception de l'environnement par radar hyperfréquence. Application à la localisation et la cartographie simultanées, à la détection et au suivi d'objets mobiles en milieu extérieur." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00659270.

Full text
Abstract:
In the context of outdoor mobile robotics, the notions of perception and localization are essential to the autonomous operation of a vehicle. The objectives of this thesis work are multiple and lead towards the goal of simultaneous localization and mapping of a dynamic outdoor environment with detection and tracking of moving objects (SLAMMOT), using a single rotating exteroceptive radar sensor under so-called "realistic" driving conditions, i.e. at high speed, around 30 km/h. It should be noted that at such speeds, the data acquired by a rotating sensor are corrupted by the vehicle's own motion. This distortion, usually considered a disturbance, is analyzed here as a source of information. This study also aims to evaluate the potential of an FMCW (frequency-modulated continuous-wave) radar sensor for the operation of an autonomous robotic vehicle. We have thus proposed several contributions: an on-the-fly correction of the distortion using proprioceptive sensors, which led to a simultaneous localization and mapping (SLAM) application; a method for evaluating segment-based SLAM results; the use of the data distortion for proprioceptive purposes, leading to a SLAM application; an odometry principle based on the Doppler data inherent to the radar sensor; and a method for detecting and tracking moving objects (DATMO) with a single radar.
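The Doppler-based odometry principle mentioned above can be sketched as follows (with illustrative assumptions, not the thesis's algorithm): for a static scene, the radial velocity measured at beam angle theta is the projection of the ego-velocity, v_r = vx·cos(theta) + vy·sin(theta), so one least-squares fit per radar scan recovers (vx, vy):

import numpy as np

rng = np.random.default_rng(3)
vx_true, vy_true = 8.0, 0.5        # ego-velocity in m/s (invented)

theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)  # beam angles
v_r = vx_true * np.cos(theta) + vy_true * np.sin(theta)     # ideal Doppler
v_r += rng.normal(0.0, 0.1, theta.size)                     # sensor noise

# One linear least-squares fit per scan recovers the velocity vector.
A = np.column_stack([np.cos(theta), np.sin(theta)])
(vx, vy), *_ = np.linalg.lstsq(A, v_r, rcond=None)
print(vx, vy)                       # ~ (8.0, 0.5)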
APA, Harvard, Vancouver, ISO, and other styles
42

Harizi, Walid. "Caractérisation de l'endommagement des composites à matrice polymère par une approche multi-technique non destructive." Thesis, Valenciennes, 2012. http://www.theses.fr/2012VALE0033.

Full text
Abstract:
This innovative study consists in implementing, in the same experimental procedure, three nondestructive characterization techniques simultaneously: acoustic emission, infrared thermography and ultrasonic waves, for the characterization of damage in cross-ply polymer-matrix composite materials (PMC) [0/90]S. Each technique demonstrated its potential to reveal damage, depending on its intrinsic characteristics. Acoustic emission was used in its classical form and coupled with a data classification obtained by k-means and the Kohonen map. Infrared thermography was studied in both its passive and active forms; ultrasonic methods were used by exploiting the amplitude and velocity of longitudinal and Lamb waves, respectively. It was shown that the adopted multi-technique approach is very interesting for obtaining a full diagnosis of the health state of the material before and after uniaxial tensile loading. The "complementarity" aspect between the three techniques proved more relevant than the "redundancy" aspect. Data fusion was used to reach a reliable, comprehensive and credible decision about the different damage mechanisms that may appear in a PMC material; this was possible only for the two imaging techniques, ultrasonic C-scan and infrared thermography. Overall, the results show that these three techniques are potentially able to describe the damage state of the material, but that they do not quantify it in the same way.
APA, Harvard, Vancouver, ISO, and other styles
43

Reche, Jérôme. "Nouvelle méthodologie hybride pour la mesure de rugosités sub-nanométriques." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT050.

Full text
Abstract:
La détermination de la rugosité sub-nanométrique sur les flancs des motifs, dont les dimensions critiques atteignent une taille inférieure à 10nm, devient une étape primordiale. Mais à ce jour aucune technique de métrologie n'est suffisamment robuste pour garantir un résultat juste et précis. Une voie actuellement en cours d'exploration pour la mesure dimensionnelle consiste à hybrider différentes techniques de métrologie. Pour ce faire, des algorithmes de fusion de données sont développés afin de traiter les informations issues de multiples équipements de métrologie. Le but étant donc d’utiliser ce même type de méthode pour la mesure de rugosité de ligne. Ces travaux de thèse explicitent tout d’abord les progrès de méthodologie de mesure de rugosité de ligne au travers de la décomposition fréquentielle et des modèles associés. Les différentes techniques utilisées pour la mesure de rugosité de lignes sont présentées avec une nouveauté importante concernant le développement et l’utilisation de la technique SAXS pour ce type de mesure. Cette technique possède un potentiel élevé pour la détermination de motifs sub nanométriques. Des étalons de rugosités de ligne sont fabriqués, sur la base de l’état de l’art comportant des rugosités périodiques, mais aussi, des rugosités plus complexes déterminées par un modèle statistique utilisé normalement pour la mesure. Ces travaux se focalisent finalement sur les méthodes d’hybridation et plus particulièrement sur l’utilisation de réseaux de neurones. Ainsi, la mise en place d’un réseau de neurones est détaillée au travers de la multitude de paramètres qu’il comporte. Le choix d’un apprentissage du réseau de neurones sur simulation mène à la nécessité de savoir générer les différentes métrologies en présence
Determining sub-nanometric roughness on the sidewalls of patterns whose critical dimensions fall below 10 nm is becoming an essential step, yet no metrology technique is currently robust enough to guarantee an accurate and precise result. One approach being explored for dimensional measurement is to hybridize different metrology techniques, using data fusion algorithms to process information from multiple metrology tools; the aim here is to apply the same kind of method to line roughness measurement. This thesis first presents methodological advances in line roughness measurement through frequency decomposition and the associated models. The techniques used for line roughness measurement are reviewed, including an important novelty: the development and use of SAXS (Small-Angle X-ray Scattering) for this type of measurement, a technique with high potential for characterizing sub-nanometric patterns. Line roughness reference samples are designed and manufactured, following the state of the art with periodic roughness, but also with more complex roughness generated from a statistical model normally used on the measurement side. Finally, the work focuses on hybridization methods, in particular the use of neural networks: setting up a neural network is detailed through the many parameters it involves, and the choice of training it on simulations requires being able to generate the responses of the different metrology techniques involved.
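The frequency decomposition underlying the line-roughness models above can be sketched as follows: a synthetic, correlated line-edge profile is split into low- and high-frequency contributions through its power spectral density, from which a 3-sigma roughness per band is recovered via Parseval's theorem. The sampling step, correlation length, sigma and band split are all invented for illustration.

```python
# Minimal sketch: frequency decomposition of a synthetic line-edge profile.
import numpy as np

rng = np.random.default_rng(1)
n, dx = 1024, 1.0                     # 1024 edge points, 1 nm sampling (assumed)
# Build a correlated synthetic edge by low-pass filtering white noise
f_r = np.fft.rfftfreq(n, dx)
xi = 20.0                             # correlation length in nm (assumed)
H = 1.0 / np.sqrt(1.0 + (2 * np.pi * f_r * xi) ** 2)
edge = np.fft.irfft(np.fft.rfft(rng.normal(0, 1, n)) * H, n)
edge = (edge - edge.mean()) * (1.5 / edge.std())   # sigma = 1.5 nm (assumed)

# Two-sided PSD normalized so that psd.sum() equals the variance (Parseval)
psd = np.abs(np.fft.fft(edge)) ** 2 / n**2
f = np.fft.fftfreq(n, dx)
cut = 1.0 / 50.0                      # band split at a 50 nm wavelength (assumed)
low = psd[np.abs(f) < cut].sum()
high = psd[np.abs(f) >= cut].sum()
print(f"total 3-sigma roughness ~ {3 * edge.std():.2f} nm")
print(f"low-frequency band ~ {3 * low**0.5:.2f} nm, "
      f"high-frequency band ~ {3 * high**0.5:.2f} nm")
```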
APA, Harvard, Vancouver, ISO, and other styles
44

Griesbach Schuch, Nivea. "Métrologie Hybride pour le contrôle dimensionnel en lithographie." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT063/document.

Full text
Abstract:
The semiconductor industry continues to evolve at a fast pace, proposing a new technology node roughly every two years. Each new node presents reduced feature sizes and stricter dimension control; as device features shrink, the tolerances allowed for metrology errors must shrink as well, pushing the evolution of metrology tools. No individual metrology technique alone can meet the tight requirements of the industry today, let alone those of the next technology generations. Besides the limitations of the metrology methods themselves, other constraints, such as the amount of metrology data available for higher-order analysis and the time required to generate such data, are also relevant and affect the use of metrology in production. For the production of advanced technology nodes, neither speed nor precision may be sacrificed, which calls for smarter metrology approaches such as Hybrid Metrology. Hybrid Metrology consists of employing different metrology strategies together in order to combine their strengths while mitigating their weaknesses; the goal is to obtain final data with better characteristics than each method provides separately. One family of techniques that can be used to combine data coming from different metrology tools is Data Fusion, for which a large number of methods, based on different mathematical tools, have been developed. The first goal of this thesis project was to start developing the topics of Data Fusion and Hybrid Metrology within the two laboratories whose cooperation made this work possible: LTM (Laboratoire des Technologies de la Microélectronique) and LETI (Laboratoire d'électronique et de technologie de l'information). This thesis presents the concepts of Data Fusion in the context of Hybrid Metrology applied to dimensional measurement for the semiconductor industry; the concepts extend to many other fields of application. The basics of state-of-the-art metrology techniques are presented and discussed: the CD-SEM, for its fast and almost non-destructive metrology; the AFM, for its accurate profile view of patterns and non-destructive character; scatterometry, for its precise, global and fast measurements; and the FIB-STEM, as an accuracy reference for any type of profile, although destructive. The strengths and weaknesses of these methods are discussed in order to motivate Hybrid Metrology and to identify the role each method can play in this context. Several experiments were performed during this thesis to provide further knowledge about the characteristics and limitations of each metrology method, and to serve as inputs or references in the different Hybrid Metrology scenarios proposed. The method selected for fusing the data coming from different metrology techniques is a Bayesian approach, evaluated in different experimental contexts for both height and CD metrology, combining different metrology methods. Results were evaluated both for the debiasing step alone and for the complete fusion flow; in both cases, the advantages of a Hybrid Metrology approach for improving measurement precision and accuracy were clear.
The presented Hybrid Metrology technique may be used by the semiconductor industry at different steps of the fabrication process. It can also provide information for machine calibration, such as a CD-SEM tool being calibrated from Hybrid Metrology results generated with the CD-SEM itself together with scatterometry data.
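Under Gaussian assumptions and a flat prior, the Bayesian fusion step applied after debiasing reduces to inverse-variance weighting of the individual measurements; the following minimal sketch, with invented CD values in nm, shows why the fused result is tighter than either input.

```python
# Minimal sketch: Bayesian fusion of two debiased Gaussian CD measurements.
# With Gaussian likelihoods and a flat prior, the posterior mean is the
# inverse-variance weighted average. All numbers are invented for illustration.

def fuse(m1, s1, m2, s2):
    """Fuse two measurements m1 +/- s1 and m2 +/- s2 (1-sigma each)."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5          # always smaller than min(s1, s2)
    return mean, sigma

# e.g. CD-SEM: 32.4 +/- 0.8 nm, scatterometry: 31.9 +/- 0.3 nm (assumed)
mean, sigma = fuse(32.4, 0.8, 31.9, 0.3)
print(f"fused CD = {mean:.2f} +/- {sigma:.2f} nm")
```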
APA, Harvard, Vancouver, ISO, and other styles
45

Elshehaly, Mai, D. Gračanin, M. Gad, H. G. Elmongui, and K. Matković. "Interactive Fusion and Tracking For Multi‐Modal Spatial Data Visualization." 2015. http://hdl.handle.net/10454/17884.

Full text
Abstract:
Scientific data acquired through sensors which monitor natural phenomena, as well as simulation data that imitate time‐identified events, have fueled the need for interactive techniques to successfully analyze and understand trends and patterns across space and time. We present a novel interactive visualization technique that fuses ground truth measurements with simulation results in real‐time to support the continuous tracking and analysis of spatiotemporal patterns. We start by constructing a reference model which densely represents the expected temporal behavior, and then use GPU parallelism to advect measurements on the model and track their location at any given point in time. Our results show that users can interactively fill the spatio‐temporal gaps in real world observations, and generate animations that accurately describe physical phenomena.
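A toy, CPU-only sketch of the advection idea (the paper's implementation is GPU-parallel): measurement points are carried along a reference velocity field by forward-Euler steps so that their location can be queried at any later time. The rotational field, step size and points are invented for illustration.

```python
# Minimal sketch: advecting measurement points along a reference velocity
# field with forward-Euler steps (serial toy version of a GPU kernel).
import numpy as np

def velocity(p):
    """Reference model: rigid rotation about the origin (assumed)."""
    x, y = p[..., 0], p[..., 1]
    return np.stack([-y, x], axis=-1)

points = np.array([[1.0, 0.0], [0.5, 0.5]])   # ground-truth measurement sites
dt, steps = 0.01, 157                          # ~ a quarter turn (pi/2 radians)
for _ in range(steps):
    points = points + dt * velocity(points)    # forward Euler step
print(points.round(2))  # each point rotated ~90 degrees (with slight Euler drift)
```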
APA, Harvard, Vancouver, ISO, and other styles
46

Hsueh, Chi-Shun, and 薛吉順. "Multisensor Data Fusion Techniques for Robotic Control in Military Applications." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/25408018477637986448.

Full text
Abstract:
Master's thesis
Chung Cheng Institute of Technology
Institute of Weapon Systems Engineering
87 (ROC calendar, i.e. academic year 1998)
Single-sensor perceptual systems in robots have not been entirely successful for more demanding tasks in navigation, tracking and goal recognition, which has limited the potential benefits of robots for applications in space, defense and manufacturing. Multisensor data fusion (MSDF) is increasingly viewed as an important perceptual activity in robotics: sensory data from a range of disparate sensors are used to automatically extract the maximum amount of information about the sensed environment under all operating conditions. In the first part of this thesis we show the results of implementing neural network (NN) tracking controllers on a SCORBOT ER VII manipulator. A NN feed-forward controller was employed in the first control scheme, combined with a feedback proportional-derivative (PD) controller. In the second control scheme, a second NN was added to the first scheme to compensate the trajectory planner. It is shown that the tracking performance of the dual NN with PD controller is far better than that of the standard PD controller or of the NN with PD feed-forward/feedback controllers. In the second part of this thesis we employed an Elman neural network as the sensory data fusion computational technique for approximating the optimal visual/servo (camera/joints) mapping. The results showed that the network can successfully predict the next position of the tracked target. This new approach, applied in a learning predictive controller for the SCORBOT ER VII robot manipulator, successfully tracked and intercepted a moving object. Finally, we propose an MSDF control scheme that combines the first control scheme (PD+NN) with the sensory data fusion model. A visual/servo fusion model based on an Elman neural network structure handles the tracking process and adapts itself through learning. We verified the effectiveness of this scheme through visual/servo fusion control simulations of a SCORBOT ER VII robot manipulator tracking a randomly moving target. This thesis thus covers visual servoing, multisensor fusion and robot control for trajectory tracking. The main features of the MSDF control scheme are: (1) data fusion is applied to two disparate sensors; (2) the fusion model is model-free and able to learn, allowing arbitrary motion tracking; (3) the robot motion control exhibits greater robustness and adaptation.
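The key feature of the Elman network used here is its context layer, a copy of the previous hidden state fed back as input, which is what lets it predict the next target position from a sequence of observations. The following minimal sketch shows the forward pass only, with random untrained weights and invented layer sizes; training is omitted.

```python
# Minimal sketch of an Elman recurrent network forward pass: the hidden state
# is fed back as a "context" input at the next step, letting the network map
# an observed trajectory to a next-position prediction. Weights are random
# and untrained here; sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 2, 8, 2                 # (x, y) in, predicted (x, y) out
W_in  = rng.normal(0, 0.5, (n_hid, n_in))
W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))   # context (previous hidden) weights
W_out = rng.normal(0, 0.5, (n_out, n_hid))

def elman_predict(track):
    h = np.zeros(n_hid)                      # context starts empty
    for p in track:                          # feed the observed trajectory
        h = np.tanh(W_in @ p + W_ctx @ h)
    return W_out @ h                         # linear read-out: next-position guess

# A short observed target track (invented); a trained net would extrapolate it.
track = [np.array([t * 0.1, np.sin(t * 0.1)]) for t in range(10)]
print(elman_predict(track))
```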
APA, Harvard, Vancouver, ISO, and other styles
47

Tsai, Bai-Li, and 蔡百里. "A Study on Travel Time Estimation Applications of Data Fusion Techniques." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/41279504684983110548.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Transportation Management
94 (ROC calendar, i.e. academic year 2005)
In recent years, the government has been pursuing the development of Advanced Traveler Information Systems, one of the nine service domains of Intelligent Transportation Systems. To provide accurate information to road users for their route and mode choices, estimating path travel time is an important issue. To estimate travel time, vehicle detectors and probe vehicles are used to collect information such as flow, occupancy and speed. At present, rather few vehicle detectors remain usable, and with limited resources and budget it is difficult to deploy detectors widely or to add enough probe vehicles in the short term to make up for the shortage of data. This study applies the concept of the spatial distribution of vehicle speeds over a roadway segment and investigates how many probe vehicles are needed to describe or estimate the travel time of a segment. The scope is a roadway segment, with or without an intersection, based on the principle that a sample distribution reflects the characteristics of the population. A probe vehicle can thus be regarded as an instantaneous fixed vehicle detector: using the instantaneous speeds and positions of probe vehicles, a sample speed distribution is built, from which the required probe fleet size is derived and the instantaneous travel time is estimated. Furthermore, the feasibility of fusing this instantaneous sampling method with vehicle detector data is tested. After determining the probe fleet size, data collected from a real network and a calibrated simulation network are used: data from vehicle detectors and probe vehicles are collected through simulation, and data fusion is then carried out to estimate travel time. The vehicle detector estimates density from flow and occupancy, travel time is estimated with the OH and Webster models, and this is matched with the travel times of probe vehicles traversing the segment. This study therefore covers: (1) the development and testing of an algorithm for probe vehicle fleet size; (2) a comparison of data fusion methods and the situations each suits; and (3) travel time estimation by data fusion, in the hope of providing road users with more accurate travel information. The results show that the probe fleet sizes obtained from the instantaneous speed distribution are larger than in other studies: depending on segment length and flow rate, they range from about ten to sixty percent, with an average similar to Tetsuhiro (2005), who reported that forty percent probe vehicles can collect traffic information continuously. In the data fusion tests using the instantaneous sampling method, the Weighted Average performs better for a single segment, the Artificial Neural Network performs better for two segments, and data fusion reduces the travel time errors of the individual estimators. The Weighted Average is suitable for segments under 400 meters, probe vehicle rates above 10 percent and 3-minute (near real-time) updates; the Artificial Neural Network is suitable for segments over 400 meters, probe vehicle rates below 10 percent and 5-minute (longer) updates.
Finally, the advantages and disadvantages of the two methods are summarized for related applications.
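The weighted-average fusion that performed best on short segments can be sketched directly: each source's travel-time estimate is weighted by the inverse of its error variance. The travel times and variances below are invented for illustration; in practice the weights would come from historical residuals of each source.

```python
# Minimal sketch: weighted-average fusion of segment travel times from a
# vehicle detector and from probe vehicles. All numbers are invented.

detector_tt, detector_var = 92.0, 15.0**2   # seconds, from flow/occupancy model
probe_tt,    probe_var    = 104.0, 9.0**2   # seconds, from probe samples

w_det = 1.0 / detector_var                  # inverse-variance weights
w_prb = 1.0 / probe_var
fused = (w_det * detector_tt + w_prb * probe_tt) / (w_det + w_prb)
print(f"fused travel time ~ {fused:.1f} s")  # pulled toward the better source
```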
APA, Harvard, Vancouver, ISO, and other styles
48

Shi, Bo-Yuan, and 施博元. "Mobile Vehicle Location Estimation using Wireless Communication Landmarks and Data Fusion Techniques." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/46442003243322567468.

Full text
Abstract:
Master's thesis
Yu Da College of Business
Graduate Institute of Information Management
94 (ROC calendar, i.e. academic year 2005)
Mobile localization continuously determines the position, direction and velocity of a mobile vehicle: as the vehicle moves, these quantities must be known and predicted so that the vehicle can confirm its position and trajectory without getting lost. The Global Positioning System (GPS) is often used to eliminate the accumulated error of an Inertial Navigation System (INS); however, GPS positioning accuracy depends strongly on the number and distribution of the GPS satellites being tracked. This research uses wireless communication landmarks in the form of pseudo-satellites (pseudolites, PL), ground-based transmitters of GPS-like signals. The problem addressed here is estimating the kinematic state components of a vehicle in autonomous navigation from range, elevation and azimuth angles measured in the Line-of-Sight (LOS) coordinate system. Estimates of the absolute position and velocity of the vehicle in a Local Inertial Cartesian Coordinate System (LICCS) are provided by a Kalman filter and by a data fusion algorithm called the covariance matching method. The performance of the proposed algorithm is compared with that of Kalman filters in simulations of typical vehicle maneuvering scenarios. The results show that the Averaged Root Mean Square Error (ARMSE) of position and velocity with the filters alone is larger (by about 26% and 8%, respectively) than with data fusion. Keywords: Landmark, Kalman Filter, Data Fusion.
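A minimal sketch of covariance-weighted fusion of a drifting INS estimate with a pseudolite (landmark) position fix, the two-estimate special case of a Kalman update; positions and covariances are invented, and the fused covariance shrinks below both inputs.

```python
# Minimal sketch: fusing an INS position estimate with a pseudolite/landmark
# fix by covariance weighting. Positions (m) and covariances are invented.
import numpy as np

x_ins = np.array([100.0, 50.0]);  P_ins = np.diag([25.0, 25.0])   # drifted INS
x_pl  = np.array([96.0, 53.0]);   P_pl  = np.diag([4.0, 9.0])     # landmark fix

W = P_ins @ np.linalg.inv(P_ins + P_pl)      # gain: trust the tighter source
x_fused = x_ins + W @ (x_pl - x_ins)
P_fused = (np.eye(2) - W) @ P_ins            # fused covariance is smaller
print(x_fused.round(2), np.diag(P_fused).round(2))
```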
APA, Harvard, Vancouver, ISO, and other styles
49

林明輝. "The Application of Data Fusion Techniques in Cellular-based Radio Location System." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/57296449779899686124.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Electrical Engineering
91 (ROC calendar, i.e. academic year 2002)
This thesis presents data fusion methods for wireless location systems. Time-based wireless location systems usually process time-of-arrival (TOA) or time-difference-of-arrival (TDOA) measurements. In TOA and TDOA systems, different geometric configurations of mobile stations and base stations can produce GDOP errors. Data fusion techniques are commonly used in multisensor systems: the idea is to increase system accuracy by combining measurements from multiple sensors. By using data fusion methods to process the TOA and TDOA measurements effectively, we can reduce the effects of GDOP error and increase the accuracy of wireless location systems. Non-line-of-sight (NLOS) propagation is the major error source in wireless location systems, and its mitigation through a line-of-sight (LOS) reconstruction technique is discussed. Three data fusion techniques are presented: Bayes rules, Dempster-Shafer evidential reasoning, and fuzzy logic. The data fusion methods and the LOS reconstruction technique are applied to TOA and TDOA wireless location systems. Simulation results show that the structures with data fusion methods effectively increase the accuracy of wireless location systems.
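Of the three fusion rules compared here, Dempster-Shafer combination is the least standard to set up, so a minimal sketch follows: two evidence sources assign mass over the toy frame {LOS, NLOS} (mass values invented), and Dempster's rule renormalizes away the conflicting mass.

```python
# Minimal sketch: Dempster's rule of combination on the frame {LOS, NLOS}.
# Mass on the full frame represents "unknown". Mass values are invented.
from itertools import product

def dempster(m1, m2):
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y                      # mass on empty intersection
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

LOS, NLOS = frozenset({'LOS'}), frozenset({'NLOS'})
ALL = LOS | NLOS
m_toa  = {LOS: 0.6, NLOS: 0.1, ALL: 0.3}    # evidence from TOA residuals (assumed)
m_tdoa = {LOS: 0.5, NLOS: 0.2, ALL: 0.3}    # evidence from TDOA residuals (assumed)
print(dempster(m_toa, m_tdoa))               # belief in LOS increases to ~0.76
```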
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Jing-Cheng, and 陳滰埕. "Study of Experimental Design and Data Fusion Techniques to Construction Engineering Application." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2f5a3t.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
105 (ROC calendar, i.e. academic year 2016)
There are many uncertainties in construction engineering problems such as structural safety inspection, reliability analysis and building damage detection: their experimental results vary because they can be influenced by many internal and external factors. To make detection complete, it is necessary to use experimental design to explore all possible factors and then to apply data fusion to draw conclusions from the results. This study takes pipeline leak detection as an example to test this process. Possible water network conditions are designed using Design of Experiments (DOE): demand measurement data and supply pressure are set as inputs, and simulation software is used to simulate the water network conditions. Initial pipe flow parameters are then obtained and calibrated with an optimization algorithm. After that, the concepts of volume balance and tolerance design are used to determine leakage, and finally data fusion techniques are used to analyze the leak probability.
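The volume-balance test described above amounts to flagging periods where supplied and metered volumes disagree beyond a tolerance, then fusing the repeated outcomes into a leak probability; the toy sketch below uses invented flows, an invented tolerance, and a naive-Bayes fusion with assumed hit and false-alarm rates.

```python
# Minimal sketch: volume-balance leak screening with a tolerance band, then a
# naive-Bayes fusion of the repeated test outcomes into a leak probability.
# Flow values, tolerance, prior and detection rates are all invented.

supply = [120.0, 118.5, 121.0]          # supplied volume per period (m^3)
demand = [112.0, 111.0, 113.5]          # metered demand per period (m^3)
tol = 5.0                               # tolerance on the balance (m^3), assumed

flags = [abs(s - d) > tol for s, d in zip(supply, demand)]

# Fuse the binary flags into P(leak | evidence) via likelihood-ratio updates:
p_leak, tpr, fpr = 0.1, 0.8, 0.1        # prior, hit rate, false-alarm rate (assumed)
odds = p_leak / (1 - p_leak)
for f in flags:
    odds *= (tpr / fpr) if f else ((1 - tpr) / (1 - fpr))
print(f"flags={flags}, P(leak) ~ {odds / (1 + odds):.2f}")
```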
APA, Harvard, Vancouver, ISO, and other styles
