Academic literature on the topic 'Algorithmes de fusion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithmes de fusion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithmes de fusion"

1

Thomson, Ashlee J., Jacqueline A. Rehn, Susan L. Heatley, Laura N. Eadie, Elyse C. Page, Caitlin Schutz, Barbara J. McClure, et al. "Reproducible Bioinformatics Analysis Workflows for Detecting IGH Gene Fusions in B-Cell Acute Lymphoblastic Leukaemia Patients." Cancers 15, no. 19 (September 26, 2023): 4731. http://dx.doi.org/10.3390/cancers15194731.

Abstract:
B-cell acute lymphoblastic leukaemia (B-ALL) is characterised by diverse genomic alterations, the most frequent being gene fusions detected via transcriptomic analysis (mRNA-seq). Due to its hypervariable nature, gene fusions involving the Immunoglobulin Heavy Chain (IGH) locus can be difficult to detect with standard gene fusion calling algorithms and significant computational resources and analysis times are required. We aimed to optimize a gene fusion calling workflow to achieve best-case sensitivity for IGH gene fusion detection. Using Nextflow, we developed a simplified workflow containing the algorithms FusionCatcher, Arriba, and STAR-Fusion. We analysed samples from 35 patients harbouring IGH fusions (IGH::CRLF2 n = 17, IGH::DUX4 n = 15, IGH::EPOR n = 3) and assessed the detection rates for each caller, before optimizing the parameters to enhance sensitivity for IGH fusions. Initial results showed that FusionCatcher and Arriba outperformed STAR-Fusion (85–89% vs. 29% of IGH fusions reported). We found that extensive filtering in STAR-Fusion hindered IGH reporting. By adjusting specific filtering steps (e.g., read support, fusion fragments per million total reads), we achieved a 94% reporting rate for IGH fusions with STAR-Fusion. This analysis highlights the importance of filtering optimization for IGH gene fusion events, offering alternative workflows for difficult-to-detect high-risk B-ALL subtypes.
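The filter adjustment described in this abstract (relaxing read-support and fusion-fragments-per-million thresholds so that IGH events are not discarded) can be illustrated with a small post-processing sketch. This is not the authors' Nextflow workflow; the column names, file name, and threshold values below are assumptions chosen for illustration only.

```python
import csv

# Assumed default vs. relaxed cut-offs (illustrative values, not the paper's).
DEFAULT_MIN_FFPM, RELAXED_MIN_FFPM = 0.1, 0.02   # fusion fragments per million
DEFAULT_MIN_READS, RELAXED_MIN_READS = 2, 1      # junction + spanning read support

def keep_call(row):
    """Apply relaxed filters to IGH-involving fusions, default filters to the rest."""
    is_igh = "IGH" in row["FusionName"]
    min_ffpm = RELAXED_MIN_FFPM if is_igh else DEFAULT_MIN_FFPM
    min_reads = RELAXED_MIN_READS if is_igh else DEFAULT_MIN_READS
    support = int(row["JunctionReadCount"]) + int(row["SpanningFragCount"])
    return float(row["FFPM"]) >= min_ffpm and support >= min_reads

def filter_predictions(path):
    """Read a tab-separated fusion-prediction file and keep calls passing the filters."""
    with open(path, newline="") as handle:
        return [row for row in csv.DictReader(handle, delimiter="\t") if keep_call(row)]

# Example (hypothetical file name): filtered = filter_predictions("fusion_predictions.tsv")
```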
2

Carrara, Matteo, Marco Beccuti, Fulvio Lazzarato, Federica Cavallo, Francesca Cordero, Susanna Donatelli, and Raffaele A. Calogero. "State-of-the-Art Fusion-Finder Algorithms Sensitivity and Specificity." BioMed Research International 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/340620.

Abstract:
Background. Gene fusions arising from chromosomal translocations have been implicated in cancer. RNA-seq has the potential to discover such rearrangements generating functional proteins (chimera/fusion). Recently, many methods for chimera detection have been published. However, the specificity and sensitivity of those tools were not extensively investigated in a comparative way. Results. We tested eight fusion-detection tools (FusionHunter, FusionMap, FusionFinder, MapSplice, deFuse, Bellerophontes, ChimeraScan, and TopHat-fusion) to detect fusion events using synthetic and real datasets encompassing chimeras. A comparison run only on synthetic data could generate misleading results, since we found no counterpart in the real dataset. Furthermore, most tools report a very high number of false positive chimeras. In particular, the most sensitive tool, ChimeraScan, reports a large number of false positives that we were able to reduce significantly by devising and applying two filters to remove fusions not supported by fusion junction-spanning reads or encompassing large intronic regions. Conclusions. The discordant results obtained using synthetic and real datasets suggest that synthetic datasets encompassing fusion events may not fully capture the complexity of an RNA-seq experiment. Moreover, fusion detection tools are still limited in sensitivity or specificity; thus, there is room for further improvement in fusion-finder algorithms.
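The two filters mentioned above, removing candidates with no junction-spanning reads and candidates spanning very large intronic regions, can be sketched as follows. The field names, the 200 kb cut-off, and the example records are assumptions for illustration, not the authors' exact implementation.

```python
from dataclasses import dataclass

@dataclass
class Chimera:
    gene5: str
    gene3: str
    chrom5: str
    pos5: int
    chrom3: str
    pos3: int
    junction_spanning_reads: int   # reads that cross the fusion junction

MAX_INTRAGENIC_SPAN = 200_000      # assumed cut-off for a "large intronic region"

def passes_filters(c: Chimera) -> bool:
    # Filter 1: require at least one read spanning the fusion junction.
    if c.junction_spanning_reads < 1:
        return False
    # Filter 2: discard same-chromosome candidates whose breakpoints enclose a very
    # large region (likely read-through or mis-mapping artefacts).
    if c.chrom5 == c.chrom3 and abs(c.pos5 - c.pos3) > MAX_INTRAGENIC_SPAN:
        return False
    return True

calls = [Chimera("BCR", "ABL1", "chr22", 23632600, "chr9", 133729451, 12),
         Chimera("GENEA", "GENEB", "chr1", 1_000_000, "chr1", 1_900_000, 0)]
print([f"{c.gene5}--{c.gene3}" for c in calls if passes_filters(c)])
```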
3

Fu Hongyu, 付宏语, 巩岩 Gong Yan, 汪路涵 Wang Luhan, 张艳微 Zhang Yanwei, 郎松 Lang Song, 张志 Zhang Zhi, and 郑汉青 Zheng Hanqing. "多聚焦显微图像融合算法." Laser & Optoelectronics Progress 61, no. 6 (2024): 0618022. http://dx.doi.org/10.3788/lop232015.

4

Tan, Yuxiang, Yann Tambouret, and Stefano Monti. "SimFuse: A Novel Fusion Simulator for RNA Sequencing (RNA-Seq) Data." BioMed Research International 2015 (2015): 1–5. http://dx.doi.org/10.1155/2015/780519.

Abstract:
The performance evaluation of fusion detection algorithms from high-throughput sequencing data crucially relies on the availability of data with known positive and negative cases of gene rearrangements. The use of simulated data circumvents some shortcomings of real data by generation of an unlimited number of true and false positive events, and the consequent robust estimation of accuracy measures, such as precision and recall. Although a few simulated fusion datasets from RNA Sequencing (RNA-Seq) are available, they are of limited sample size. This makes it difficult to systematically evaluate the performance of RNA-Seq based fusion-detection algorithms. Here, we present SimFuse to address this problem. SimFuse utilizes real sequencing data as the fusions’ background to closely approximate the distribution of reads from a real sequencing library and uses a reference genome as the template from which to simulate fusions’ supporting reads. To assess the supporting read-specific performance, SimFuse generates multiple datasets with various numbers of fusion supporting reads. Compared to an extant simulated dataset, SimFuse gives users control over the supporting read features and the sample size of the simulated library, based on which the performance metrics needed for the validation and comparison of alternative fusion-detection algorithms can be rigorously estimated.
5

Dehghannasiri, Roozbeh, Donald E. Freeman, Milos Jordanski, Gillian L. Hsieh, Ana Damljanovic, Erik Lehnert, and Julia Salzman. "Improved detection of gene fusions by applying statistical methods reveals oncogenic RNA cancer drivers." Proceedings of the National Academy of Sciences 116, no. 31 (July 15, 2019): 15524–33. http://dx.doi.org/10.1073/pnas.1900391116.

Abstract:
The extent to which gene fusions function as drivers of cancer remains a critical open question. Current algorithms do not sufficiently identify false-positive fusions arising during library preparation, sequencing, and alignment. Here, we introduce Data-Enriched Efficient PrEcise STatistical fusion detection (DEEPEST), an algorithm that uses statistical modeling to minimize false-positives while increasing the sensitivity of fusion detection. In 9,946 tumor RNA-sequencing datasets from The Cancer Genome Atlas (TCGA) across 33 tumor types, DEEPEST identifies 31,007 fusions, 30% more than identified by other methods, while calling 10-fold fewer false-positive fusions in nontransformed human tissues. We leverage the increased precision of DEEPEST to discover fundamental cancer biology. Namely, 888 candidate oncogenes are identified based on overrepresentation in DEEPEST calls, and 1,078 previously unreported fusions involving long intergenic noncoding RNAs, demonstrating a previously unappreciated prevalence and potential for function. DEEPEST also reveals a high enrichment for fusions involving oncogenes in cancers, including ovarian cancer, which has had minimal treatment advances in recent decades, finding that more than 50% of tumors harbor gene fusions predicted to be oncogenic. Specific protein domains are enriched in DEEPEST calls, indicating a global selection for fusion functionality: kinase domains are nearly 2-fold more enriched in DEEPEST calls than expected by chance, as are domains involved in (anaerobic) metabolism and DNA binding. The statistical algorithms, population-level analytic framework, and the biological conclusions of DEEPEST call for increased attention to gene fusions as drivers of cancer and for future research into using fusions for targeted therapy.
6

Nandeesh, M. D., and Dr M. Meenakshi. "Image Fusion Algorithms for Medical Images-A Comparison." Bonfring International Journal of Advances in Image Processing 5, no. 3 (July 31, 2015): 23–26. http://dx.doi.org/10.9756/bijaip.8051.

7

Karan, Canan, Elaine Tan, Humaira Sarfraz, Christine Marie Walko, Richard D. Kim, Todd C. Knepper, and Ibrahim Halil Sahin. "Clinical and molecular characterization of fusion genes in colorectal cancer." Journal of Clinical Oncology 40, no. 16_suppl (June 1, 2022): e15568-e15568. http://dx.doi.org/10.1200/jco.2022.40.16_suppl.e15568.

Abstract:
e15568 Background: Next-generation sequencing (NGS) based molecular profiling technologies have revealed several oncogenic fusion genes that are actionable with small molecule inhibitors, leading to practice change, particularly in lung cancer. The molecular and clinical characteristics of these gene fusions are not well defined in colorectal cancer (CRC) patients. In this study, we aimed to define the clinical and molecular characteristics of fusion genes in patients with CRC who underwent molecular profiling. Methods: Molecular characteristics of 917 tissue-confirmed CRC patients were retrieved from the Moffitt Cancer Center Clinical Genomics Action Committee database. Patients' demographic and clinicopathological features and treatment history were collected from the database. All fusion genes were identified by hybridization-based NGS computational algorithms that determined cancer-related genes, including single-nucleotide variations, indels, and microsatellite instability (MSI) status. Results: Among a total of 917 patients, 24 patients with CRC (2.6%) were found to have at least one fusion gene, with a total of 26 pathogenic fusions. The gene fusions are shown in Table. The most common, potentially targetable, fusion genes in our cohort were (1) RET fusions 0.5% (5/917), (2) ALK fusions 0.4% (4/917), (3) ROS1 fusions 0.2% (2/917), (4) NTRK1 fusion 0.1% (1/917), (5) NRG1 fusion 0.1% (1/917). Fusion genes were more common in MSI-H CRC (N = 27), and 3 (11.1%) patients with MSI-H CRC were found to have fusion genes [RET (2) and NTRK (1)]. Fusion genes were present in both RAS wild-type (54%; 13/24) and RAS mutant (46%; 11/24) tumors. Most patients were older than 50 years (75%, 18/24) and had a left-sided tumor (61.1%). Conclusions: Fusion genes are rare events in CRC. While fusion genes seem to be more prevalent in MSI-H CRC, RAS status does not correlate with the frequency of fusion genes. Actionable RET and ALK/ROS gene fusions are more common than NTRK fusion genes in this cohort of CRC patients.[Table: see text]
8

Foltz, Steven M., Qingsong Gao, Christopher J. Yoon, Amila Weerasinghe, Hua Sun, Lijun Yao, Mark A. Fiala, et al. "Comprehensive Multi-Omics Analysis of Gene Fusions in a Large Multiple Myeloma Cohort." Blood 132, Supplement 1 (November 29, 2018): 1898. http://dx.doi.org/10.1182/blood-2018-99-117245.

Abstract:
Abstract Introduction: Gene fusions are the result of genomic rearrangements that create hybrid protein products or bring the regulatory elements of one gene into close proximity of another. Fusions often dysregulate gene function or expression through oncogene overexpression or tumor suppressor underexpression (Gao, Liang, Foltz, et al. Cell Rep 2018). Some fusions such as EML4--ALK in lung adenocarcinoma are known druggable targets. Fusion detection algorithms utilize discordantly mapped RNA-seq reads. Careful consideration of detection and filtering procedures is vital for large-scale fusion detection because current methods are prone to reporting false positives and show poor concordance. Multiple myeloma (MM) is a blood cancer in which rapidly expanding clones of plasma cells spread in the bone marrow. Translocations that juxtapose the highly-expressed IGH enhancer with potential oncogenes are associated with overexpression of partner genes, although they may not lead to a detectable gene fusion in RNA-seq data. Previous studies have explored the fusion landscape of multiple myeloma cohorts (Cleynen, et al. Nat Comm 2017; Nasser, et al. Blood 2017). In this study, we developed a novel gene fusion detection pipeline and post-processing strategy to analyze 742 patient samples at the primary time point and 64 samples at follow-up time points (806 total samples) from the Multiple Myeloma Research Foundation (MMRF) CoMMpass Study using RNA-seq, WGS, and clinical data. Methods and Results: We overlapped five fusion detection algorithms (EricScript, FusionCatcher, INTEGRATE, PRADA, and STAR-Fusion) to report fusion events. Our filtered call set consisted of 2,817 fusions with a median of 3 fusions per sample (mean 3.8), similar to glioblastoma, breast, ovarian, and prostate cancers in TCGA. Major recurrent fusions involving immunoglobulin genes included IGH--WHSC1 (88 primary samples), IGL--BMI1 (29), and the upstream neighbor of MYC, PVT1, paired with IGH (6), IGK (3), and IGL (11). For each event, we used WGS data when available to determine if there was genomic support of the gene fusion (based on discordant WGS reads, SV event detection, and MMRF CoMMpass Seq-FISH WGS results) (Miller, et al. Blood 2016). WGS validation rates varied by the level of RNA-seq evidence supporting each fusion, with an overall rate of 24.1%, which is comparable to previously observed pan-cancer validation rates using low-pass WGS. We calculated the association between fusion status and gene expression and identified genes such as BCL2L11, CCND1/2, LTBR, and TXNDC5 that showed significant overexpression (t-test). We explored the clinical connections of fusion events through survival analysis and clinical data correlations, and by mining potentially druggable targets from our Database of Evidence for Precision Oncology (dinglab.wustl.edu/depo) (Sun, Mashl, Sengupta, et al. Bioinformatics 2018). Major examples of upregulated fusion kinases that could potentially be targeted with off-label drug use include FGFR3 and NTRK1. We examined the evolution of fusion events over multiple time points. In one MMRF patient with a t(8;14) translocation joining the IGH locus and transcription factor MAFA, we observed IGH fusions with TOP1MT (neighbor of MAFA) at all four time points with corresponding high expression of TOP1MT and MAFA. 
Using non-MMRF single-cell RNA data from different patients, we were able to track cell-type composition over time as well as detect subpopulations of cells harboring fusions at different time points with potential treatment implications. Discussion: Gene fusions offer potential targets for alternative MM therapies. Careful implementation of gene fusion detection algorithms and post-processing are essential in large cohort studies to reduce false positives and enrich results for clinically relevant information. Clinical fusion detection from untargeted RNA-seq remains a challenge due to poor sensitivity, specificity, and usability. By combining MMRF CoMMpass data from multiple platforms, we have produced a comprehensive fusion profile of 742 MM patients. We have shown novel gene fusion associations with gene expression and clinical data, and we identified candidates for druggability studies. Disclosures Vij: Bristol-Myers Squibb: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding; Celgene: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding; Jazz Pharmaceuticals: Honoraria, Membership on an entity's Board of Directors or advisory committees; Jansson: Honoraria, Membership on an entity's Board of Directors or advisory committees; Amgen: Honoraria, Membership on an entity's Board of Directors or advisory committees; Karyopharma: Honoraria, Membership on an entity's Board of Directors or advisory committees; Takeda: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding.
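A minimal sketch of the overlap step, combining the call sets of several fusion callers and keeping events reported by at least two of them, is given below. The caller names reuse those cited in the abstract, but the consensus rule and data layout are illustrative assumptions rather than the authors' pipeline.

```python
from collections import defaultdict

def normalize(pair):
    """Order-insensitive key for a fusion gene pair, e.g. IGH--WHSC1 == WHSC1--IGH."""
    return tuple(sorted(pair))

def consensus_fusions(calls_by_caller, min_callers=2):
    """calls_by_caller: dict mapping caller name -> iterable of (gene5, gene3) pairs."""
    support = defaultdict(set)
    for caller, calls in calls_by_caller.items():
        for pair in calls:
            support[normalize(pair)].add(caller)
    return {pair: callers for pair, callers in support.items()
            if len(callers) >= min_callers}

# Toy call sets for three of the five callers named in the abstract.
calls = {
    "EricScript":    [("IGH", "WHSC1")],
    "FusionCatcher": [("IGH", "WHSC1"), ("IGL", "BMI1")],
    "STAR-Fusion":   [("WHSC1", "IGH")],
}
print(consensus_fusions(calls))   # {('IGH', 'WHSC1'): {...}}
```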
9

Thomas, Brad B., Yanglong Mou, Lauryn Keeler, Christophe Magnan, Vincent Funari, Lawrence Weiss, Shari Brown, and Sally Agersborg. "A Highly Sensitive and Specific Gene Fusion Algorithm Based on Multiple Fusion Callers and an Ensemble Machine Learning Approach." Blood 136, Supplement 1 (November 5, 2020): 12–13. http://dx.doi.org/10.1182/blood-2020-142020.

Abstract:
Background: Gene Fusion events are common occurrences in malignancies, and are frequently drivers of malignancy. FISH and qPCR are two methods often used for identifying highly prevalent gene fusions/translocations. However, these are single target assays, requiring a lot of effort and sample if multiple assays are needed for multiple targets like sarcoma. High-throughput parallel (NextGen) DNA and RNA sequencing are also in current use to detect and characterize gene fusions. RNA sequencing (RNAseq) has the advantage that multiple markers can be targeted at one time and RNA fusions are readily identified from their product transcripts. While many fusion calling algorithms exist for use on RNAseq data, sensitive fusion callers, needed for samples of low tumor content, often present high false positive rates. Further, there currently is no single variable or element in NGS data that can be used to filter out false positive calls by extant callers. Individual sensitive fusion callers may be considered weak predictors of gene fusions. Combining their results into a single fusion call involves evaluating many elements, which can be a time consuming and difficult manual task. In order to achieve higher accuracy in fusion calls than can be achieved using individual fusion callers, we have combined the results of multiple fusion callers by use of an ensemble learning approach based on random forest models. Our method selects the best group of callers from among several callers, and provides an algorithmic means of combining their results, presenting a metric that can be immediately interpreted as the probability that a called fusion is a true fusion call. Methods: Random forest models were generated with the randomForest package in R, and then tuned using the R caret package. Training data sets consisted of fusion calls deemed true by review and by orthogonal methods including PCR/Sanger sequencing and the commercial Archer™ fusion calling system. We present the results of training on calls made by five fusion callers Arriba, STAR-Fusion, FusionCatcher, deFuse, and Kallisto/pizzly. Logistic training variables (seen vs not seen by the fusion caller) were used for the five callers. Variables also included metrics for the magnitude and balance of coverage on either side of candidate fusion breakpoints reported by Arriba and STAR Fusion ("coverage balance") and a single metric consisting of the number of sequencing reads that cross the candidate breakpoint. The model was validated by 10-fold cross-validation on 598 fusion calls by the five callers. Results: The resulting model is superior to the simple strategy of requiring agreement by n of five callers, particularly with regard to specificity (Table 1). Also, "importance of variables," reported by randomForest, gauges the relative contribution of variables in the model. Here it shows that one caller, Kallisto\pizzly, does not contribute to the model (Table 2). Conclusion: Random Forest modeling provides a viable means of combining gene fusion call data from multiple callers into a single fusion calling tool with improved performance over simple combinations of fusion calls. An additional benefit is seen in that building and evaluating such models can guide the selection of fusion callers, thereby eliminating non-contributory calling methods and ensuring optimal utilization of computational resources. Disclosures Thomas: NeoGenomics,Inc.: Current Employment. Mou:NeoGenomics: Current Employment. Keeler:NeoGenomics: Current Employment. 
Magnan:NeoGenomics: Current Employment. Funari:NeoGenomics: Current Employment. Weiss:Merck: Other: Speaker; Bayer: Other: speaker; Genentech: Other: Speaker; NeoGenomics: Current Employment. Brown:NeoGenomics,Inc.: Current Employment. Agersborg:NeoGenomics: Current Employment.
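The ensemble idea, training a random forest on per-caller votes plus coverage-balance and junction-read features and reading its output as the probability of a true fusion, can be sketched as follows. The authors used the R randomForest and caret packages; this illustration uses Python and scikit-learn instead, with synthetic toy features standing in for real training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed feature layout per candidate fusion:
#   5 binary flags (seen / not seen by each of five callers),
#   1 coverage-balance metric, 1 breakpoint-crossing read count.
rng = np.random.default_rng(0)
X = np.hstack([rng.integers(0, 2, size=(300, 5)),      # caller votes (toy data)
               rng.random((300, 1)),                    # coverage balance
               rng.integers(0, 50, size=(300, 1))])     # junction reads
y = (X[:, :5].sum(axis=1) + (X[:, 6] > 10)) >= 3        # toy "true fusion" labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("10-fold CV accuracy:", cross_val_score(model, X, y, cv=10).mean())

model.fit(X, y)
# Probability that a new candidate is a true fusion, directly interpretable as a score.
candidate = np.array([[1, 1, 0, 1, 0, 0.8, 15]])
print("P(true fusion) =", model.predict_proba(candidate)[0, 1])
# model.feature_importances_ plays the role of randomForest's variable importance.
```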
10

Sun, Changqi, Cong Zhang, and Naixue Xiong. "Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review." Electronics 9, no. 12 (December 17, 2020): 2162. http://dx.doi.org/10.3390/electronics9122162.

Abstract:
Infrared and visible image fusion technologies make full use of the different image features obtained by different sensors, retain complementary information from the source images during the fusion process, and use redundant information to improve the credibility of the fused image. In recent years, many researchers have used deep learning (DL) methods to explore the field of image fusion and found that applying DL improves both the efficiency of the models and the quality of the fusion results. However, DL includes many branches, and there is currently no detailed survey of deep learning methods in image fusion. In this work, we report on the development of image fusion algorithms based on deep learning in recent years. Specifically, this paper first conducts a detailed investigation of infrared and visible image fusion methods based on deep learning, compares the existing fusion algorithms qualitatively and quantitatively using existing fusion quality indicators, and discusses the main contributions, advantages, and disadvantages of the various fusion algorithms. Finally, the research status of infrared and visible image fusion is summarized, and future work is outlined. This survey can help readers grasp the many image fusion methods proposed in recent years and lays a foundation for future research.
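As a point of comparison for the deep-learning methods surveyed here, a deliberately simple pixel-level baseline, a weighted average of registered infrared and visible images plus one common objective quality indicator (entropy), is sketched below. It is only an illustrative baseline under the assumption of pre-registered, same-size 8-bit images, not one of the surveyed algorithms.

```python
import numpy as np

def fuse_weighted(ir, vis, alpha=0.5):
    """Baseline pixel-level fusion: convex combination of registered IR and visible images."""
    return alpha * ir.astype(np.float64) + (1.0 - alpha) * vis.astype(np.float64)

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, a common fusion quality indicator."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
ir = rng.integers(0, 256, size=(64, 64))      # stand-ins for registered source images
vis = rng.integers(0, 256, size=(64, 64))
fused = fuse_weighted(ir, vis, alpha=0.6)
print("entropy(vis) =", entropy(vis), " entropy(fused) =", entropy(fused))
```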

Dissertations / Theses on the topic "Algorithmes de fusion"

1

Kaci, Souhila. "Connaissances et préférences : représentation et fusion en logique possibiliste." Toulouse 3, 2002. http://www.theses.fr/2002TOU30029.

2

Zarrouati-Vissière, Nadège. "La réalité augmentée : fusion de vision et navigation." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00961962.

Abstract:
This thesis studies algorithms for visually augmented reality applications. Such applications raise several needs, which are addressed here while taking into account the indistinguishability of depth and linear motion that arises when monocular systems are used. To insert virtual objects realistically and in real time into images acquired in an arbitrary, unknown environment, it is necessary not only to have a 3D perception of this environment at every instant, but also to localize the camera precisely within it. For the first need, the camera dynamics are assumed to be known; for the second, the depth is assumed to be given as an input: both assumptions are achievable in practice. Both problems are posed in the context of a spherical camera model, which yields rotation-invariant motion equations for light intensity as well as for depth. The theoretical observability of these problems is studied using tools from differential geometry on the Riemannian unit sphere. A practical implementation is presented: the experimental results show that it is possible to localize a camera in an unknown environment while accurately mapping that environment.
3

Arezki, Yassir. "Algorithmes de références 'robustes' pour la métrologie dimensionnelle des surfaces asphériques et des surfaces complexes en optique." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN058.

Abstract:
Les formes asphériques et les surfaces complexes sont une classe très avancée d'éléments optiques. Leur application a considérablement augmenté au cours des dernières années dans les systèmes d'imagerie, l'astronomie, la lithographie, etc. La métrologie de ces pièces est très difficile, en raison de la grande gamme dynamique d'information acquise et la traçabilité à l'unité SI mètre. Elle devrait faire usage de la norme infinie; (Méthode de zone minimum ou la méthode Min-Max) pour calculer l'enveloppe entourant les points dans le jeu de données en réduisant au minimum la différence entre l'écart maximum et l'écart minimal entre la surface et l'ensemble de données. Cette méthode a une grande complexité en fonction du nombre de points, enplus, les algorithmes impliqués sont non-déterministes. Bien que cette méthode fonctionne pour des géométries simples (lignes, plans, cercles, cylindres, cônes et sphères), elle est encore un défi majeur lorsqu' utilisée pour des géométries complexes (asphérique et surfaces complexes). Par conséquent, l'objectif de la thèse est le développement des algorithmes d'ajustement Min-Max pour les deux surfaces asphériques et complexes, afin de fournir des algorithmes de référence robustes pour la grande communauté impliquée dans ce domaine. Les algorithmes de référence à développer devraient être évalués et validés sur plusieurs données de référence (Softgauges) qui seront générées par la suite
Aspheres and freeform surfaces are a very challenging class of optical elements. Their application has grown considerably in the last few years in imaging systems, astronomy, lithography, etc. The metrology for aspheres is very challenging, because of the high dynamic range of the acquired information and the traceability to the SI unit meter. Metrology should make use of the infinite norm; (Minimum Zone Method or Min-Max method) to calculate the envelope enclosing the points in the dataset by minimizing the difference between the maximum deviation and the minimum deviation between the surface and the dataset. This method grows in complexity as the number of points in the dataset increases, and the involved algorithms are non-deterministic. Despite the fact that this method works for simple geometries (lines, planes, circles, cylinders, cones and spheres) it is still a major challenge when used on complex geometries (asphere and freeform surfaces). Therefore, the main objective is to address this key challenge about the development of Min-Max fitting algorithms for both aspherical and freeform surfaces as well as least squares fitting algorithms, in order to provide robust reference algorithms for the large community involved in this domain. The reference algorithms to be developed should be evaluated and validated on several reference data (softgauges) that will be generated using reference data generators
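The Min-Max (minimum-zone) criterion described above can be illustrated on a toy 2-D case: fitting a circle by minimizing the width of the annulus that encloses all points. The sketch below uses a general-purpose Nelder-Mead search from SciPy as a stand-in; the certified reference algorithms targeted by the thesis require exact optimization rather than this heuristic.

```python
import numpy as np
from scipy.optimize import minimize

def minimum_zone_circle(points):
    """Min-Max (minimum-zone) circle fit: find the centre that minimises the width of
    the annulus enclosing all points (max radial deviation minus min radial deviation)."""
    points = np.asarray(points, dtype=float)

    def zone_width(centre):
        radii = np.linalg.norm(points - centre, axis=1)
        return radii.max() - radii.min()

    start = points.mean(axis=0)                      # crude least-squares-style initial guess
    result = minimize(zone_width, start, method="Nelder-Mead")
    radii = np.linalg.norm(points - result.x, axis=1)
    return result.x, radii.mean(), result.fun        # centre, reference radius, zone width

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[3 + 10 * np.cos(angles), -2 + 10 * np.sin(angles)] + rng.normal(0, 0.05, (200, 2))
centre, radius, width = minimum_zone_circle(pts)
print(centre, radius, width)
```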
4

Laporterie, Florence. "Représentations hiérarchiques d'images avec des pyramides morphologiques : application à l'analyse et à la fusion spatio-temporelle de données en observation de la Terre." Toulouse, ENSAE, 2002. http://www.theses.fr/2002ESAE0001.

Abstract:
This dissertation presents the development of a multi-scale image representation based on a family of morphological pyramids, together with its applications to the analysis and fusion of remote-sensing images. The proposed hierarchical representation relies on a pyramidal approach using the non-linear filters of mathematical morphology. Part II first gives a state of the art of pyramidal transforms and then describes the principle of the morphological pyramid. Its properties are studied through different parameter settings and the families of pyramids they generate. The morphological pyramid makes it possible, on the one hand, to separate at each resolution level the detail elements according to their size and their reflectance relative to the surroundings and, on the other hand, to represent the images at lower levels of perception. Part III is devoted to applications of morphological pyramids to the analysis of the observed surfaces. Decomposing the nested elements of a scene into signals that can be separated at different resolutions demonstrates the multi-scale characterization capability. It is also shown how reconstruction processing applied to the details contributes to this analysis. Several examples of analysed landscapes illustrate the methodology. Part IV deals with the fusion of data at different resolutions, in particular data from high-resolution and wide-field-of-view sensors. The morphological-pyramid fusion approach thus creates synthetic data with high spatial resolution and high temporal frequency, enabling a new approach to monitoring land surfaces. The results of this fusion principle are presented for different sets of image acquisition dates. The conclusion highlights three very promising perspectives. First, the morphological pyramid can be used as a browser exploiting the different levels of spatial resolution to access more or less detailed information. Second, the morphological pyramid opens interesting opportunities for registering images of different resolutions. Finally, the morphological pyramid is an attractive framework for data compression through the choice of its various parameters.
5

Poinsot, Audrey. "Traitements pour la reconnaissance biométrique multimodale : algorithmes et architectures." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS010.

Abstract:
Combiner les sources d'information pour créer un système de reconnaissance biométrique multimodal permet d'atténuer les limitations de chaque caractéristique utilisée, et donne l'opportunité d'améliorer significativement les performances. Le travail présenté dans ce manuscrit a été réalisé dans le but de proposer un système de reconnaissance performant, qui réponde à des contraintes d'utilisation grand-public, et qui puisse être implanté sur un système matériel de faible coût. La solution choisie explore les possibilités apportées par la multimodalité, et en particulier par la fusion du visage et de la paume. La chaîne algorithmique propose un traitement basé sur les filtres de Gabor, ainsi qu’une fusion des scores. Une base multimodale réelle de 130 sujets acquise sans contact a été conçue et réalisée pour tester les algorithmes. De très bonnes performances ont été obtenues, et ont été confirmées sur une base virtuelle constituée de deux bases publiques (les bases AR et PolyU). L'étude approfondie de l'architecture des DSP, et les différentes implémentations qui ont été réalisées sur un composant de type TMS320c64x, démontrent qu'il est possible d'implanter le système sur un unique DSP avec des temps de traitement très courts. De plus, un travail de développement conjoint d'algorithmes et d'architectures pour l'implantation FPGA a démontré qu'il était possible de réduire significativement ces temps de traitement
Including multiple sources of information in personal identity recognition reduces the limitations of each used characteristic and gives the opportunity to greatly improve performance. This thesis presents the design work done in order to build an efficient generalpublic recognition system, which can be implemented on a low-cost hardware platform. The chosen solution explores the possibilities offered by multimodality and in particular by the fusion of face and palmprint. The algorithmic chain consists in a processing based on Gabor filters and score fusion. A real database of 130 subjects has been designed and built for the study. High performance has been obtained and confirmed on a virtual database, which consists of two common public biometric databases (AR and PolyU). Thanks to a comprehensive study on the architecture of the DSP components and some implementations carried out on a DSP belonging to the TMS320c64x family, it has been proved that it is possible to implement the system on a single DSP with short processing times. Moreover, an algorithms and architectures development work for FPGA implementation has demonstrated that these times can be significantly reduced
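The score-level fusion step of the face/palmprint chain can be sketched as a min-max normalization of each matcher's scores followed by a weighted sum. The scores, the weight, and the acceptance threshold below are illustrative assumptions; the Gabor feature extraction and the thesis's actual fusion settings are not reproduced here.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] (a common pre-fusion normalisation)."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def weighted_sum_fusion(face_scores, palm_scores, w_face=0.5):
    """Score-level fusion of two unimodal matchers by weighted sum."""
    return w_face * min_max_normalize(face_scores) + (1 - w_face) * min_max_normalize(palm_scores)

# Toy genuine/impostor scores from two hypothetical matchers.
face = [0.82, 0.35, 0.91, 0.30, 0.15]
palm = [0.67, 0.40, 0.88, 0.22, 0.28]
fused = weighted_sum_fusion(face, palm, w_face=0.6)
decision = fused >= 0.5                      # assumed global acceptance threshold
print(fused, decision)
```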
6

Awad, Mohamad M. "Mise en oeuvre d'un système coopératif adaptatif de segmentation d'images multicomposantes." Rennes 1, 2008. http://www.theses.fr/2008REN1S031.

Abstract:
The exploitation of images acquired by various sensors for a demanding application such as remote sensing presents a wide field of investigation and poses many problems at every level of the image processing chain. The development of adaptive and optimized segmentation and fusion methods therefore proves indispensable. Image segmentation and fusion are the key phases of any vision-based recognition or interpretation system: the identification rate or the quality of the interpretation depends closely on the accuracy of the analysis and the relevance of the results of these phases. Although the topic has been studied extensively in the literature, no universal and efficient classification and fusion method exists that allows accurate identification of the classes of a real image when the image contains both uniform regions (weak local variation of luminance) and textured regions. In addition, most of these methods require a priori knowledge that is difficult to obtain in practice. Furthermore, they assume the existence of models whose parameters can be estimated and fitted to the given data. However, such a parametric approach is not robust, and its performance is severely affected by the correctness of the chosen parametric model. In the framework of this thesis, a cooperative, adaptive multi-component image segmentation system using minimal a priori knowledge is developed. The segmentation methods used in this system are nonparametric. The system analyzes the image at several hierarchical levels of complexity while integrating several methods within cooperation mechanisms. Three cooperative approaches are built from methods including a Hybrid Genetic Algorithm, Fuzzy C-Means, the Self-Organizing Map, and Non-Uniform Rational B-Splines. To finalize the image segmentation results, the outputs of the three cooperative approaches are fused. The system is assessed through several experiments using different satellite and aerial images. The results obtained show the high efficiency and accuracy of the developed system.
Dans le domaine de la télédétection, l'exploitation des images acquises par divers capteurs présente un large champ d'investigation et pose de nombreux problèmes à tous les niveaux dans la chaîne de traitement des images. Aussi, le développement d’approches de segmentation et de fusion optimisées et adaptatives, s’avère indispensable. La segmentation et la fusion sont deux étapes essentielles dans tout système de reconnaissance ou d’interprétation par vision: Le taux d'identification ou la qualité de l'interprétation dépend en effet, étroitement de la qualité de l'analyse et la pertinence des résultats de ces phases. Bien que le sujet ait été étudié en détail dans la littérature, il n'existe pas de méthodes universelles et efficaces de segmentation et de fusion qui permettent une identification précise des classes d'une image réelle lorsque celle-ci est composée à la fois de régions uniformes (faible variation locale de luminance) et texturées. En outre, la majorité de ces méthodes nécessitent des connaissances a priori qui sont en pratique difficilement accessibles. En outre, certaines d’entre elles supposent l'existence de modèles dont les paramètres doivent être estimés. Toutefois, une telle approche paramétrique est non robuste et ses performances sont sévèrement altérées par l’ajustement de l'utilisation de modèles paramétriques. Dans le cadre de cette thèse, un système coopératif et adaptatif de segmentation des images multicomposantes est développé. Ce système est non-paramétrique et utilise le minimum de connaissances a priori. Il permet l’analyse de l'image à plusieurs niveaux hiérarchiques en fonction de la complexité tout en intégrant plusieurs méthodes dans les mécanismes de coopération. Trois approches sont intégrées dans le processus coopératif: L’Algorithme Génétique Hybride, l'Algorithme des C-Moyennes Floues, le Réseau de Kohonen (SOM) et la modélisation géométrique par ’’Non-Uniform Rational B-Spline’’. Pour fusionner les différents résultats issus des méthodes coopératives, l’algorithme génétique est appliqué. Le système est évalué sur des images multicomposantes satellitaires et aériennes. Les différents résultats obtenus montrent la grande efficacité et la précision de ce système
7

Khiari, Nefissa. "Biométrie multimodale basée sur l’iris et le visage." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLE014/document.

Abstract:
Cette thèse vise à apporter une contribution dans le domaine de la biométrie par l’iris (l’une des modalités les plus précises et difficiles à pirater) et le visage (l’une des modalités les moins intrusives et les moins coûteuses). A travers ce travail, nous abordons plusieurs aspects importants de la biométrie mono et multimodale. Nous commençons par dresser un état de l’art sur la biométrie monomodale par l’iris et par le visage et sur la multimodalité iris/visage, avant de proposer plusieurs approches personnelles de reconnaissance d’individus pour chacune des deux modalités. Nous abordons, en particulier, la reconnaissance faciale par des approches classiques reposant sur des combinaisons d’algorithmes et des approches bio-inspirées émulant le mécanisme de la vision humaine. Nous démontrons l’intérêt des approches bio-inspirées par rapport aux approches classiques à travers deux méthodes. La première exploite les résultats issus de travaux neuroscientifiques indiquant l’importance des régions et des échelles de décomposition utiles à l’identification d’un visage. La deuxième consiste à appliquer une méthode de codage par ordre de classement dans la phase de prétraitement pour renforcer le contenu informatif des images de visage. Nous retenons la meilleure approche de chacune des modalités de l’iris et du visage pour concevoir deux méthodes biométriques multimodales. A travers ces méthodes, nous évaluons différentes stratégies classiques de fusion multimodale au niveau des scores. Nous proposons ensuite une nouvelle règle de fusion de scores basée sur un facteur de qualité dépendant du taux d’occultation des iris. Puis, nous mettons en avant l’intérêt de l’aspect double échantillons de l’iris dans une approche multimodale.L’ensemble des méthodes proposées sont évaluées sur la base multimodale réelle IV² capturée dans des environnements variables voire dégradés et en suivant un protocole bien précis fourni dans le cadre de la campagne d’évaluation IV². Grâce à une étude comparative avec les algorithmes participants à la campagne IV², nous prouvons la compétitivité de nos algorithmes qui arrivent dans plusieurs cas à se positionner en tête de liste
This thesis aims to make a contribution in the field of biometrics based on the iris (one of the most accurate and hardest to hack modalities) in conjunction with the face (one of the cheapest and least intrusive modalities). Through this work, we discuss several important aspects of unimodal and multimodal biometrics. After an overview of unimodal and multimodal biometrics based on iris and face, we propose several personal approaches to biometric authentication using each single trait. In particular, we address facial recognition first with conventional approaches based on combined algorithms, then with bio-inspired approaches emulating the human vision mechanism. We demonstrate the interest of bio-inspired approaches over conventional approaches through two proposed methods. The first one exploits the results of neuroscientific work indicating the relevant regions and scales in a face identification task. The second consists in applying a rank order coding method at the preprocessing step so as to enhance the information content of face images. We keep the best unimodal approach for iris and for face recognition to design two multimodal biometric methods. Through these methods, we evaluate different classic strategies of multimodal score-level fusion. Afterwards, we propose a new score-level fusion rule based on a quality metric derived from iris occlusion rates. Then, we point out the interest of the double-sample iris aspect in a multimodal approach. All the proposed methods are evaluated on the real multimodal IV² database, captured under variable to degraded environments and following a specific protocol provided as part of the IV² evaluation campaign. After a comparative study with the participant algorithms in the IV² campaign, we prove the competitiveness of our algorithms, which outperform most of the participants in many experiments.
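The idea of an occlusion-dependent fusion rule can be illustrated with a small sketch in which the weight given to the iris score decreases as the measured occlusion rate grows. The linear quality factor and the weight values are assumptions for illustration; they are not the rule proposed in the thesis.

```python
import numpy as np

def quality_weighted_fusion(iris_score, face_score, occlusion_rate, w_iris_max=0.7):
    """Fuse normalised iris and face scores; the iris weight shrinks as the fraction
    of the iris masked by eyelids/eyelashes (occlusion_rate in [0, 1]) grows."""
    quality = 1.0 - np.clip(occlusion_rate, 0.0, 1.0)   # simple, assumed quality factor
    w_iris = w_iris_max * quality
    w_face = 1.0 - w_iris
    return w_iris * iris_score + w_face * face_score

# Same matcher scores, increasingly occluded iris images.
for occ in (0.0, 0.3, 0.8):
    print(occ, quality_weighted_fusion(iris_score=0.9, face_score=0.6, occlusion_rate=occ))
```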
8

Salmeron-Quiroz, Bernardino Benito. "Fusion de données multicapteurs pour la capture de mouvement." Phd thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00148577.

Abstract:
This thesis is set in the context of human motion capture applications, whose goal is to infer the position of the human body. Motion capture systems are software and hardware tools that process data in real time or offline in order to recover the motion (position, orientation) of an object or of a person in space. Several motion capture systems are available on the market. They differ mainly in their technology, but they require the environment to be adapted and sometimes the person to be instrumented. This thesis presents a new motion capture system that provides the 3D orientation and the linear acceleration of a moving body from the measurements of a small sensor unit developed at CEA-LETI. This unit uses a minimal configuration, namely a triaxial magnetometer and a triaxial accelerometer. Several algorithms are proposed for estimating the attitude and the accelerations of interest. The rotation is modelled with a unit quaternion. First, the case of a single attitude unit is considered: the 6-DOF problem, whose aim is to estimate the orientation of a rigid body and its three linear accelerations from the unit's measurements and an optimization algorithm. Second, the motion capture of articulated chains (arm and leg) is addressed. Using ad hoc assumptions on the accelerations at the revolute joints, the motion (orientation) of each segment of the articulated chain is reconstructed, as well as the acceleration at particular points of the segment. These approaches were validated on simulated and real data.
9

Salmeron-Quiroz, Bernardino Benito. "Fusion de données multicapteurs pour la capture de mouvement." Phd thesis, Grenoble 1, 2007. http://www.theses.fr/2007GRE10062.

Abstract:
Cette thèse est située dans le contexte des applications de la capture de mouvement humain dont le but est d'inférer la position du corps humain. Les systèmes de capture de mouvement sont des outils "software" et hardware " qui permettent le traitement en temps réel ou en temps différé de données permettant de retrouver le mouvement (position, orientation) d'un objet ou d'un humain dans l'espace. Différents systèmes de capture de mouvement existent sur le marché. Ils diffèrent essentiellement par leur technologie mais nécessitent une adaptation de l'environnement et parfois l'équipement de la personne. Dans cette thèse, on présente un nouveau système de capture de mouvement permettant d'obtenir l'orientation 3D ainsi que l'accélération linéaire d'un mobile à partir des mesures fournies par une minicentrale, développée au sein du CEA-LETI. Cette minicentrale utilise une configuration minimale, à savoir un triaxe magnétomètre et un triaxe accéléromètre. Dans ce travail, on propose différents algorithmes d'estimation de l'attitude et des accélérations recherchées. La rotation est modélisée à l'aide d'un quaternion unitaire. Dans un premier temps, on a considéré le cas d'une seule centrale d'attitude. On s'est intéressé au problème à 6DDL, dont le but est d'estimer l'orientation d'un corps rigide et ses trois accélérations linéaires à partir des mesures fournies par la minicentrale et d'un algorithme d'optimisation. Dans un second temps, on s'est intéressé au cas de la capture de mouvement de chaînes articulées (bras et jambe). A partir d'hypothèses ad-hoc sur les accélérations au niveau des liaisons pivot, on reconstruit le mouvement de la chaîne articulé (orientation) du segment ainsi que l'accélération en des points particuliers du segment. Ces différentes approches ont été validées avec des données simulés et réelles
This thesis deals with motion capture (MoCap), whose goal is to acquire the attitude of the human body; in our case, the arm and the leg are considered. MoCap trackers are made of software and hardware parts that allow the movement of an object or a human in space to be acquired in real or deferred time. Many MoCap systems exist, but they require the environment to be adapted. In this thesis, a low-cost, low-weight attitude central unit (ACU), namely a triaxial magnetometer and a triaxial accelerometer, is used. This attitude central unit was developed within CEA-LETI. We propose different algorithms to estimate the attitude and the linear accelerations of a rigid body. The unit quaternion is used for the rotation parametrization. First, the attitude and the accelerations (6-DOF case) are estimated from the measurements provided by the ACU via an optimization technique. The motion capture of articulated chains (arm and leg) is also studied: with ad hoc assumptions on the accelerations at the pivot joints, the orientation of the segments as well as the accelerations at particular points of the segments can be estimated. The different approaches proposed in this work have been evaluated with simulated and real data.
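A minimal sketch of attitude estimation from a triaxial accelerometer and a triaxial magnetometer is given below: roll and pitch from the gravity direction, tilt-compensated yaw from the magnetic field, then conversion to a unit quaternion. It assumes a quasi-static sensor (measured acceleration close to gravity) and is a simplification of the optimization-based estimation described in the thesis.

```python
import numpy as np

def attitude_quaternion(accel, mag):
    """Roll/pitch from gravity, yaw from the tilt-compensated magnetic field,
    returned as a unit quaternion (w, x, y, z). Assumes a quasi-static sensor."""
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))

    mx, my, mz = mag / np.linalg.norm(mag)
    # Tilt compensation of the magnetometer reading (one common sign convention).
    mx2 = mx * np.cos(pitch) + mz * np.sin(pitch)
    my2 = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
           - mz * np.sin(roll) * np.cos(pitch))
    yaw = np.arctan2(-my2, mx2)

    # Yaw-pitch-roll Euler angles to quaternion.
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    q = np.array([cr * cp * cy + sr * sp * sy,
                  sr * cp * cy - cr * sp * sy,
                  cr * sp * cy + sr * cp * sy,
                  cr * cp * sy - sr * sp * cy])
    return q / np.linalg.norm(q)

print(attitude_quaternion(np.array([0.0, 0.0, 9.81]), np.array([0.2, 0.0, 0.4])))
```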
10

Bader, Kaci. "Tolérance aux fautes pour la perception multi-capteurs : application à la localisation d'un véhicule intelligent." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2161/document.

Abstract:
La perception est une entrée fondamentale des systèmes robotiques, en particulier pour la localisation, la navigation et l'interaction avec l'environnement. Or les données perçues par les systèmes robotiques sont souvent complexes et sujettes à des imprécisions importantes. Pour remédier à ces problèmes, l'approche multi-capteurs utilise soit plusieurs capteurs de même type pour exploiter leur redondance, soit des capteurs de types différents pour exploiter leur complémentarité afin de réduire les imprécisions et les incertitudes sur les capteurs. La validation de cette approche de fusion de données pose deux problèmes majeurs.Tout d'abord, le comportement des algorithmes de fusion est difficile à prédire,ce qui les rend difficilement vérifiables par des approches formelles. De plus, l'environnement ouvert des systèmes robotiques engendre un contexte d'exécution très large, ce qui rend les tests difficiles et coûteux. L'objet de ces travaux de thèse est de proposer une alternative à la validation en mettant en place des mécanismes de tolérance aux fautes : puisqu'il est difficile d'éliminer toutes les fautes du système de perception, on va chercher à limiter leurs impacts sur son fonctionnement. Nous avons étudié la tolérance aux fautes intrinsèquement permise par la fusion de données en analysant formellement les algorithmes de fusion de données, et nous avons proposé des mécanismes de détection et de rétablissement adaptés à la perception multi-capteurs. Nous avons ensuite implémenté les mécanismes proposés pour une application de localisation de véhicules en utilisant la fusion de données par filtrage de Kalman. Nous avons finalement évalué les mécanismes proposés en utilisant le rejeu de données réelles et la technique d'injection de fautes, et démontré leur efficacité face à des fautes matérielles et logicielles
Perception is a fundamental input for robotic systems, particularly for positioning, navigation and interaction with the environment. However, the data perceived by these systems are often complex and subject to significant imprecision. To overcome these problems, the multi-sensor approach uses either multiple sensors of the same type, to exploit their redundancy, or sensors of different types, to exploit their complementarity, in order to reduce sensor inaccuracies and uncertainties. The validation of the data fusion approach raises two major problems. First, the behavior of fusion algorithms is difficult to predict, which makes them difficult to verify by formal approaches. In addition, the open environment of robotic systems generates a very large execution context, which makes testing difficult and costly. The purpose of this work is to propose an alternative to validation by developing fault tolerance mechanisms: since it is difficult to eliminate all the faults of the perception system, we will try to limit their impact on its operation. We studied the fault tolerance inherently provided by data fusion by formally analyzing data fusion algorithms, and we proposed detection and recovery mechanisms suitable for multi-sensor perception. We implemented the proposed mechanisms on a vehicle localization application using Kalman filtering data fusion. We evaluated the proposed mechanisms using real-data replay and fault injection, and demonstrated their effectiveness against hardware and software faults.
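One way to picture the detection-and-recovery idea is an innovation (chi-square) gate on a Kalman filter: measurements whose normalized innovation exceeds a threshold are treated as faulty and excluded. The 1-D constant-velocity filter below is only an illustrative sketch with assumed noise parameters, not the localization filter developed in the thesis.

```python
import numpy as np

class FaultTolerantKF:
    """1-D constant-velocity Kalman filter with an innovation (chi-square) gate:
    measurements whose normalised innovation exceeds the gate are treated as faulty
    and skipped (detection, then recovery by exclusion)."""

    def __init__(self, dt=0.1, q=0.5, r=1.0, gate=9.0):
        self.x = np.zeros(2)                         # [position, velocity]
        self.P = np.eye(2) * 10.0
        self.F = np.array([[1.0, dt], [0.0, 1.0]])
        self.Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        self.H = np.array([[1.0, 0.0]])
        self.R = np.array([[r]])
        self.gate = gate                             # assumed chi-square threshold, 1 dof

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation and its covariance.
        nu = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        d2 = float(nu.T @ np.linalg.inv(S) @ nu)     # normalised innovation squared
        if d2 > self.gate:                           # fault detected: reject measurement
            return self.x[0], True
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ nu).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0], False

kf = FaultTolerantKF()
for t, z in enumerate([0.1, 0.25, 0.38, 25.0, 0.62]):    # 25.0 simulates a sensor fault
    pos, faulty = kf.step(np.array([z]))
    print(t, round(pos, 3), "fault rejected" if faulty else "")
```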

Books on the topic "Algorithmes de fusion"

1

ScienceDirect (Online service), ed. Image fusion: Algorithms and applications. Amsterdam: Academic Press/Elsevier, 2008.

2

Antony, Richard T. Principles of data fusion automation. Boston: Artech House, 1995.

3

Abdelgawad, Ahmed, and Magdy Bayoumi. Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-1350-9.

4

Abdelgawad, Ahmed. Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks. Boston, MA: Springer US, 2012.

5

Jain, L. C., and N. M. Martin, eds. Fusion of neural networks, fuzzy sets, and genetic algorithms: Industrial applications. Boca Raton: CRC Press, 1999.

6

Carpenter, J. Russell. Progress in navigation filter estimate fusion and its application to spacecraft rendezvous. [Washington, D.C.]: National Aeronautics and Space Administration, 1994.

7

Dasarathy, Belur V., and Society of Photo-optical Instrumentation Engineers, eds. Sensor fusion--architectures, algorithms, and applications III: 7-9 April 1999, Orlando, Florida. Bellingham, Wash: SPIE, 1999.

8

Dasarathy, Belur V., and Society of Photo-optical Instrumentation Engineers, eds. Sensor fusion--architectures, algorithms, and applications II: 16-17 April 1998, Orlando, Florida. Bellingham, Wash: SPIE, 1998.

9

Dasarathy, Belur V., and Society of Photo-optical Instrumentation Engineers, eds. Sensor fusion--architectures, algorithms, and applications V: 18-20 April, 2001, Orlando, USA. Bellingham, Wash: SPIE, 2001.


Book chapters on the topic "Algorithmes de fusion"

1

Jøsang, Audun. "Belief Fusion." In Artificial Intelligence: Foundations, Theory, and Algorithms, 207–36. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42337-1_12.

2

Nimier, V. "Soft Sensor Management for Multisensor Tracking Algorithm." In Multisensor Fusion, 365–79. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-010-0556-2_15.

3

Wang, Dayi, Maodeng Li, Xiangyu Huang, and Xiaowen Zhang. "Estimation Fusion Algorithm." In Space Science and Technologies, 63–89. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4879-6_3.

4

Nehiwal, Jayesh, Harish Kumar Khyani, Shrawan Ram Patel, and Chandershekhar Singh. "Nuclear Fusion: Energy of Future." In Algorithms for Intelligent Systems, 295–301. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-8820-4_28.

5

Jouan, A., B. Jarry, and H. Michalska. "Tracking Closely Maneuvering Targets in Clutter with an IMM-JVC Algorithm." In Multisensor Fusion, 581–92. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-010-0556-2_27.

6

Turhan-Sayan, G. "Multi-Aspect Data Fusion Applied to Electromagnetic Target Classification using Enetic Algorithm." In Multisensor Fusion, 533–39. Dordrecht: Springer Netherlands, 2002. http://dx.doi.org/10.1007/978-94-010-0556-2_24.

7

Abdelgawad, Ahmed, and Magdy Bayoumi. "Proposed Centralized Data Fusion Algorithms." In Resource-Aware Data Fusion Algorithms for Wireless Sensor Networks, 37–57. Boston, MA: Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-1350-9_3.

8

Onoue, Y., Z. Hu, H. Iwasaki, and M. Takeichi. "A Calculational Fusion System HYLO." In Algorithmic Languages and Calculi, 76–106. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-0-387-35264-0_4.

9

van Inge, Anthony, L. O. Hertzberger, A. G. Starreveld, and F. C. A. Groen. "Algorithms on a SIMD processor array." In Multisensor Fusion for Computer Vision, 307–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-662-02957-2_18.

10

Fosbury, Adam M., John L. Crassidis, and Jemin George. "Contextual Tracking in Surface Applications: Algorithms and Design Examples." In Context-Enhanced Information Fusion, 339–79. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-28971-7_13.


Conference papers on the topic "Algorithmes de fusion"

1

Ismail, Hesham, and Balakumar Balachandran. "Feature Extraction Algorithm Fusion for SONAR Sensor Data Based Environment Mapping." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-37116.

Full text
Abstract:
Mobile platforms that make use of concurrent localization and mapping algorithms have industrial applications for autonomous inspection and maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. An important component of these algorithms is feature extraction, which involves detection of significant features that represent the environment. For example, points and lines can be used to represent features such as corners, edges, and walls. Feature extraction algorithms make use of relative position and angle data from sensor measurements gathered as the mobile vehicle traverses the environment. In this paper, sound navigation and ranging (SONAR) sensor data obtained from a mobile vehicle platform are considered for feature extraction, and related algorithms are developed and studied. In particular, different combinations of commonly used feature extraction algorithms are examined to enhance the representation of the environment. The authors fuse the Triangulation Based Fusion (TBF), Hough Transform (HT), and SONAR salient feature extraction algorithms with a clustering algorithm. It is shown that the novel algorithm fusion can be used to capture walls and corners, as well as features such as gaps in walls. This capability can be used to obtain additional information about the environment. Details of the algorithm fusion are discussed and presented along with results obtained through experiments.
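The authors' fused extractor is not reproduced here, but the idea of combining a line detector with point clustering on SONAR returns can be sketched in a few lines. In the toy Python snippet below, the functions `hough_lines` and `cluster_points`, the synthetic point cloud, and all thresholds are illustrative assumptions rather than the paper's TBF/HT/salient-feature pipeline.

```python
# Toy sketch: combine a coarse Hough-transform line detector with greedy
# point clustering on a synthetic 2-D "SONAR" point cloud.  Parameters and
# function names are assumptions made for illustration only.
import numpy as np

def hough_lines(points, n_theta=180, rho_res=0.05, min_votes=15):
    """Return (rho, theta) candidates of dominant lines via a coarse Hough vote.
    Near-parallel duplicates are not suppressed; this is a toy detector."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = points @ np.vstack((np.cos(thetas), np.sin(thetas)))   # shape (N, n_theta)
    rho_bins = np.round(rhos / rho_res).astype(int)
    lines = []
    for j, theta in enumerate(thetas):
        bins, counts = np.unique(rho_bins[:, j], return_counts=True)
        k = counts.argmax()
        if counts[k] >= min_votes:
            lines.append((bins[k] * rho_res, float(theta)))
    return lines

def cluster_points(points, radius=0.3):
    """Greedy clustering: attach each point to the first centre within `radius`."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - c["centre"]) < radius:
                c["members"].append(p)
                c["centre"] = np.mean(c["members"], axis=0)
                break
        else:
            clusters.append({"centre": p.copy(), "members": [p]})
    return clusters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wall = np.column_stack((np.linspace(0, 5, 60), np.zeros(60)))
    corner_wall = np.column_stack((np.full(40, 5.0), np.linspace(0, 3, 40)))
    points = np.vstack((wall, corner_wall)) + 0.02 * rng.normal(size=(100, 2))
    print("line candidates:", len(hough_lines(points)))
    print("point clusters: ", len(cluster_points(points)))
```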
APA, Harvard, Vancouver, ISO, and other styles
2

Abdolsamadi, Amirmahyar, Pingfeng Wang, and Prasanna Tamilselvan. "A Generic Fusion Platform of Failure Diagnostics for Resilient Engineering System Design." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47009.

Full text
Abstract:
Effective health diagnostics provides benefits such as improved safety, improved reliability, and reduced costs for the operation and maintenance of complex engineered systems. This paper presents a multi-attribute classification fusion approach which leverages the strengths provided by multiple membership classifiers to form a robust classification model for structural health diagnostics. The developed classification fusion approach conducts the health diagnostics with three primary stages: (i) fusion formulation using a k-fold cross validation model; (ii) diagnostics with multiple multi-attribute classifiers as member algorithms; and (iii) classification fusion through a weighted majority voting with dominance system. State-of-the-art classification techniques from three broad categories (i.e., supervised learning, unsupervised learning, and statistical inference) are employed as the member algorithms. The developed classification fusion approach is demonstrated with the 2008 PHM challenge problem. The developed fusion diagnostics approach outperforms any stand-alone member algorithm with better diagnostic accuracy and robustness.
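As a rough illustration of the three-stage scheme (cross-validated weighting, member diagnostics, weighted majority voting), the Python sketch below fuses three off-the-shelf scikit-learn classifiers on synthetic data. Weighting each member by its cross-validation accuracy and omitting the paper's dominance system are simplifying assumptions.

```python
# Minimal sketch of classification fusion by weighted majority voting.
# Member weights come from k-fold cross-validation accuracy; the dataset is
# synthetic and the paper's "dominance" refinement is not reproduced.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

members = [KNeighborsClassifier(5), DecisionTreeClassifier(random_state=0), GaussianNB()]

# (i) fusion formulation: weight each member by its k-fold CV accuracy
weights = np.array([cross_val_score(m, X_tr, y_tr, cv=5).mean() for m in members])

# (ii) diagnostics with each member algorithm
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in members])   # (n_members, n_test)

# (iii) weighted majority vote across members, per test sample
classes = np.unique(y_tr)
votes = np.array([[weights[preds[:, i] == c].sum() for c in classes]
                  for i in range(preds.shape[1])])
fused = classes[votes.argmax(axis=1)]

print("member accuracies:", [float((p == y_te).mean()) for p in preds])
print("fused accuracy:   ", float((fused == y_te).mean()))
```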
APA, Harvard, Vancouver, ISO, and other styles
3

Yousif, Ahmed Luay Yousif, and Mohamed Elsobky. "LIDAR Phenomenological Sensor Model: Development and Validation." In Mobility 4.0. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2023. http://dx.doi.org/10.4271/2023-01-1902.

Full text
Abstract:
<div class="section abstract"><div class="htmlview paragraph">In the rapidly evolving era of software and autonomous driving systems, there is a pressing demand for extensive validation and accelerated development. This necessity arises from the need for copious amounts of data to effectively develop and train neural network algorithms, especially for autonomous vehicles equipped with sensor suites encompassing various specialized algorithms, such as object detection, classification, and tracking. To construct a robust system, sensor data fusion plays a vital role. One approach to ensure an ample supply of data is to simulate the physical behavior of sensors within a simulation framework. This methodology guarantees redundancy, robustness, and safety by fusing the raw data from each sensor in the suite, including images, polygons, and point clouds, either on a per-sensor level or on an object level. Creating a physical simulation for a sensor is an extensive and intricate task that demands substantial computational power. Alternatively, another method involves statistically and phenomenologically modeling the sensor by simulating the behavior of the perception stack. This technique enables faster-than-real-time simulation, expediting the development process. This paper aims to elucidate the development and validation of a phenomenological LIDAR sensor model, as well as its utilization in the development of sensor fusion algorithms. By leveraging this approach, researchers can effectively simulate sensor behavior, facilitate faster development cycles, and enhance algorithmic advancements in autonomous systems.</div></div>
APA, Harvard, Vancouver, ISO, and other styles
4

Tamilselvan, Prasanna, Pingfeng Wang, and Chao Hu. "Design of a Robust Classification Fusion Platform for Structural Health Diagnostics." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12601.

Full text
Abstract:
Efficient health diagnostics provides benefits such as improved safety, improved reliability, and reduced costs for the operation and maintenance of engineered systems. This paper presents a multi-attribute classification fusion approach which leverages the strengths provided by multiple membership classifiers to form a robust classification model for structural health diagnostics. Health diagnosis using the developed approach consists of three primary steps: (i) fusion formulation using a k-fold cross validation model; (ii) diagnostics with multiple multi-attribute classifiers as member algorithms; and (iii) classification fusion through a weighted majority voting with dominance system. State-of-the-art classification techniques from three broad categories (i.e., supervised learning, unsupervised learning, and statistical inference) were employed as the member algorithms. The proposed classification fusion approach is demonstrated with a bearing health diagnostics problem. Case study results indicated that the proposed approach outperforms any stand-alone member algorithm with better diagnostic accuracy and robustness.
APA, Harvard, Vancouver, ISO, and other styles
5

Roussel, Stephane, Hemanth Porumamilla, Charles Birdsong, Peter Schuster, and Christopher Clark. "Enhanced Vehicle Identification Utilizing Sensor Fusion and Statistical Algorithms." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-12012.

Full text
Abstract:
Several studies in the area of vehicle detection and identification involve the use of probabilistic analysis and sensor fusion. While several sensors utilized for identifying vehicle presence and proximity have been researched, their effectiveness in identifying vehicle types has remained inadequate. This study presents the utilization of an ultrasonic sensor coupled with a magnetic sensor and the development of statistical algorithms to overcome this limitation. Mathematical models of both the ultrasonic and magnetic sensors were constructed to first understand the intrinsic characteristics of the individual sensors and also to provide a means of simulating the performance of the combined sensor system and to facilitate algorithm development. Preliminary algorithms that utilized this sensor fusion were developed to make inferences relating to vehicle proximity as well as type. It was noticed that while it helped alleviate the limitations of the individual sensors, the algorithm was affected by high occurrences of false positives. Also, since sensors carry only partial information about the surrounding environment and their measured quantities are partially corrupted with noise, probabilistic techniques were employed to extend the preliminary algorithms to include these sensor characteristics. These statistical techniques were utilized to reconstruct partial state information provided by the sensors and to also filter noisy measurement data. This probabilistic approach helped to effectively utilize the advantages of sensor fusion to further enhance the reliability of inferences made on vehicle identification. In summary, the study investigated the enhancement of vehicle identification through the use of sensor fusion and statistical techniques. The algorithms developed showed encouraging results in alleviating the occurrences of false positive inferences. One of the several applications of this study is in the use of ultrasonic-magnetic sensor combination for advanced traffic monitoring such as smart toll booths.
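One simple way to realize the probabilistic fusion of an ultrasonic and a magnetic measurement is a naive Bayes rule over both features, sketched below. The per-class statistics, priors, and feature definitions are invented placeholders, not the study's models or data.

```python
# Minimal sketch of probabilistic fusion of two sensor features (ultrasonic and
# magnetic) for vehicle-type inference via a naive Bayes rule.  The class-
# conditional means/variances and priors are invented for illustration only.
import numpy as np

CLASSES = {
    "car":   {"ultrasonic": (1.5, 0.2), "magnetic": (0.8, 0.3)},
    "truck": {"ultrasonic": (3.5, 0.4), "magnetic": (2.5, 0.5)},
}
PRIORS = {"car": 0.7, "truck": 0.3}

def gaussian_loglik(x, mean, std):
    return -0.5 * np.log(2 * np.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def classify(ultrasonic, magnetic):
    """Fuse both measurements: posterior is proportional to prior * p(u|c) * p(m|c)."""
    scores = {}
    for c, stats in CLASSES.items():
        scores[c] = (np.log(PRIORS[c])
                     + gaussian_loglik(ultrasonic, *stats["ultrasonic"])
                     + gaussian_loglik(magnetic, *stats["magnetic"]))
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    print(classify(ultrasonic=1.6, magnetic=0.9))   # expected: car
    print(classify(ultrasonic=3.2, magnetic=2.2))   # expected: truck
```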
APA, Harvard, Vancouver, ISO, and other styles
6

Wen, Yao-Jung, Alice M. Agogino, and Kai Goebel. "Fuzzy Validation and Fusion for Wireless Sensor Networks." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-60964.

Full text
Abstract:
Miniaturized, distributed, networked sensors — called motes — promise to be smaller, less expensive and more versatile than other sensing alternatives. While these motes may have less individual reliability, high accuracy for the overall system is still desirable. Sensor validation and fusion algorithms provide a mechanism to extract pertinent information from massively sensed data and identify incipient sensor failures. Fuzzy approaches have proven to be effective and robust in challenging sensor validation and fusion applications. The algorithm developed in this paper — called mote-FVF (fuzzy validation and fusion) — uses a fuzzy approach to define the correlation among sensor readings, assign a confidence value to each of them, and perform a fused weighted average. A sensor network implementing mote-FVF for monitoring the illuminance in a dimmable fluorescent lighting environment empirically demonstrates the timely response of the algorithm to sudden changes in normal operating conditions while correctly isolating faulty sensor readings.
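The weighted-average fusion idea behind mote-FVF can be sketched with a toy confidence function: each reading's confidence comes from a triangular membership over its deviation from the ensemble median, and the fused value is the confidence-weighted mean. The membership shape and width below are assumptions, not the published algorithm.

```python
# Minimal sketch of fuzzy sensor validation and fusion: each reading gets a
# confidence from a triangular membership over its deviation from the ensemble
# median, and the fused value is the confidence-weighted average.
# The membership width is an assumed parameter, not taken from mote-FVF.
import numpy as np

def fuzzy_confidence(readings, width=15.0):
    """Triangular membership: 1 at the median, 0 at deviations >= width."""
    deviation = np.abs(readings - np.median(readings))
    return np.clip(1.0 - deviation / width, 0.0, 1.0)

def fuse(readings, width=15.0):
    conf = fuzzy_confidence(readings, width)
    if conf.sum() == 0.0:           # all readings judged invalid
        return float(np.median(readings)), conf
    return float(np.average(readings, weights=conf)), conf

if __name__ == "__main__":
    lux = np.array([502.0, 498.0, 505.0, 640.0])    # last mote is drifting
    fused, conf = fuse(lux)
    print("confidences:", conf.round(2))             # faulty mote gets ~0 weight
    print("fused lux:  ", round(fused, 1))
```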
APA, Harvard, Vancouver, ISO, and other styles
7

Munro, Deborah S., and Munish C. Gupta. "Correlation of Strain on Instrumentation to Simulated Posterolateral Lumbar Fusion in a Sheep Model." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-65696.

Full text
Abstract:
Determining the stability and integrity of posterolateral lumbar spinal fusions continues to be one of the leading challenges facing surgeons today. Radiographs have long been the gold standard for evaluating spinal fusion, but they often give delayed or inaccurate results. It is the goal of this research to develop a new method for determining the strength and stability of a posterolateral lumbar spinal fusion using a sensor based on two strain gauges attached to a spinal rod. It was hypothesized that the spinal implants, in particular the plates or rods, would respond to this change in strain as the stiffness of the fusion increased. To investigate this hypothesis, an in vitro sheep model of the lumbar spine was developed and bony fusion was simulated with polymethylmethacrylate (PMMA), also known as bone cement. Eight sheep spines were prepared for use in a test fixture that applied a physiological moment of 5 Nm in flexion. One of the spinal rods was instrumented with two strain gauges in a Wheatstone half bridge, and all of the results were ported directly to a data collection system on a dynamic fatigue test machine. For each spine, the magnitude of the strain was plotted versus amount of simulated fusion. To evaluate the effect of simulated fusion, a moving average of the slope of three sequential strain values was used. The results showed there is a strong correlation between strain and spinal fusion and that a computer algorithm could be developed that would be more accurate than current techniques in predicting when a spinal fusion is mature enough for a patient to resume normal activities.
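The stiffness metric mentioned in the abstract, a moving average of the slope of three sequential strain values, reduces to a short computation; the sketch below applies it to made-up strain readings, so the numbers, units, and any implied threshold are purely illustrative.

```python
# Minimal sketch of the metric described above: the slope between sequential
# strain readings, smoothed with a three-point moving average.  The strain
# values below are made up; units and thresholds are not from the study.
import numpy as np

def moving_average_slope(strain, window=3):
    slopes = np.diff(strain)                  # strain change per simulated fusion step
    kernel = np.ones(window) / window
    return np.convolve(slopes, kernel, mode="valid")

if __name__ == "__main__":
    # hypothetical microstrain readings as simulated fusion stiffens the segment
    strain = np.array([410.0, 395.0, 372.0, 355.0, 341.0, 333.0, 329.0, 327.0])
    print(moving_average_slope(strain).round(1))
    # slope magnitude shrinking toward zero would indicate a maturing fusion
```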
APA, Harvard, Vancouver, ISO, and other styles
8

Skonnikov, Petr Nikolaevich. "Comparative Analysis of Image Fusion Techniques." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-449-454.

Full text
Abstract:
The relevance of the multispectral image fusion problem to search and rescue operations is shown. Well-known multispectral image fusion algorithms are reviewed and implemented. The comparison covers algorithms based on averaging, the maximum method, analysis of low- and high-frequency components, assessment of information content, addition of differences, extraction of local contrasts, the Laplacian pyramid, the wavelet transform, principal component analysis, a 3D low-pass filter, power transformation, TV-channel priority, Pytyev morphology, diffuse morphology, and local weighted summation. From publicly available multispectral image datasets, a combined database of 496 image pairs was compiled to compare the algorithms. The aim of the work is to compare these well-known image fusion algorithms in terms of an objective, combined quality metric. Based on the comparison results, the authors conclude that the best values of the combined quality metric for multispectral image fusion are achieved by the algorithms based on local weighted summation, principal component analysis, and the Laplacian pyramid.
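One of the simpler rules compared, local weighted summation, can be sketched as a per-pixel blend driven by local variance as a crude saliency measure. The window size and the variance-based weighting below are assumptions and not necessarily the exact formulation evaluated in the paper.

```python
# Minimal sketch of local weighted summation for two-channel image fusion:
# each pixel is blended according to the local variance (a crude saliency
# measure) of each input channel.  Window size and the variance weighting are
# assumptions, not the formulation evaluated in the paper.
import numpy as np

def local_variance(img, win=5):
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

def fuse_local_weights(visible, infrared, win=5, eps=1e-6):
    w_v = local_variance(visible, win)
    w_i = local_variance(infrared, win)
    return (w_v * visible + w_i * infrared) / (w_v + w_i + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vis = rng.random((32, 32))     # stand-ins for co-registered channels
    ir = rng.random((32, 32))
    fused = fuse_local_weights(vis, ir)
    print(fused.shape, float(fused.min()), float(fused.max()))
```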
APA, Harvard, Vancouver, ISO, and other styles
9

Bather, J. "Tracking and data fusion." In IEE International Seminar Target Tracking: Algorithms and Applications. IEE, 2001. http://dx.doi.org/10.1049/ic:20010234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ma, Liyao, Bin Sun, and Chunyan Han. "Training Instance Random Sampling Based Evidential Classification Forest Algorithms." In 2018 International Conference on Information Fusion (FUSION). IEEE, 2018. http://dx.doi.org/10.23919/icif.2018.8455427.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Algorithmes de fusion"

1

Meyer, David, and Jeffrey Remmel. Distributed Algorithms for Sensor Fusion. Fort Belvoir, VA: Defense Technical Information Center, October 2002. http://dx.doi.org/10.21236/ada415039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yocky, D. A., M. D. Chadwick, S. P. Goudy, and D. K. Johnson. Multisensor data fusion algorithm development. Office of Scientific and Technical Information (OSTI), December 1995. http://dx.doi.org/10.2172/172138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pao, Lucy Y. Distributed Multisensor Fusion Algorithms for Tracking Applications. Fort Belvoir, VA: Defense Technical Information Center, May 2000. http://dx.doi.org/10.21236/ada377900.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bassu, Devasis. Fast Multiscale Algorithms for Information Representation and Fusion. Fort Belvoir, VA: Defense Technical Information Center, April 2013. http://dx.doi.org/10.21236/ada608426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bassu, Devasis. Fast Multiscale Algorithms for Information Representation and Fusion. Fort Belvoir, VA: Defense Technical Information Center, January 2011. http://dx.doi.org/10.21236/ada538312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Varshney, Pramod K., Chilukuri K. Mohan, and Krishan G. Mehrotra. Adaptive Models and Fusion Algorithms for Information Exploitation. Fort Belvoir, VA: Defense Technical Information Center, May 2009. http://dx.doi.org/10.21236/ada516533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bassu, Devasis. Fast Multiscale Algorithms for Information Representation and Fusion. Fort Belvoir, VA: Defense Technical Information Center, July 2012. http://dx.doi.org/10.21236/ada565467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bassu, Devasis. Fast Multiscale Algorithms for Information Representation and Fusion. Fort Belvoir, VA: Defense Technical Information Center, October 2012. http://dx.doi.org/10.21236/ada570238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bassu, Devasis. Fast Multiscale Algorithms for Information Representation and Fusion. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada574842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

DeVore, Ronald A. New Theory and Algorithms for Scalable Data Fusion. Fort Belvoir, VA: Defense Technical Information Center, June 2013. http://dx.doi.org/10.21236/ada587535.

Full text
APA, Harvard, Vancouver, ISO, and other styles